Dataset schema (one line per column: name, type, observed length range or values):

- paperhash: string, length 40
- s2_corpus_id: string, length 3 to 9
- arxiv_id: string, 0 values (empty in all rows)
- title: string, length 7 to 324
- abstract: string, length 0 to 7.23k
- authors: sequence
- summary: string, 0 values (empty in all rows)
- field_of_study: sequence
- venue: string, length 15 to 253
- publication_date: date, 1952-06-01 to 2019-07-01
- n_references: int32, 0 to 4.92k
- n_citations: int32, 0 to 84.2k
- n_influential_citations: int32
- introduction: string, length 15 to 173k
- background: string, length 2 to 115k
- methodology: string, length 40 to 140k
- experiments_results: string, length 1 to 142k
- conclusion: string, length 7 to 38k
- full_text: string, length 29 to 195k
- decision: bool, 0 classes (empty in all rows)
- decision_text: string, 0 values (empty in all rows)
- reviews: sequence
- comments: sequence
- references: sequence
- hypothesis: string, length 105 to 1.27k
- month_since_publication: int32, 67 to 872
- avg_citations_per_month: float32, 0 to 1.24k
- mean_score: float32
- mean_confidence: float32
- mean_novelty: float32
- mean_correctness: float32
- mean_clarity: float32
- mean_impact: float32
- mean_reproducibility: float32
- openreview_submission_id: string, 0 values (empty in all rows)
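For readers who want to work with records of this shape, a minimal loading sketch follows. The Hub identifier `example/mt-conference-papers` is a placeholder (this card does not name the dataset), and the column names are the ones documented above.

```python
# Minimal sketch: load a dataset with the schema above and inspect it.
# NOTE: "example/mt-conference-papers" is a placeholder id, not the real one.
from datasets import load_dataset

ds = load_dataset("example/mt-conference-papers", split="train")

# Look at one record through the documented columns.
row = ds[0]
print(row["title"], "-", row["publication_date"])
print(row["n_citations"], "citations in", row["month_since_publication"], "months")

# Keep only records that have been cited at least once.
cited = ds.filter(lambda r: (r["n_citations"] or 0) > 0)
print(len(cited), "of", len(ds), "records have citations")
```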
Record 1
paperhash: 83f0fd6a9a3ff55996987777073285d00a8962d1
s2_corpus_id: 244077614
arxiv_id: null
title: Present interest in mechanical translation
abstract: On November 30, 1950, WFL sent out a letter of inquiry on the subject of Mechanical Translation to various men in the field. Their answers are now in, and the following is WFL's attempt at summarizing the present status of this problem as it is being tackled in both Europe and America. I. Men who are actually doing something, roughly in order of their activity.
authors: Loomis, W. F. (affiliation: null)
summary, field_of_study: null
venue: Proceedings of the Conference on Mechanical Translation
publication_date: 1952-06-01
n_references: 0
n_citations: 0
n_influential_citations: null
introduction:

Dr. R. H. Richens, Institute of Agricultural Genetics, Cambridge, England: Booth writes that Richens' approach deals mainly with dictionary translation plus explanation, which enables account to be taken of word endings in accordance with standard grammar also contained in the dictionary. He says that Richens is the most notable worker in MT in England. Calvin Mooers of the Zator Company writes that when he was recently in England he was told by Mr. R. A. Fairthorn of the Royal Aircraft Establishment, Farnborough, Hampshire, England, that Dr. Richens had actually operated a tabulating machine to do translation by printing the multiple equivalents.

J. D. Williams, RAND Corporation: Williams writes that before tackling any hardware experiments they decided to survey the field in a series of small studies. The first of these was entitled "An Experimental Study of Ambiguity and Context," by Abraham Kaplan, which they sent to us. At present they have no money for this work, as they feel such funds cannot rightly come from the Government but must come from an outside agency.
background, methodology, experiments_results: null
conclusion:

Huskey writes that he is interested in running pilot tests concerning MT on their SWAC. This machine has an internal memory of 256 words at present, which is being enlarged to 8,000 with a magnetic drum and even 100,000 with a magnetic tape unit. SWAC was not designed for non-numerical work, but Huskey feels that it will be useful for preliminary testing. He writes that formal work on this project stopped last September as they ran out of money. He has sparked both the Departments of Spanish and German at U.C.L.A. into making some paper-and-pencil studies of vocabulary ratios and syntax translation problems. The former studies, on vocabulary ratios, are being made by William E. Bull of the Department of Spanish, who has subsequently submitted a $2,000 request for a project to cover a full-time research assistant for eight months to complete the analysis. Victor A. Oswald, Jr., of the Department of German, has sent us a manuscript entitled "Proposals for the Mechanical Resolution of German Syntax Patterns," which indicates that syntax problems can be solved by using a numerical code to identify syntax functions and by employing mechanical routines resolving foreign syntax patterns into English ones. J. D. Williams has written us that Huskey is particularly interested in the hardware aspects of MT and is looking for jobs for his SWAC. He states that the above groups are starving and almost limited by the supply of paper and pencil.

Dr. A. D. Booth, Birkbeck College, London: Booth writes that he is primarily interested at present in codifying words so as to utilize memory space most advantageously. Donald MacKay writes that Booth hopes to mechanize a dictionary by electronic means.
decision, decision_text, reviews, comments: null
references: (empty list)
hypothesis: null
month_since_publication: 872
avg_citations_per_month: 0
mean_score, mean_confidence, mean_novelty, mean_correctness, mean_clarity, mean_impact, mean_reproducibility, openreview_submission_id: null
Record 2
paperhash: 5c9910a11e5ee5212ecf09e122e37da10c36a47d
s2_corpus_id: 19613776
arxiv_id: null
title: Problems of vocabulary frequency and distribution
{ "name": [ "Bull, William E." ], "affiliation": [ null ] }
null
null
Proceedings of the Conference on Mechanical Translation
1952-06-01
0
1
null
introduction:

PART I: Introduction: I assumed in preparing this report that this group would be more interested in conclusions and operational facts than in the procedures by which such information was obtained. To save valuable time for discussion, I shall make a few introductory and rather categorical statements especially pertinent to frequency problems in linguistics and to mechanical translation. If I sound dogmatic, the impression should be attributed to haste rather than intention.

I shall begin by exposing four major fallacies which are current in most discussions of word frequencies:

(I) The traditional vocabulary frequency studies, with which you are all familiar, are not primarily linguistic investigations. Neither the number of nouns in the Oxford Dictionary nor the frequency with which any English noun is used is a basic linguistic fact. The total number of English nouns is a manifestation of the technological and cultural advancement of speakers of English (the language would be neither more English nor less English with an increase or decrease in the number of nouns), and the frequency of a noun like aspirin, for example, is simply a reflection of the headache rate of these speakers. Frequency studies of vocabulary are, consequently, not primarily language studies; they are investigations of human activities.

Our conceptualization of the entire frequency problem is one thing if we ask "What is the frequency of 'vector' in the English language?" (a false linguistic frame of reference), and something quite different if we ask "What proportion of the total population, at what time intervals, has use for the word 'vector'?" Man, if he talks at all, always talks about something specific, and what word counters are trying to find out is what he talks about most, that is, how he distributes his time among all the possible things he might talk about.

This leads us to the second fallacy. (II) If we are actually investigating, in frequency counts, the specific verbal activities of real people, every utterance has space-time coordinates; that is, every speaker talks somewhere at some time. (Printed material is a fossil of this activity and becomes, as a result, ambiguous in space-time.) Now space and time, as elements of objective reality, determine human activity and, consequently, the frequency of word usage. The frequency potential of all words, then, depends upon the distribution of population in space and time. For example, the frequency of the word "rain" is undoubtedly much higher here in New England than in Southern California, first, because there are more people to use the word and, second, because it rains here more often. The total frequency of "rain" for these two regions represents neither area nor reality, and is obviously not a significant linguistic fact. There exists, if this principle is extended, no uniform vocabulary frequency potential for the language, and an average is meaningless for any specific purpose.

(III) There does not exist, nor can there be devised, a scientific method of sampling which will reveal anything reliable about word frequencies in a language as a whole. Actual speech (writing, etc.) has a linear structure. What I am saying now is coming at you word by word, serially, on a time line. A number of random segments of such linear speech cannot be welded together into a composite line which will represent any reality. A set of such examples is not even a satisfactory report on the specific material sampled.
On an absurdly simple level, a linear weld of this kind produces something like the following three-piece composite: "The special significance of vector analysis - in all Congregational Church socials - causes most hens to produce twice the normal number of eggs." The distribution of lexical items along this linear compound cannot possibly provide useful information about any extended segment of each compounding sample. Distribution and frequency have meaning only in terms of a homogeneous whole, which is, theoretically, a non-existent entity in actual speech.

(IV) The 80 per cent fallacy deserves special attention. It has been demonstrated by numerous word counts that a few hundred words make up some 80 per cent of all the running words found in the counts. It has been concluded, as a result, that you can say almost everything you want to say with a very small vocabulary. It has even been said that the average American uses only some 500 words per day. Both the facts and the conclusions drawn from these counts are non-scientific and relatively meaningless. Let us take a simple example. James Joyce's Ulysses has 260,430 running words. Approximately 1000 words make up 80 per cent of this total. There are, however, 29,899 different words in the novel. Consequently, while 1000 words take care of 80 per cent by volume of what Joyce wrote, they actually represent, if we assume that he intended every different word to be meaningful, only 3.3 per cent of what he said. The word counters have been, obviously, misled into the belief that quantity and quality are identical. By such logic we should have to contend that once we have bought the nails we practically own a house since, after all, there are more nails in a house than anything else.

Preliminary conclusions relevant to MT: There exists no scientific method of establishing a limited vocabulary which will translate any predictable percentage of the content (not the volume) of heterogeneous material. An all-purpose mechanical memory will have to contain something approaching the total available vocabulary of both the foreign language and the target language. In order to cover most semantic variations, several million items would be needed. At the present time we have no machine which can manage such a number at a profitable speed.

PART II: A statistical analysis of arbitrarily selected samples (segments) of various types of discourse does reveal a number of facts which are important to problems of mechanical translation. I should like now to show you a number of slides which represent the results of an analysis of 60 samples of about 500 words each taken from contemporary Spanish.

(A) Internal ratios. Language as a structural system is statistically closed; that is, any finite sample will exhibit a ratio pattern of the various parts of speech. Some notion of the volume of each part of speech will be essential to the spatial arrangement of vocabulary items in a mechanical memory; that is, the contact rate of each part of speech will determine whether its vocabulary should be in the high or slow speed memory of any multiple speed machine.

Slide 1: parts of speech in which form and function are identical; not all of total vocabulary covered.
Observations: (a) Ratios are not determined by content, that is, subject matter, but by types of discourse: dialogue, expository prose, narration, description, etc. (b) The frequency of specific items in any part of speech is strongly conditioned by the total ratio of the part of speech and the total available items of that part of speech in the lexicon of the language. Although nouns make up the largest percentage of items in most of the samples, the fact that some 200,000 are potentially available means that no single noun can build up a high frequency. This suggests that even when dealing with highly specialized topics, no predictable limit can be established for the content-bearing vocabulary, simply because the low incidence of individual items would require an exhaustive search which is highly impractical. (c) Three major patterns are to be observed: noun, definite article, and preposition behave alike. Pronouns, verbs, and adverbs are in reverse complementary distribution. The conjunctions, indefinite article, and adjectives are independent of the ratios of the contrasting constellations.

Slide 2: all major parts of speech, items classified entirely by function. Conjunction is now the only individualistic class. Two major constellations which reveal patterns determined by types of discourse.

General conclusion: Treatment of vocabulary will probably be most satisfactory for mechanical translation if dealt with in terms of types of discourse.

(B) Frequency ratios: The speed and efficiency of mechanical translation depend, vocabulary-wise, on the number of times specific lexical items are repeated in the text. To demonstrate the problem I shall show you three slides giving the ratio distribution of the noun, verb, and adjective (instances where form and function are identical) for the 60 samples. Observe the following points: (1) Words which appear twice or three times in the samples are relatively stable. The singlettes and high frequency words are in complementary distribution and make up the major portions of most of the samples. (2) Ratios are not identical for the different parts of speech. (3) Several factors appear to determine ratios: potential vocabulary of the writer (children's short stories), restricted topic of discourse (physics), type of discourse (dialogue, etc.).

General observations: (1) The suggestion that words which appear only once may be ignored in MT does not appear profitable. The singlettes make up too large a portion of discourse. (2) Planned omissions should be analyzed in terms of the part of speech. (3) High frequency words are dominant and may be presumed to determine the main topic or theme of discourse. The low frequency words, in contrast, are most critical, since they represent what the writer is saying about the core topic.

(C) Frequency and distribution of the various parts of speech, by function. Preliminary observations: Aside from the general fact that all linguistic data plot out in the form of a parabolic curve, the various parts of speech behave statistically in quite different fashions: (1) The degree of incline of the parabolic curve is (except for proper names, formulae, etc.) in proportion to the number of lexical items available in the language; that is, the frequency and distribution reach 1 (or the lowest possible minimum) sooner, for example, in the case of adverbs than nouns, since there are fewer of the former than the latter.
Slides: Articles, Pronouns, Adverb 1, Verb 1, Adjective 1, Noun 1. By extending this principle to the problem of micro-vocabularies we may predict, with reasonable assurance, that there should be found in all highly technical and restricted fields a high frequency of a few words and an extremely large number of single words in typical samples. This explains, in part at least, why Oswald found that a relatively few nouns cover nearly 90 per cent of the vocabulary in brain surgery in German. Whenever the total available number of lexical items is small, repetition must increase or discourse cannot be sustained.

The facts just demonstrated also provide a general answer to the question of the feasibility of micro-vocabularies and throw some light on the problem of not translating rare words. The less specialized the field, the larger the number of available words, the lower the frequency of the most common and, consequently, the greater the semantic importance of rare words. The present paper, for example, deals with a highly technical field in linguistics but draws upon such a wide and unpredictable vocabulary that no micro-vocabulary based on previous articles on word frequency would provide a satisfactory translation. I shall cite a few critical words (those that cannot be omitted without serious distortion of the sense) to demonstrate the point. Notice, also, that you cannot determine the topic of discourse from this list: coordinate, aspirin, technological, headache, rain, population, linear, weld, slide, church, machine, hen, constellation, singlette, egg, children, physics, core.

A micro-vocabulary appears feasible only if one is dealing with a micro-subject, a field in which the number of objective entities and the number of possible actions are extremely limited. The number of such fields is, probably, insignificant.

(2) The non-content-bearing parts of speech (articles, prepositions, conjunctions, adverb 2, relative, demonstrative, and indefinite pronouns) exhibit an extremely high degree of correlation between frequency and distribution. 3 slides: articles, pronouns, adverbs. Within the same part of speech the correlation between frequency and distribution increases as the referential value decreases. 2 slides: Adjective 1; Adjective 2, 3, etc. The possessive, demonstrative, and pronoun adjectives show a much higher correlation than adjectives which refer to the nature of reality. These adjectives are, so to speak, indifferent to the subject of discourse, while fat, lateral, huge, etc. are restricted to certain subjects.

High correlation appears in other parts of speech only at the tail of the parabola, where it is, for obvious reasons, insignificant, and at the head, where it indicates that all high frequency words are non-specific, operational, and non-indicative of the subject of discourse. 2 slides: Noun 1, Verb 1. This principle may be expected to show some significant variations when applied to restricted fields, what I have called micro-subjects. In specific examples of discourse, high frequency content-bearing words commonly outnumber the parallel high frequency words (of the same part of speech) in the general language. They define the main topic, for example, the way frequency, word, speech, distribution, noun, etc. define the subject of this paper, but, curiously enough, these words do not tell us what is being said about the topic.
This fact establishes a principle which cannot be overstressed in dealing with vocabulary problems in mechanical translation, namely, that the rarer words carry the significant and critical message in most extended communications. The tail of the parabola is what makes one article on brain surgery different from another.

(3) The middle range of the parabola for all content-bearing words exhibits a low correlation between frequency and distribution. The actual degree of divergence depends upon the general semantic function of the various parts of speech and their potential descriptive range or combinatory power. Modifiers have a wider distribution potential than head words, and verbs more than nouns. 4 slides: Noun 1, Adjective 1, Verb 1, Adverb 1. This confirms a principle which has been much debated in structural linguistics, namely, that the noun is the core word in communication.

We have now established a hierarchy of the parts of speech which should provide an operational principle in the preparation of vocabulary lists for a machine memory. The value of micro-vocabularies depends directly upon the function of the part of speech and the total number of available words. It may be predicted that, as the degree of correlation between frequency and distribution increases, a larger percentage of the total available vocabulary for such parts of speech will have to be included in any micro-vocabulary. Thus, to pick up Oswald's problem again, a micro-vocabulary for brain surgery will require less than 1 per cent of the available nouns in German but probably 100 per cent of the secondary adverbs. If it seems valuable, further research could presumably define rather accurately the percentage of the total vocabulary for each part of speech normally required to carry on discourse in any well-defined field.

Conclusion: The present data also point to another division of vocabulary which I should like to discuss in the way of conclusion. Vocabulary appears to fall into three major classes: (1) words which are primarily indifferent to the subject of discourse and which will be indispensable for any type of translation; (2) words which define the theme or topic of discourse, which cluster in somewhat predictable constellations, and which appear with especially high frequency within specialized fields; (3) words which provide the running commentaries upon the theme or topic of discourse, which appear with very low frequencies, and which do not tend to cluster. These words make up the tail of the parabola and are not amenable to precise prediction, since they represent the potential associations which every speaker may establish between his topic and the infinite universe. They are the bridge between the closed system of structural vocabulary, the restricted vocabulary of the specialties, and the cosmic reality within which the language system and the speciality exist and operate. To presuppose that such a vocabulary can be defined and limited requires the assumption that knowledge has reached its maximum potential and that man will discover no new and hitherto unknown associations between departments of knowledge.

The limitations of machine translation which we must face are, vocabulary-wise, the inadequacy of a closed and rigid system operating as the medium of translation within an ever-expanding, open continuum.

Special mention must be made of the contributions of Harry Huskey and Charles Africa to the preparation of this paper.
Mr. Huskey has made many valuable suggestions, and his staff has done the graphs and slides. Mr. Africa has done almost all of the actual counting and the preparation of the raw data.
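Bull's Ulysses arithmetic (1,000 types covering 80 per cent of 260,430 running words, yet amounting to only 3.3 per cent of the 29,899 distinct words) is easy to reproduce on any token list. A minimal sketch; the function name and the toy data are mine, not Bull's:

```python
from collections import Counter

def coverage_stats(tokens, volume_target=0.80):
    """Return how many of the most frequent types are needed to cover
    volume_target of the running words, and the share of the total
    vocabulary those types represent (Bull's "80 per cent fallacy")."""
    counts = Counter(tokens)
    total = sum(counts.values())
    covered = 0
    for k, (_, freq) in enumerate(counts.most_common(), start=1):
        covered += freq
        if covered / total >= volume_target:
            return k, k / len(counts)
    return len(counts), 1.0

# Toy run on a tiny sample; on Ulysses-scale data the gap is dramatic:
# 1000 covering types out of 29,899 distinct words is about 3.3%.
k, share = coverage_stats("the cat sat on the mat and the dog sat too".split())
print(k, f"types cover 80% of the tokens, i.e. {share:.0%} of the vocabulary")
print(f"Ulysses: {1000 / 29899:.1%} of the vocabulary")  # 3.3%
```

Run per 500-word sample rather than over one long text, the same Counter would also yield the frequency-versus-distribution figures behind the Part II slides.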
background, methodology, experiments_results, conclusion: null
decision, decision_text, reviews, comments: null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
872
0.001147
mean_score, mean_confidence, mean_novelty, mean_correctness, mean_clarity, mean_impact, mean_reproducibility, openreview_submission_id: null
Record 3
paperhash: 7e4ed27ac1f48d1493c0f3e39474767c9bb311d9
s2_corpus_id: 244077686
arxiv_id: null
title: The structure of the problem of mechanical translation
abstract: The problem of mechanical translation has three principal components: (i) the formulation of a set of specifications for the objective to be attained; (ii) the design of a translating machine; and (iii) theoretical translation problems.
{ "name": [ "Helmer, Olaf" ], "affiliation": [ null ] }
null
null
Proceedings of the Conference on Mechanical Translation
1952-06-01
0
0
null
introduction, background, methodology, experiments_results: null
conclusion:

Decisions under (i) will have to be made in consideration of the purpose to which the translation output will be put. The prime distinction here is whether the emphasis is placed on an accurate transmittal of the cognitive content of the input or rather on a faithful rendering of the style and emotional content. In the former case one may want to require a strict logical equivalence of input and output, or one may be satisfied with a certain (high) degree of message equivalence, in which case it would be desirable to have a criterion by which it is possible to decide which of two proposed translation schemes is to be considered preferable.

As for (ii), it appears that high-speed general-purpose computing machines will be able to handle the main translation task. Engineering design therefore should concentrate on ancillary devices designed to reduce the input and output bottlenecks. In particular, an automatic reading machine would be necessary for fast mass translation. In addition, radical devices for reducing the input and output volume of the translation machine might be considered, such as automatic pre- or post-translation abstracting methods.

The theoretical translation problems, (iii), divide themselves into syntactical and semantic ones. The former are concerned with word and sentence structure, the latter with the meanings of words and phrases. Both probably require separate translation procedures. Syntactical problems, in principle, can be handled by a logical analysis of the languages in question, while semantic problems lead into questions of word frequency, optimal dictionary size, and the dependence of ambiguity of meaning on the context.
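Helmer's insistence that syntactic and semantic problems "probably require separate translation procedures" can be stated as a pipeline skeleton. This is only a schematic sketch of the decomposition he describes; the function names and the toy glossary (the Menge example is borrowed from Oswald's paper later in these proceedings) are illustrative assumptions, not anything given in the paper.

```python
# Schematic sketch of component (iii): separate syntactic and semantic passes.

def syntactic_analysis(sentence: str) -> list[str]:
    # Word and sentence structure; reduced here to trivial tokenization.
    return sentence.split()

def semantic_transfer(words: list[str], glossary: dict[str, list[str]]) -> list[str]:
    # Word and phrase meanings; unresolved ambiguity surfaces as a list of
    # equivalents, which is where dictionary size and context enter.
    return ["/".join(glossary.get(w.lower(), [w])) for w in words]

toy_glossary = {"menge": ["quantity", "set", "crowd"], "haus": ["house"]}
print(semantic_transfer(syntactic_analysis("Menge"), toy_glossary))
# ['quantity/set/crowd'] -- the ambiguity that a criterion of message
# equivalence, as Helmer suggests under (i), would have to adjudicate.
```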
decision, decision_text, reviews, comments: null
references: (empty list)
hypothesis: null
month_since_publication: 872
avg_citations_per_month: 0
mean_score, mean_confidence, mean_novelty, mean_correctness, mean_clarity, mean_impact, mean_reproducibility, openreview_submission_id: null
Record 4
paperhash: 3a02ddf01c8742cca94e8470f135d35f56ae1941
s2_corpus_id: 244077675
arxiv_id: null
title: Word-by-word translation
authors: Oswald, Victor A. (affiliation: null)
summary, field_of_study: null
venue: Proceedings of the Conference on Mechanical Translation
publication_date: 1952-06-01
n_references: 0
n_citations: 3
n_influential_citations: null

When I learned that I had been summoned to address myself to the topic of word-by-word translation I felt like a geographer invited to discuss the utility of the conception that the world is flat. In short, I can only say that word-by-word translation is not possible, if we are to understand by the term a wordwise transverbalization from one language into another, particularly from German into English. It was, indeed, the discovery of the impossibility of wordwise translation that prompted the syntactical investigation outlined in Proposals for the Mechanical Resolution of German Syntax Patterns. The research of the summer of 1950 was begun by translating word by word into English various German texts in the field of mathematics. Our efforts rapidly came to grief, chiefly because of the lamentable fact that the German "articles" are also "words", but words of Protean transformations when carried over into English. Take the harmless-looking little form der. A word-by-word transverbalization into English would require, to be complete, a listing of the following possibilities: "the" (der Mann); "of the" (der Frau or der Frauen); "to the, for the" (also der Frau); "he" (der kommt nicht); "her, to her" (der geb' ich's nie); "who" (der Mann, der kommt ...); "whom" (die Frau, der ich es gab) - and diverse other, more subtle variants. The other forms of the "article" almost all require equally complex transverbalizations. When to circumstances such as these there is added the distressing oddity of German word order, word-by-word translation from German to English becomes either a jest or a horror. To be quite sure of the impossibility of wordwise translation, we concocted diverse multiple-choice translations - primitive ancestors of Mr. Bar-Hillel's appendix to "The Present State of Research on MT" - and these translations we submitted to our mathematical colleagues, who could make no sense out of the gibberish and were thereby confirmed in their conviction that people who dabbled with MT were crackpots. In the appendix you can consult one of the less horrific samples that I found in my files and see for yourselves how tedious it is even to try to make one's way through such a transverbalization maze. The happier results that can be obtained from block-by-block transverbalization, in which process problems of syntactic ambiguity are solved by the connection of each syntactic segment to the other, and the fluid German word order is resolved into a rigid English sequence, are, I take it, familiar to you from the Proposals. I think there can no longer be any doubt that the data of syntactical connection require us henceforth to think in terms not of word-by-word, but of block-by-block translation. At this point I think it will be in order to defend myself briefly from the charge of having failed to realize the importance of producing "one sequential system" of syntactical operations. Quite on the contrary, I have always been of the conviction that such a system is a sine-qua-non of MT. I fear that my critic has "mistook me all this while," and that he, as a theoretician, has misunderstood my pragmatism. The fact is that there can be only theoretical fascination, but no actual usefulness, in any complete system that is not devised for a specific mechanism and for a specific purpose. All sorts of complete sequential systems are possible, but we cannot determine which one will be most desirable for MT until we decide how MT is to be accomplished - what functions in the process are to be assigned to machines, what are to be left to human operators, and above all, in what fashion the vocabulary of the FL is to be transferred to the TL. There is no a priori "operational syntax." There are only operational syntaxes, whose elaboration will vary as the deviser's presuppositions and aims vary. Take Pollard's system, for instance, which I admire with more reserve than some do. It is "complete",
to be sure, but it presupposes a human translator with a grasp of what we vaguely call "the elements" of German, a translator who is, for instance, equipped to find his way among multiple choices of the sort I outlined above. Pollard's rule 1 is that when a noun (identifiable by capitalization) occurs on a "break" (comma, period, etc.) it is possible to translate word-by-word from the beginning of the sentence to the break in question. It is possible, that is, if the translator can recognize the patterns of syntactic connection and knows, or can establish, the significance of the meaning-bearing words -a task which, unfortunately, our poor machines, without benefit of "elements," are incapable of performing. All that the Proposals intended to demonstrate was that machines can be instructed to recognize syntactic connection. Any complete system will have to be devised to meet the operations of some specific machine or of some specific combination of man and machine.Meanwhile, we must investigate the other sine-qua-non for block-by-block translation: the problem of the interpretation of the meaning-bearing words. Syntactic connection will almost infallibly identify word-function, and we now know that a recognition of syntactic connection can be built into the "memory" of machines of the high speed computer type. Word-meaning, on the other hand, is not a factor capable of being solved mechanically except by an elaborate reduction of the possibilities of multiple significance: that is by the production of a large -possibly very large -number of glossaries that pertain to one radically limited field of discourse.I do not think we should discard the possibility of a mechanical solution of the problem of multiple meaning until we have explored it more carefully than has hitherto been proposed. I greatly admire Mr. Reifler's conception of an FL pre-editor -as, indeed, I admire all of his ingenious proposals. But I do not believe that his combination of pre-editor with a mechanical dictionary constitutes the ultimate solution of our problem. In fact, I am of the opinion that we must grapple with the problem precisely at the point where Mr. Reifler abandons it. His proposals are most enlightening for the solution of problems of general language, but he has excluded problems of specific language (the jargons of medicine, mathematics, linguistics, geology etc.) from the domain of mechanical solution. We shall be much closer to the realization of mechanical translation, if we can mechanize the components of his "mechanized" dictionary.Mr. Bull's counts of function frequency and distribution, the purpose of which we have apparently failed to make completely clear, have produced fascinating results on which he will himself report. For my purpose it is enough to point out that he has demonstrated empirically something we have hitherto had to assume on the basis of impressionistic observation: the only meaning-bearing forms that we can purposefully isolate are nouns, verbs, adjectives, and -probably -adverbs. All the other "words" when transverbalized from one language into another are susceptible of such diverse interpretation that they must either, like the German article, be treated primarily as elements of syntactic connection, or else, like prepositions, be transverbalized in a multiplicity of meanings from among which an editor is expected to make his choice.I shall have something to say later about the microsemantics of nouns, the only set of the meaning-bearing group I have studied. 
For the moment, let us take a generalized view of the problem. The point must be made that no system as yet proposed will solve the problem of multiple significance. A pre-editor can do much to simplify syntactic connection for mechanical "digestion," but I do not see how, as an operator in the FL, he can effectively guide either the machine or the machine and a post-editor through the mazes of multiple meaning in the TL. Nor do I think we can hope for much accurate help from one monolingual post-editor or even from one bilingual consultant. What has been overlooked is the fact that the competence required in the post-editor, even if he be bilingual, is only partially linguistic. The real prerequisite for him is an intimate knowledge of the field to which the translated text pertains.

Let us take the case of a hypothetical bilingual post-editor. He may have a perfect competence in translating literary German into English; he may have a working knowledge of geology, let us say, or of chemistry, or of anatomy, or of zoology, or - in an exceptional case - of all of these fields; but unless he knows mathematics well he is never going to decide that Menge in certain contexts must be translated "set" and not "quantity" or "multitude" or "mass" or "crowd" or anything of the like. I am afraid that the appendix to "The Present State of Research in MT" is apt to be misleading. Although the text is apparently specific, it is actually in the nature of a passage from "General Science." Given the one key word "microscope," the remainder of the words fall rather readily into relationship to it. Fortunately, almost any educated editor could be expected to know what a microscope is and what sort of functions are expected of it. On the other hand, if the key word were the name of some less generally familiar device - let us say an oscilloscope or a kymograph (as used in phonetics laboratories) - only an editor familiar with the gadget and its uses could surmise what functions it might be expected to perform. Some of us have seen bilingual experts trying to make just the sort of interpretation proposed for a bilingual editor: professors of German deciphering the efforts made by graduate students to translate material in their field for the purpose of passing a language-proficiency requirement. The results are all too often simply ludicrous. One of my colleagues almost flunked a mathematics student for translating Eigenwerte by the apparently preposterous form "eigenvalues," though this is, nevertheless, the proper English equivalent of the German original in a mathematical context. The fact that the bilingual professor spoke German like a native and knew zoology quite as well as he knew Goethe contributed, you see, nothing to his interpretation of this particular translated text.

In short, no one post-editor, not even a bilingual - unless he were a marvel of universal knowledge, in which case he would probably have something better to do than tinker with other people's texts - would be capable of solving the problem of multiple significance in the TL. A monolingual specialist in the particular field would unquestionably do far better.
But in that case our process of MT would require a whole battery of monolingual experts, each of whom, if he would work on MT at all, would have to be taught the techniques appropriate to our operation, and each of whom would be susceptible to all the ills that flesh is heir to: human fallibility, death, illness, vacations, and better offers.

Before we surrender our mechanistic autonomy, I suggest that we thoroughly explore the possibility of substituting for specialists mechanized special micro-glossaries - glossaries which will reduce the range of choice of meaning from a bewildering multiplicity to a matter of - at the most - two or three. In fact, if the field be radically limited, we can probably produce bilingual glossaries with a preponderance of one-to-one equivalents.

To resume my points in brief: word-by-word translation is literally impossible; the smallest unit with which we can operate is the syntactic block; the blocks can certainly be manipulated, either by mechanical operation alone or by a combination of pre-editing with mechanical operation, in such a way as to resolve patterns of syntactic connection and of word order from the FL into the TL; identification of meaning in the TL cannot be facilitated efficiently by a pre-editor; a monolingual - or even bilingual - post-editor could be useful for producing a smoothly flowing text in the TL (e.g., solving the special problems of idioms, making satisfactory choice of significances for prepositions and conjunctions), but he cannot be expected to make choice among multiple significances for meaning-bearing forms in specialized contexts; therefore either a battery of post-editors with an intimate knowledge of diverse fields of specialization or a body of separate mechanical micro-glossaries would be indispensable.

One of our immediate problems is to determine whether we actually have a choice between specialist post-editors and micro-glossaries; that is, whether micro-glossaries can be devised. Thereafter we should have to determine which choice would provide greater efficiency.
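The mechanism intended here can be suggested in a few lines of code. What follows is only an illustrative sketch, not any existing glossary: the entries, the use of the Menge example from above, and the fallback order are assumptions made for the demonstration.

```python
# Hedged sketch of a mechanized micro-glossary: a field-specific list is
# consulted before the general dictionary, so that the multiple
# significances of a meaning-bearing noun collapse to one or two.

GENERAL = {"Menge": ["quantity", "multitude", "mass", "crowd"]}
MATH = {"Menge": ["set"]}  # glossary for one radically limited field

def equivalents(word, micro_glossary):
    # The micro-glossary overrides; the general dictionary is the fallback.
    return micro_glossary.get(word) or GENERAL.get(word, [word])

print(equivalents("Menge", MATH))  # ['set']
print(equivalents("Menge", {}))    # ['quantity', 'multitude', 'mass', 'crowd']
```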
3c0dd4b5f42e88d1317475b575df699d95023edd
27957304
The conference on mechanical translation held at M.I.T., June 17-20, 1952
The following report was prepared immediately after the writer's return from the conference. It was written from the viewpoint of an engineer listening to experts in a field far separated from his own. Such judgments as may be found interspersed amongst the reports of individual papers are of an engineering nature, and are not to be construed as being based upon other than an amateur's knowledge of linguistic theory. Further, they represent only the reporter's evaluation, not necessarily that of his company as a whole. It is of interest, however, that the writer's company, The International Business Machines Corporation, has jointly sponsored with Georgetown University a successful demonstration of syntactically correct mechanical translation from Russian into English. The computer employed was the IBM 701, and the programming techniques used were first discussed at the 1952 conference.
{ "name": [ "Reynolds, Craig" ], "affiliation": [ null ] }
Proceedings of the Conference on Mechanical Translation
1952-06-01
The concept of mechanical translation originated in two areas, the first being cryptographic work conducted by various governments during the late war, and the second being the successful inauguration and employment of the simultaneous translation schemes presently employed by the UN and other international conferences. Broken down into basic essentials, translation consists of memory scanning for identification of meaning in two different symbolic systems, called languages, and simultaneous editing by the translator to convert the syntactical relationships of the language being translated to those of the translated language. Of these, the memory scanning is definitely paralleled in computer techniques. If one-to-one correlations in meaning existed between words of different languages, programming on existing computers would be completely successful. Syntactical relationships and the shading of meaning by the context of the words make the problem of mechanization exceedingly difficult in the absence of a mechanical means of converting from one syntax to another.

Much work was stimulated by a memorandum, Translation, written by Dr. Warren Weaver of the Rockefeller Foundation, which was distributed to a selected group of linguists, psychologists, computer engineers, and philosophers. Dr. Yehoshua Bar-Hillel, acting under a grant from the Rockefeller Foundation and then conducting his research at M.I.T., acted as the coordinator of the groups actively interested in mechanical translation.* As part of his work, Dr. Bar-Hillel prepared a summary entitled "Present Interest in Mechanical Translation," listing the individuals actively working on the application of computers and computer techniques to mechanical translation. In 1952 he organized a Conference on Mechanical Translation at M.I.T. This report is concerned with providing a precis of the papers and discussions at the Conference.

* For a linguist's view of the same Conference, see MT, Vol. I, No. 2, "Report on the First Conference on Mechanical Translation," Erwin Reifler, pp. 23-32. A list of participants in the Conference appears on p. 24 of that article.

The Public Session of the Conference on Mechanical Translation was announced by invitations extended by Dr. Yehoshua Bar-Hillel to persons who might be interested in the problems of mechanical translation and, in particular, to members of the Conference on Speech Communication which immediately preceded the Conference on Mechanical Translation. At the public session papers were not presented, but short talks were given by each of the five participants outlining their work in the field and their tentative proposals for future work.

Dr. Bar-Hillel discussed the need and possibilities for mechanical translation, the need primarily arising in the fields of science and of diplomacy, for analysis of popular periodicals of various countries. Although a person may be versed in the cultural or popular language of several countries, this does not necessarily mean that the same individual is capable of translating scientific treatises originating in the same countries. This is due to the well-known fact that each scientific discipline creates its own jargon, assigning very specific meanings to common words of the language, these meanings being peculiar to the particular science itself. There is, therefore, a need for translators who are capable of making meaningful interpretations, not only in the more popular writings, but also in specific areas of scientific research.
The volume of material appearing in popular periodicals is appalling in its magnitude, and complete scanning of a particular nation's output is virtually impossible as long as human translators must be relied upon. He concluded that it is in these areas that mechanical translation is capable of making a major contribution to society.

Prof. Leon Dostert, Director of the Institute of Languages and Linguistics, Georgetown University, Washington, D.C., spoke on the subject of human translation versus machine translation. Prof. Dostert drew on his experience in setting up the translation system employed at the Nuremberg trials in Germany and in working with IBM in the development of the simultaneous translation system used at the UN and other international conferences. In discussing this problem, he made the statement that, except in the very specialized areas discussed by Dr. Bar-Hillel, there is no shortage of human translators, owing apparently to the fact that the current workload is regulated by their availability. The contribution a machine can make is in the processing of the vast amount of material that is currently not even being touched in the specialized fields. He described systems employed in setting up efficient simultaneous translation systems and also rapid printed translations in international gatherings. These systems were remarkably similar in their organization to machine organization for computer application. He confessed that he came to the Conference as a sceptic. (Later in the Conference he became convinced that mechanical translation would be possible.)

Dr. Olaf Helmer, Director of Research, Mathematical Division, Rand Corporation, Santa Monica, California, discussed the structure of the problem of mechanical translation. Meanings of particular words and phrases may be idiomatic or may be changed or modified by the context in which they appear. Further, each group of languages has its own syntactical relationships which are peculiar to the group, and most frequently these also vary in minor details among members of the same group. The machine must be capable of resolving idiomatic, contextual, and syntactic ambiguities if human editing is to be kept at a minimum and maximum intelligibility is to be achieved. Dr. Helmer discussed schemes that have been tentatively investigated by the Rand Corporation for solving this problem. His conclusion is that high-speed general-purpose computing machines will be able to handle the main translation task.

Dr. Andrew D. Booth, Director, The Electronic Computer Section, Birkbeck College, University of London, discussed the popular misconceptions covered by the question, "How intelligent can a machine translator be?" The conclusions necessarily were that "intelligence" as applied to machines involves a complete misunderstanding both of intelligence and of machines. No intelligence is required, on the part of the machine at least, in mechanical translation.

Dr. James W. Perry, Center for International Studies, M.I.T., discussed machine techniques for index searching and for translation. The basis of Dr. Perry's talk was the index-searching machine developed by IBM to solve the problem of scanning vast amounts of information and extracting certain specific items. He discussed the development of coding on punched cards in order to employ a machine at maximum efficiency.
He concluded, on the basis of his acquaintanceship with existing machines and machine techniques, that mechanical translation was not only feasible but far closer to realization than the audience perhaps recognized.

A period of discussion from the floor followed the presentation of the talks. There was general agreement on the part of both the panel and the audience that mechanical translation was feasible. It was interesting to note that the computer engineers present presented all of the difficulties standing in the way of producing a mechanical translator from the engineering standpoint; the linguist, from his standpoint; and the psychologists and philosophers, from the standpoint of their respective disciplines. Each agreed, however, that, if the other two groups did their work, we could in the near future produce adequate and intelligible machine-programmed translations.

Prof. Erwin Reifler, University of Washington, presented the first two papers of the morning session, entitled "Mechanical Translation with Pre-editing" and "Writing for Mechanical Translation."

The first paper concerned itself with the fact that syntactical relationships differ amongst languages. For ease in programming on a mechanical translator, a source language should be arranged according to the syntax of the target language (the language into which the material is being translated). Where this is not possible, because the syntax is inseparable from the actual word form (such as the dative case in Latin), certain keys, such as capital letters or diacritical marks, can be inserted as recognizable signals for a machine whose input is a print-scanning device. Pre-editing then would imply, first, the use of a human editor to rearrange the source language insofar as possible in accordance with the syntax of the target language and, secondly, the employment of various inserted signals to notify the machine of syntactical arrangements inseparable from the word form.

The second paper, on "Writing for Mechanical Translation," contemplated the training of all writers, and more particularly their secretaries, in the required conventions for arranging an article for translation into a given language. The discussion of these two papers indicated that the use of a pre-editor is far preferable to educating all authors and all secretaries in techniques of writing for mechanical translation. As a matter of fact, a person skilled in keyboard operation could be readily trained to insert syntactical recognition signals at the time of keying the text into the machine. This, of course, also holds for the preparation of a manuscript for machine scanning.

Dr. Yehoshua Bar-Hillel presented a paper on mechanical translation employing a post-editor. Since a one-to-one correlation does not exist between meanings of words expressing essentially the same idea in various languages, if a machine operates on a comparison basis only, or even if it is capable of computing syntactical relationship, a multiplicity of words in the target language can be derived for any single word of the source language. For a particular sentence, say of 10 words' length, this can easily result in possible combinations of words in the target language extending to several thousands of more or less meaningful combinations: with only two or three equivalents per word, a 10-word sentence already admits 2^10 = 1,024 or 3^10 = 59,049 candidate renderings. It is necessary, therefore, to incorporate some form of post-editing in order to resolve the ambiguities inherent in this relationship between languages.
Dr. Bar-Hillel is much concerned with the tremendously increased demands in terms of machine storage capacity which this situation implies. It is, however, not quite so grave as it appears on the surface, since, particularly in scientific writings, a vast number of one-to-one correlations do exist. (The subject of glossaries to handle the scientific translations was covered in a later session of the conference.)

The fourth paper, "Model English for Mechanical Translation," was presented by Prof. Stuart C. Dodd, Director, Washington Public Opinion Laboratory, University of Washington, Seattle. Dr. Dodd's paper concerned itself with the standardization of English syntax as a means of simplifying the use of English either as a source language or as a target language. A model language, as defined by Dr. Dodd, means any language in which the rules of syntax have been regularized, and in which familiarity of words is a governing criterion. The specific rules used in regularizing a language are itemized in the paper. The examples employed by Dr. Dodd indicate that regularizing, that is, constructing a model language, impairs but very slightly the readability and understandability of the subject matter. In English, at least, regularizing leads only to a certain quaintness of expression somewhat similar to the sentence structure employed by the Quakers. No attempts have been made as yet to regularize languages other than English, but at least for the Romance languages it seems on first view that such regularization can be accomplished. The particular rules of importance to mechanical translation are: one word order; one meaning for each word; and one form for each word.

The experience gained in using a model language at the Washington Public Opinion Laboratory indicates clearly that regularization of a language minimizes the points brought out by Dr. Bar-Hillel. The discussion showed that the conference was in substantial agreement that regularization by use of the concepts of a model language is feasible and directly applicable to the problems of mechanical translation. In particular, so far as the machines to be employed are concerned, the machine men present felt that it could be a decided advantage in reducing the complexity of equipment required.

Chairman - A. C. Reynolds, Jr.

Prof. Victor A. Oswald, Department of Germanic Languages, University of California, Los Angeles, presented the first paper, entitled "Word-by-Word Translation." Prof. Oswald and Dr. Harry D. Huskey, Assistant Director, National Bureau of Standards Institute for Numerical Analysis, University of California, Los Angeles, jointly conducted experiments in the translation of a text in mathematics and another in brain surgery from German into English. The investigation by Dr. Oswald indicated that word-by-word translation from German into English was a virtually impossible task, chiefly because of the fact that German "articles" are also "words." Also, German sentence structure is such that word-by-word translation from German into English becomes virtually meaningless. Initial investigation resulted in a published report entitled "Proposals for the Mechanical Resolution of German Syntax Patterns."

Although word-by-word translation seemed impossible, breaking the German sentence into a block-by-block formation, in which each block has a certain specific syntactical function, was far more profitable. Regularization of the German language and other languages of similar structure thus appears to be dependent upon such block-by-block analysis.
The "Proposals" indicate that machines can be instructed to recognize syntactic connection upon this basis. The second major consideration for block-byblock translation is the problem of recognizing and interpreting the meaning-bearing words within a block. Syntactic connections will almost infallibly identify the word function and hence function recognition can be programmed. Linguistic research, particularly that conducted by Prof. William E. Bull, Department of Spanish, University of California, Los Angeles, (also a participant at the conference) shows clearly that the only meaning-bearing forms that can be isolated are nouns, verbs, adjectives, and possibly adverbs. In general, of these classes, nouns are by far the most useful and used bearers of meaning. No system yet proposed will solve the problem of multiple significance of the meaningbearing words. However, within a specific subject, a meaning-bearing word in general has only one specific meaning. This fact can be utilized to advantage in mechanical translation in which the criterion of meaning is determined by the subject matter being considered. Dr.Oswald proposed to take advantage of this fact by the use of what he termed micro-glossaries. These micro-glossaries would be constructed on the basis of the words most commonly used in specific subjects of interest; one such glossary being constructed for each subject to be translated. Mechanically, this means that two memories would be employed in a machine; one, a most used general vocabulary for the languages being processed; and two, a specific micro-glossary to assign specific meanings to words that would otherwise have a multiplicity of meaning; that is, if all their fields of usage were to be considered simultaneously. The concept of a micro-glossary and the use of blockby-block syntactic recognition in the machine met with favor from all the participants in the conference. The linguists appeared certain that block-by-block syntactic analysis of sentences could be accomplished and likewise were in agreement as to the reduction of ambiguity in the meaning of a word when only one field of interest was to be considered. The engineers present fully recognized the advantage to be gained from the reduction in size of memories growing out of the micro-glossary concept.Dr. Yehoshua Bar-Hillel presented the next paper on "Operational Syntax." No proposal had yet been presented to the conference regarding a means of programming a machine for recognizing syntactic connections. Dr. Bar-Hillel, examining this problem as a problem in symbolic logic, has discovered certain relationships that exist within the syntax of sentence structure. Further, he has discovered that these can be quite readily symbolized in the form of symbolic fractions. A simple multiplication of the fractions, which results in the cancellation of like quantities in the numerator and denominator, results in a unique symbol indicative of the functions of the word block so analyzed. Use of this analysis permits ready recognition of word blocks functioning as nouns, verbs, adjectives, or adverbs.The identification results in the ability to rearrange the syntax of the source language into the syntax of the target language. This is a simple arithmetic operation that can be readily programmed on a machine. 
The investigations to date have been preliminary, but they indicate that the field is limited only by the number of languages which it would be profitable so to analyze. This was a completely new concept to the linguists of the conference, who had intuitively felt that such a structure did exist but, without the tools of symbolic logic, had been unable to isolate the essential features that lead to the exceedingly simple arithmetic operations. The engineers immediately recognized the extreme advantages and the simplicity of the computing loops necessary to give the machine the ability to recognize word-block functions and programmed reorganization of sentence structure.

Prof. William N. Locke, Department of Modern Languages, M.I.T., presented the third paper, on "Mechanical Translation of Printed and Spoken Material." This paper was presented orally only, no copies having been made for distribution. Prof. Locke is interested in the potentiality of using voice input to produce either a voice output or a printed output. He drew on work that has been conducted at the Bell Laboratories, at the Haskins Laboratories, at M.I.T., and elsewhere on the analysis of speech and the recognition of the components that form the spoken word. It appears at the present time that 8 such components uniquely determine a sound. Recognition of these 8 elements leads to the identification of one sound to the exclusion of all other sounds. It was Prof. Locke's contention that a machine could be built to recognize these 8 components and give a unique output (a phoneme). The phoneme so identified could be used with other phonemes to locate a specific unit within the memory whose meaning in the target language would be the same as the meaning in the source language. This, of course, presupposes the utilization of the philosophy of constructing memories outlined in the previous papers of the conference. The discussion of Prof. Locke's paper was completely speculative, since devices capable of so analyzing sounds are not yet in existence, and it appears that it will be some time in the future before such an art can become a science.

Dr. Victor A. Oswald presented the first paper, entitled "Microsemantics." This paper continued the analysis that Dr. Oswald had presented on the preceding day in his discussion of word-by-word translation. He was now concerned with the fact that, in general, editing of the subject material would be required both before translation, in the source language, and after translation, in the target language. The problem is to simplify as much as possible the work required in such pre-editing and post-editing. Assuming that syntactic considerations could be solved by such an analysis as that proposed by Dr. Bar-Hillel, the work of translation would be very greatly facilitated by the use of specialized glossaries concerned with the specific subject matter of the material being translated. (Dr. Oswald terms this type of glossary a micro-glossary, and the analysis that leads to it, microsemantical investigation.)

The data obtained from every sort of linguistic frequency count, when arranged according to descending frequency, form a monotonic descending curve. The words of highest frequency drop quite abruptly; words of medium frequency start flattening out; and words of highly specialized meaning that are used but seldom cause the curve to approach the horizontal axis asymptotically.
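The shape of such a count can be exhibited with a few lines of code; the tiny sample text below is an illustrative invention, not one of the corpora discussed here.

```python
# Hedged sketch of a descending frequency count: function words head the
# list, while the meaning-bearing nouns trail off into the long tail.
from collections import Counter

text = ("the microscope resolves the image and the lens forms the image "
        "on the stage the observer adjusts the focus of the microscope")
counts = Counter(text.split())
for rank, (word, n) in enumerate(counts.most_common(), start=1):
    print(rank, word, n)
```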
The upper segment of the curve contains the words which are usually found in the normal or everyday vocabulary of a language and accounts for about 80 per cent of the actual volume of the material. Unfortunately, these terms consist mainly of articles, which convey but little meaning; the meaning-bearing forms, and in particular the nouns, are represented by the tail of the curve. All languages exhibit this characteristic curve. Thus, in order to find those words conveying the major meaning in any text, we are concerned with the tail of the curve rather than the large grouping of words occurring at the beginning of the curve. Considering that this particular section of the curve is representative of a micro-glossary of a specific subject in the language, the words of this section in general will have one and only one meaning.

To verify this assumption, Dr. Oswald analyzed nearly a hundred papers in German on the subject of brain surgery. Technical nouns were abstracted from the first article. Additional nouns were added from the second article, and so on through the complete series of texts employed. Each succeeding text was chosen from a different field of brain surgery. The amazing fact developed that, after the fourth article, the glossary derived covered an average of 80 per cent of all the technical nouns in each succeeding article. From this he constructed a micro-glossary that he considers representative of the field of brain surgery in the German language.

A similar glossary of non-technical nouns was also compiled from the same series of articles. The frequency curve of the non-technical nouns was the same as that of the technical nouns. In other words, brain surgeons are not only compelled to choose their technical nouns from a limited vocabulary, but their pattern of communication is so limited by practice and convention that even the range of non-technical nouns is predictable. We may generalize, although perhaps dangerously, that the same phenomenon will appear in all technical fields of a restricted nature.

The micro-glossary was employed in programming translations on the SWAC in cooperation with Dr. Harry D. Huskey, Assistant Director, National Bureau of Standards Institute for Numerical Analysis, University of California, Los Angeles. The translations so obtained conveyed the meaning of the original article with correlations of meaning better than 90 per cent, on the assumption that the problems of syntax and contextual modification had previously been solved. Even without this assumption, the translated articles, when presented to a specialist in the field in raw, unedited form, conveyed the major portion of the meaning of the original article in the original language.

The discussion that followed the paper clearly showed that the linguists working in languages other than German were in complete agreement as to the ease with which such micro-glossaries could be constructed. The engineers and scientists, from their knowledge of technical articles in their respective fields, indicated that the size of micro-glossaries in these fields would be as small in comparison to the complete vocabulary of a language as Dr. Oswald postulated. All agreed that the use of such micro-glossaries would enormously reduce the amount of memory required in a translating machine. In particular, the discussion centered on the isolation of nouns as the major meaning-bearing words of a language.
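The compilation procedure just described reduces to a short loop. The sketch below is illustrative only: the article noun-stocks are invented stand-ins for Dr. Oswald's brain-surgery data, chosen so that the later articles show the reported order of coverage.

```python
# Hedged sketch of the glossary-coverage experiment: grow a glossary from
# successive articles and measure how much of each new article's stock of
# technical nouns it already contains. Article contents are invented.

articles = [
    {"Gehirn", "Tumor", "Operation", "Befund"},
    {"Gehirn", "Tumor", "Narkose", "Blutung"},
    {"Tumor", "Operation", "Blutung", "Naht"},
    {"Gehirn", "Befund", "Narkose", "Naht"},
    {"Tumor", "Gehirn", "Blutung", "Operation", "Sonde"},
]

glossary = set()
for i, nouns in enumerate(articles, start=1):
    if glossary:
        covered = len(nouns & glossary) / len(nouns)
        print(f"article {i}: {covered:.0%} of its technical nouns already listed")
    glossary |= nouns
```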
A rough analysis was made of the language being used around the table, and it was quite evident that, in general, verbs employed in conveying meaning through speech are in the present tense and that in the vast majority of cases the verb is a form of the verb "to be." Since information is adequately conveyed by speech, it seemed reasonable to the participants that a translation which would ignore tenses and concentrate on the nouns which - in newspaper parlance - convey the who, what, when, where, and how of a statement would adequately convey to a post-editor the necessary raw material to be employed in producing a polished translation. Dr. Oswald was congratulated by the group for his work and analysis of this phenomenon.

Prof. William E. Bull, Department of Spanish, University of California, Los Angeles, presented the second paper, entitled "Frequency Problems in Mechanical Translation." Prof. Bull's investigation in Spanish literature paralleled the investigations of Dr. Oswald. Running texts in Spanish literature, which employed a general vocabulary rather than a restricted vocabulary, verify in detail the existence in general language of the same phenomenon as occurred in the restricted field of brain surgery, but Prof. Bull stressed that low-frequency, unpredictable terms often carry critically important meaning.

Prof. Bull exhibited numerous slides showing the frequency counts of words, the frequency of occurrence of particular parts of speech, and the frequency counts of words within the classification of a particular part of speech. He discussed in some detail the problem of determining syntactic connections in Spanish sentences. He also discussed the type of work and the type of personnel required to extend knowledge in this field, not only for Spanish but also for other languages of interest. Prof. Bull's paper was in part abstracted from a monograph not yet published. Therefore he did not present a written paper to the participants of the conference, and this material is at present unavailable.

Substantially, Prof. Bull's paper was a verification of the work of Dr. Oswald and indicated the fruitfulness of this approach to the problem of mechanical translation. A discussion of the means required to extend the investigations further showed clearly that the analysis could be facilitated by the use of punched cards. Such mechanization can enormously increase our knowledge of language structure, whereas the present handwritten and hand-sorting techniques are far too slow to aid materially in the solution of the problems of mechanical translation. Prof. Bull accepted the suggestion that he investigate the possibilities of employing punched cards as a means of extending the scope of his research.

The third paper was presented by Prof. Erwin Reifler and was entitled "General Mechanical Translation and Universal Grammar." Prof. Reifler has inaugurated a new school of linguistic investigation which is currently known as "Comparative Semantics." He has been investigating languages in order to discover such patterns of verbally conveying meaning, underlying the actual words and syntax of a language, as are common to all languages. Such a structure could form a "universal grammar." Mechanical translation poses the following question: "Is it possible to solve the problems of Mechanical Translation in such a way that one and the same preparation of the code text may serve for a Mechanical Translation into many different languages?"
The existence of a universal grammar would most assuredly assist in the solution of this problem if such a grammar could be shown to exist. To date, the science of linguistics states that no such universal grammar exists, but linguists do speak of language universals. In particular, many highly interesting cases of parallel development in the evolution of the expression of meaning among structurally unrelated languages do exist. The universals may be used to readjust the language structure to form what Prof. Reifler terms "adjusted model target languages." This is in line with the recommendation that Prof. Stuart C. Dodd presented in his paper on "Model English." Use of the adjustment clearly simplifies the mechanical translation problem and the engineering required for its solution.

The discussion of the paper reinforced the conclusions of the discussion on Prof. Dodd's paper. It is encouraging to note that where Prof. Dodd had restricted his considerations to English and hypothesized extension to other languages, Prof. Reifler, working from a completely different viewpoint and with another purpose in mind, arrived at the same conclusions as to the feasibility of regularizing a language and further demonstrated our ability to regularize major language groups of the world.

Dr. Huskey reviewed the problems encountered in programming German translations in collaboration with Dr. Oswald. The problems encountered were, to a certain extent, peculiar to the SWAC, which was the machine available for the translation. The basic problems were the construction of a vocabulary for entry into the machine, the derivation of a system of addressing to find particular units in the memory, and the syntactic programming to obtain correct sentence structure in the output of the machine. These problems are basic to any machine translation. In general, the design of the machine will govern the type of programming required. The use of two types of memories seems desirable - the first having short access time, and the second, which will contain words of infrequent use, having a longer access time. The arithmetic operations required for the construction of the correct sentence structure will be dependent upon the arithmetic devices provided with the machine. The complexity of the machine, if a machine is constructed for the sole purpose of mechanical translation, will be a function of the degree of accuracy required in the translation. This in turn will be dependent upon the allocation of time for pre-editing the material for machine input and post-editing of the machine output.

The second paper was presented by Mr. J. W. Forrester, Director of the Digital Computer Laboratory, M.I.T., on the subject of "Problems of Storage and Cost." This also was presented in the form of a talk, no written material being distributed. Mr. Forrester presented no cost items that are not known to computer and business-machine engineers. His major purpose was to indicate to the linguists present the cost of the machine that they were proposing. Techniques employing magnetic drums, magnetic tapes, and electrostatic storage devices, singly and in combination with one another, were presented for consideration. The most economical array consists of an intermediate memory and computing unit of low access time and a large-scale memory of long access time. The cost of the machine is dependent on the same considerations as listed by Dr. Huskey.
The third paper was presented by Dr. A. Donald Booth, Director, Computation Laboratory, Birkbeck College, London. The title was changed from that listed in the program to "Some Methods of Mechanized Translation"; the paper was written in collaboration with Dr. R. H. Richens of the Biological Laboratories of the University of London. General principles of mechanical translation, as scheduled and programmed on the computer built by Dr. Booth for the University of London, were discussed. The use of punched-card machinery was compared with the use of an automatic digital computer. Time comparisons were worked out that favored the use of the automatic digital computing machinery by a time ratio of at least 7 to 1. Examples of translations in the field of genetics from Albanian, Danish, Dutch, Finnish, French, German, Hungarian, Indonesian, Italian, Latin, Latvian, Norwegian, Polish, Portuguese, Rumanian, Spanish, Swedish, Turkish, Arabic, and Japanese were given. Usable translations in each of these cases were obtained, despite the limited storage available with Dr. Booth's computer. Post-editing was necessary in all cases, however, to produce a readable, although not necessarily more intelligible, translation.

The fourth paper was presented by Prof. Wm. E. Bull and was concerned with the possible future effect of the concept of mechanical translation on the teaching of foreign languages. Prof. Bull stated that the concept of mechanical translation necessitates a completely new approach to the problem of language teaching. An analogy was drawn between a machine into whose memory a vocabulary had not been incorporated and a student into whose brain such a vocabulary must also be introduced. The approach in teaching syntactic connections to both the machine and the student, in terms of the programming required to obtain syntactically correct constructions from the memory storage, was discussed. Prof. Bull reached the conclusion that the same considerations that govern the choice of vocabulary and the use of intermediate and large-scale memories in the machine could be advantageously incorporated into the teaching of languages as well as the design of machines for mechanical translation.

Dr. Louis N. Ridenour was unfortunately unable to attend the conference, and his paper on "Learning Machines" was not presented. In his place, Prof. James W. Perry, Research Associate, Center for International Studies, M.I.T., presented a paper on "Machine Techniques for Index Searching and for Machine Translation." This paper was an elaboration of the talk that Prof. Perry presented at the opening public session of the conference. To a considerable extent, the concepts in the paper were based on Prof. Perry's experience in setting up coding and indexing systems for hand-sorted punched cards, and also on his experience with the library-cataloging machine developed by IBM. Fundamentally, the same conclusions as to memory and access times were reached by Prof. Perry as had been previously derived by the other participants in the conference.

Session VI - June 20, 1952
Chairman - Prof. Wm. E. Bull

The closing session of the conference was devoted to a consideration of organization for future research. A seven-man committee was organized at this session to act as coordinators and consultants for further work. The committee is composed of Dr. Yehoshua Bar-Hillel, as chairman; Prof. Leon Dostert, secretary; and Dr. Olaf Helmer, Dr. Harry D. Huskey, Prof. Erwin Reifler, and Mr. A. C. Reynolds, Jr., as members.
Dr. A. Donald Booth was placed on the committee as the European representative.

In the organization for future research, the conferees were asked to what degree they were interested in future work and in which areas they wished to participate.

Dr. Booth will continue with the work he has already started with Dr. R. H. Richens at the University of London.

Prof. Bull is interested in the field of linguistic problems of translation and as part of his research activity will continue with his study of the Spanish language. He is not concerned with mechanical translation as such, but recognizes the necessity for, and the value of, his linguistic work in reaching this goal.

Dr. Dodd will continue his work in the studies of regularizing languages and determine the degree of extension possible in languages other than English.

Prof. Dostert intends to work actively, through the Institute of Languages and Linguistics, Georgetown University, in the derivation of principles for the use of machines in translation.

Dr. Olaf Helmer stated that the Rand Corporation is interested from the theoretical viewpoint, but in all probability at the present time will confine itself only to theoretical work as secondary to its work on computers.

Dr. Huskey had no comment other than that he would continue to collaborate with Prof. Oswald.

Prof. Oswald is interested in extending the concept of micro-glossaries and in the study of syntactic relations. He intends to continue work in the programming of translation for machines.

Prof. Reifler is extremely interested in demonstrating the existence of universals in grammar, and in applying these universals to the problem of mechanical translation.

Dr. Bar-Hillel will continue his basic research in symbolic logic and its applications to the field of mechanical translation.

Dr. Jerome B. Wiesner, speaking for the M.I.T. staff present, stated that the research laboratory at M.I.T. is very much interested in the application of computer techniques to the problem of mechanical translation and that, if a concrete program were formulated, financial support could quite conceivably be forthcoming from the Research Laboratory.

Mr. Duncan Harkin of the Department of Defense stated that the Department of Defense was vitally interested in this problem and, like Dr. Wiesner, indicated that if a concrete proposal for such a translation and subsequent demonstration could be formulated, the Department of Defense would be prepared to give financial backing.

Mr. Reynolds stated that IBM was interested in the application of its present punched-card techniques and its computers to this problem and as such would participate on the basis of exchange of theoretical information with the members of the conference.

The conference closed on a note of optimism regarding the potentialities now known to be physically present in the concept of mechanical translation.
Main paper: : The concept of mechanical translation originated in two areas, the first being cryptographic work conducted by various governments during the late war, and the second being the successful inauguration and employment of the simultaneous translation schemes presently employed by the UN and other internation conferences. Broken down into basic essentials, translation consists of memory scanning for identification of meaning in two different symbolic systems, called languages, and simultaneous editing by the translator to convert the syntactical relationships of the language being translated to those of the translated language. Of these, the memory scanning is definitely paralleled in computer techniques. If one to one correlations in meaning existed between words of different languages, programming on existing computers would be completely successful. Syntactical relationships and shading of meaning by the context of the words makes the problem of mechanization exceedingly difficult in the absence of a mechanical means of converting from one syntax to another.Much work was stimulated by a memorandum, Translation, written by Dr. Warren Weaver of the Rockefeller Foundation.which was distributed to a selected group of linguists, psychologists, computer engineers, and philosophers. Dr. Yehoshua Bar-Hillel, acting under a grant from the Rockefeller Foundation and then con-* For a linguist's view of the same Conference, see MT, Vol. I, No. 2 , "Report on the First Conference on Mechanical Translation," Erwin Reifler, pp. 23-32. A list of participants in the Conference appears on p. 24 of that article. ducting his research at M.I.T., acted as the coordinator of the groups actively interested in mechanical translations. As part of his work, Dr. Bar-Hillel prepared a summary entitled "Present Interest in Mechanical Translation," listing the individuals actively working on the application of computers and computer techniques to mechanical translation. In 1952 he organized a Conference on Mechanical Translation at M.I.T.This report is concerned with providing a precis of the papers and discussions at the Conference.The Public Session of the Conference on Mechanical Translation was announced by invitations extended by Dr. Yehoshua Bar-Hillel to persons who might be interested in the problems of mechanical translation and, in particular to members of the Conference on Speech Communication which immediately preceded the Conference on Mechanical Translation. At the public session papers were not presented, but short talks were given by each of the five participants outlining their work in the field and their tentative proposals for future work.Dr. Bar-Hillel discussed the need and possibilities for mechanical translation, the need primarily arising in the fields of science and of diplomacy, for analysis of popular periodicals of various countries. Although a person may be versed in the cultural or popular language of several countries, this does not necessarily mean that the same individual is capable of translating scientific treatises originating in the same countries. This is due to the well known fact that each scientific discipline creates its own jargon, assigning very specific meanings to common words of the language, these meanings being peculiar to the particular science itself. There is, therefore, a need for translators who are capable of making meaningful interpretations, not only in the more popular writings, but also in specific areas of scientific research. 
The volume of material appearing in popular periodicals is appalling in its magnitude and complete scanning of a particular nation's output is virtually impossible as long as human translators must be relied upon. He concluded that it is in these areas that mechanical translation is capable of making a major contribution to society.Prof. Leon Dostert, Director of the Institute of Languages and Linguistics, Georgetown University, Washington, D. C., spoke on the subject of human translation versus machine translation. Prof. Dostert drew on his experience in setting up the translation system employed at the Nuremburg trials in Germany and in working with IBM in the development of the simultaneous translation system used at the UN and other international conferences. In discussing this problem, he made the statement that, except in the very specialized areas discussed by Dr. Bar-Hillel, there is no shortage of human translators, owing apparently to the fact that the current workload is regulated by their availability. The contribution a machine can make is in the processing of the vast amount of material that is currently not even being touched in the specialized fields. He described systems employed in setting up efficient simultaneous translation systems and also rapid printed translations in international gatherings. These systems were remarkably similar in their organization to machine organization for computer application. He confessed that he came to the Conference as a sceptic. (Later in the Conference he became convinced that mechanical translation would be possible.)Dr. Olaf Helmer, Director of Research, Mathematical Division, Rand Corporation, Santa Monica, California, discussed the structure of the problem of mechanical translation. Meanings of particular words and phrases may be idiomatic or may be changed or modified by the context in which they appear. Further, each group of languages has its own syntactical relationships which are peculiar to the group,and most frequently also vary in minor details among members of the same group. The machine must be capable of resolving idiomatic, contextual, and syntactic ambiguities if human editing is to be kept at a minimum and maximum intelligibility is to be achieved. Dr. Helmer discussed schemes that have been tentatively investigated by the Rand Corporation for solving this problem. His conclusion is that high speed general purpose computing machines will be able to handle the main translation task.Dr. Andrew D. Booth, Director, The Electronic Computer Section, Birkbeck College, University of London, discussed the popular misconceptions covered by the question, "How intelligent can a machine translator be ?" The conclusions necessarily were that "intelligence" as applied to machines involves a complete misunderstanding both of intelligence and of machines. No intelligence is required, on the part of the machine at least, in mechanical translation.Dr. James W. Perry, Center of International Studies, M.I.T., discussed machine techniques and index searching and translation. The basis of Dr. Perry's talk was the index searching machine developed by IBM to solve the problem of scanning vast amounts of information and extracting certain specific items. He discussed the development of coding on punched cards in order to employ a machine at maximum efficiency. 
He concluded, on the basis of his acquaintance with existing machines and machine techniques, that mechanical translation was not only feasible but far closer to realization than the audience perhaps recognized.

A period of discussion from the floor followed the presentation of the talks. There was general agreement on the part of both the panel and the audience that mechanical translation was feasible. It was interesting to note that the computer engineers present stated all of the difficulties standing in the way of producing a mechanical translator from the engineering standpoint; the linguists, from their standpoint; and the psychologists and philosophers, from the standpoint of their respective disciplines. Each agreed, however, that, if the other two groups did their work, we could in the near future produce adequate and intelligible machine-programmed translations.

Prof. Erwin Reifler, University of Washington, presented the first two papers of the morning session, entitled "Mechanical Translation with Pre-editing" and "Writing for Mechanical Translation."

The first paper concerned itself with the fact that syntactical relationships differ among languages. For ease in programming on a mechanical translator, a source language should be arranged according to the syntax of the target language (the language into which the material is being translated). Where this is not possible because the syntax is inseparable from the actual word form (such as the dative case in Latin), certain keys, such as capital letters or diacritical marks, can be inserted as recognizable signals for a machine whose input is a print scanning device. Pre-editing, then, implies first the use of a human editor to rearrange the source language insofar as possible in accordance with the syntax of the target language, and second, the employment of various inserted signals to notify the machine of syntactical arrangements inseparable from the word form.

The second paper, on "Writing for Mechanical Translation," would necessitate the training of all writers, and more particularly their secretaries, in the required conventions for arranging an article for translation into a given language. The discussion of these two papers indicated that the use of a pre-editor is far preferable to educating all authors and all secretaries in techniques of writing for mechanical translation. As a matter of fact, a person skilled in keyboard operation could be readily trained to insert syntactical recognition signals at the time of keying the text into the machine. This, of course, also holds for the preparation of a manuscript for machine scanning.

Dr. Yehoshua Bar-Hillel presented a paper on mechanical translation employing a post-editor. Since a one-to-one correlation does not exist between the meanings of words expressing essentially the same idea in various languages, a machine which operates on a comparison basis only, or even one capable of computing syntactical relationships, can derive a multiplicity of words in the target language for any single word of the source language. For a sentence of, say, ten words, this can easily result in several thousand more or less meaningful combinations of words in the target language (with an average of only two equivalents per word, a ten-word sentence already yields 2^10 = 1,024 combinations). It is necessary, therefore, to incorporate some form of post-editing in order to resolve the ambiguities inherent in this relationship between languages.
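A modern sketch in the Python programming language (an illustration only, not part of the conference record; the toy German-English glossary is invented for the purpose) makes the raw product of such a machine concrete: each source word is replaced by the set of its target equivalents, among which the post-editor must choose.

    # A minimal sketch of word-by-word translation that prints every
    # target equivalent for a post-editor to choose among. The glossary
    # entries are invented for illustration.
    GLOSSARY = {
        "der": ["the"],
        "strom": ["stream", "current", "river"],
        "fliesst": ["flows", "runs"],
        "schnell": ["fast", "quickly"],
    }

    def rough_translation(sentence):
        """Replace each source word by the set of its target equivalents."""
        pieces = []
        for word in sentence.lower().split():
            equivalents = GLOSSARY.get(word, ["<" + word + ">"])  # flag unknown words
            pieces.append("(" + "/".join(equivalents) + ")")
        return " ".join(pieces)

    # Four words with 1, 3, 2, and 2 equivalents already admit 12 readings.
    print(rough_translation("Der Strom fliesst schnell"))
    # (the) (stream/current/river) (flows/runs) (fast/quickly)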
Dr. Bar-Hillel is much concerned with the tremendously increased demands on machine storage capacity which this situation implies. The situation is, however, not quite so grave as it appears on the surface, since, particularly in scientific writings, a vast number of one-to-one correlations do exist. (The subject of glossaries to handle scientific translations was covered in a later session of the conference.)

The fourth paper, "Model English for Mechanical Translation," was presented by Prof. Stuart C. Dodd, Director, Washington Public Opinion Laboratory, University of Washington, Seattle. Dr. Dodd's paper concerned itself with the standardization of English syntax as a means of simplifying the use of English either as a source language or as a target language. A model language, as defined by Dr. Dodd, is any language in which the rules of syntax have been regularized, and in which familiarity of words is a governing criterion. The specific rules used in regularizing a language are itemized in the paper. The examples employed by Dr. Dodd indicate that regularizing, that is, constructing a model language, impairs but very slightly the readability and understandability of the subject matter. In English, at least, regularizing leads only to a certain quaintness of expression somewhat similar to the sentence structure employed by the Quakers. No attempts have been made as yet to regularize languages other than English, but for the Romance languages at least it seems on first view that such regularization can be accomplished. The particular rules of importance to mechanical translation are: one word order; one meaning for each word; and one form for each word.

The experience gained in using model language at the Washington Public Opinion Laboratory indicates clearly that regularization of a language minimizes the difficulties brought out by Dr. Bar-Hillel. The discussion showed that the conference was in substantial agreement that regularization by use of the concepts of a model language is feasible and directly applicable to the problems of mechanical translation. In particular, so far as the machines to be employed are concerned, the machine men present felt that it could be a decided advantage in reducing the complexity of the equipment required.

Chairman - A. C. Reynolds, Jr.

Prof. Victor A. Oswald, Department of Germanic Languages, University of California, Los Angeles, presented the first paper, entitled "Word-by-Word Translation." Prof. Oswald and Dr. Harry D. Huskey, Assistant Director, National Bureau of Standards Institute for Numerical Analysis, University of California, Los Angeles, jointly conducted experiments in the translation of a text in mathematics and another in brain surgery from German into English. The investigation by Dr. Oswald indicated that word-by-word translation from German into English is a virtually impossible task, chiefly because German "articles" are also "words," and German sentence structure is such that word-by-word translation into English becomes virtually meaningless. The initial investigation resulted in a published report entitled "Proposals for the Mechanical Resolution of German Syntax Patterns."

Although word-by-word translation seemed impossible, breaking the German sentence into a block-by-block formation, in which each block has a certain specific syntactical function, proved far more profitable. Regularization of the German language, and of other languages of similar structure, thus appears to depend upon such block-by-block analysis.
The "Proposals" indicate that machines can be instructed to recognize syntactic connections upon this basis. The second major consideration for block-by-block translation is the problem of recognizing and interpreting the meaning-bearing words within a block. Syntactic connections will almost infallibly identify the word function, and hence function recognition can be programmed. Linguistic research, particularly that conducted by Prof. William E. Bull, Department of Spanish, University of California, Los Angeles (also a participant at the conference), shows clearly that the only meaning-bearing forms that can be isolated are nouns, verbs, adjectives, and possibly adverbs. In general, of these classes, nouns are by far the most useful and most used bearers of meaning.

No system yet proposed will solve the problem of the multiple significance of the meaning-bearing words. However, within a specific subject a meaning-bearing word in general has only one specific meaning. This fact can be utilized to advantage in mechanical translation by letting the subject matter under consideration determine the criterion of meaning. Dr. Oswald proposed to take advantage of this fact by the use of what he termed micro-glossaries. These micro-glossaries would be constructed on the basis of the words most commonly used in specific subjects of interest, one such glossary being constructed for each subject to be translated. Mechanically, this means that two memories would be employed in a machine: first, a most-used general vocabulary for the languages being processed; and second, a specific micro-glossary to assign specific meanings to words that would otherwise have a multiplicity of meanings if all their fields of usage were considered simultaneously.

The concept of a micro-glossary and the use of block-by-block syntactic recognition in the machine met with favor from all the participants in the conference. The linguists appeared certain that block-by-block syntactic analysis of sentences could be accomplished, and likewise were in agreement as to the reduction of ambiguity in the meaning of a word when only one field of interest is considered. The engineers present fully recognized the advantage to be gained from the reduction in the size of memories growing out of the micro-glossary concept.

Dr. Yehoshua Bar-Hillel presented the next paper, on "Operational Syntax." No proposal had yet been presented to the conference regarding a means of programming a machine to recognize syntactic connections. Dr. Bar-Hillel, examining this problem as a problem in symbolic logic, has discovered certain relationships that exist within the syntax of sentence structure. Further, he has discovered that these can be quite readily symbolized in the form of symbolic fractions. A simple multiplication of the fractions, which results in the cancellation of like quantities in the numerator and denominator, yields a unique symbol indicative of the function of the word block so analyzed. Use of this analysis permits ready recognition of word blocks functioning as nouns, verbs, adjectives, or adverbs. The identification results in the ability to rearrange the syntax of the source language into the syntax of the target language. This is a simple arithmetic operation that can be readily programmed on a machine.
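The cancellation scheme itself is easily sketched in modern code. The Python fragment below is an illustration, not part of the conference record; it uses a simplified variant of Bar-Hillel's notation, and the category assignments are invented examples.

    # A sketch of the fraction-cancellation idea in a simplified notation:
    # "x/y" combines with a following "y" to give "x"; "x\y" combines with
    # a preceding "y" to give "x". Only simple, un-nested categories are
    # handled, and the assignments below are invented examples.
    CATEGORIES = {
        "poor": "n/n",     # adjective: yields a noun from a following noun
        "John": "n",       # noun
        "sleeps": "s\\n",  # verb: yields a sentence from a preceding noun
    }

    def reduce_once(cats):
        """Cancel one adjacent pair of categories, if any pair will cancel."""
        for i in range(len(cats) - 1):
            left, right = cats[i], cats[i + 1]
            if left.endswith("/" + right):            # x/y  y  ->  x
                return cats[:i] + [left[:-len(right) - 1]] + cats[i + 2:]
            if right.endswith("\\" + left):           # y  x\y  ->  x
                return cats[:i] + [right[:-len(left) - 1]] + cats[i + 2:]
        return None

    def parse(words):
        cats = [CATEGORIES[w] for w in words]
        while len(cats) > 1:
            step = reduce_once(cats)
            if step is None:
                return cats  # irreducible: not a well-formed sentence
            cats = step
        return cats

    print(parse(["poor", "John", "sleeps"]))  # ['s'] -- the unique symbol for a sentence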
The investigations to date have been preliminary, but they indicate that the field is limited only by the number of languages which it would be profitable to analyze in this way. This was a completely new concept to the linguists of the conference, who had intuitively felt that such a structure did exist but, without the tools of symbolic logic, had been unable to isolate the essential features that lead to these exceedingly simple arithmetic operations. The engineers immediately recognized the extreme advantages and the simplicity of the computing loops necessary to give the machine the ability to recognize word-block functions and to program the reorganization of sentence structure.

Prof. William N. Locke, Department of Modern Languages, M.I.T., presented the third paper, on "Mechanical Translation of Printed and Spoken Material." This paper was presented orally only, no copies having been made for distribution. Prof. Locke is interested in the potentiality of using voice input to produce either a voice output or a printed output. He drew on work conducted at the Bell Laboratories, at the Haskins Laboratories, at M.I.T., and elsewhere on the analysis of speech and the recognition of the components that form the spoken word. It appears at the present time that eight such components uniquely determine a sound: recognition of these eight elements leads to the identification of one sound to the exclusion of all others. It was Prof. Locke's contention that a machine could be built to recognize these eight components and give a unique output (a phoneme). The phoneme so constructed could be used with other phonemes to locate a specific unit within the memory whose meaning in the target language would be the same as the meaning in the source language. This, of course, presupposes the utilization of the philosophy of constructing memories outlined in the previous pages of this report. The discussion of Prof. Locke's paper was entirely speculative, since devices capable of so analyzing sounds are not yet in existence, and it appears that it will be some time before such an art can become a science.

Dr. Victor A. Oswald presented the first paper, entitled "Microsemantics." This paper continued the analysis that Dr. Oswald had presented on the preceding day in his discussion of word-by-word translation. He was now concerned with the fact that, in general, editing of the subject material would be required both before translation, in the source language, and after translation, in the target language. The problem is to simplify as much as possible the work required in such pre-editing and post-editing. Assuming that syntactic considerations could be handled by an analysis such as that proposed by Dr. Bar-Hillel, the work of translation would be very greatly facilitated by the use of specialized glossaries concerned with the specific subject matter of the material being translated. (Dr. Oswald terms this type of glossary a micro-glossary, and the analysis that leads to it, microsemantical investigation.)

The data obtained from every sort of linguistic frequency count, when arranged according to descending numbers, form a monotonically descending curve. The words of highest frequency drop quite abruptly; words of medium frequency start flattening out; and words of highly specialized meaning that are but seldom used cause the curve to approach the horizontal axis asymptotically.
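Such a curve can be computed directly from any running text. The modern Python sketch below is illustrative only; "sample.txt" is a placeholder filename. It counts the words, ranks them by frequency, and reports how much of the running text the commonest words cover.

    # A sketch of the descending frequency curve: count the words of a
    # text, rank them by frequency, and report what share of the running
    # text the commonest words account for. "sample.txt" is a placeholder.
    from collections import Counter
    import re

    with open("sample.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-zäöüß]+", f.read().lower())

    counts = Counter(words)
    total = sum(counts.values())

    covered = 0
    for rank, (word, freq) in enumerate(counts.most_common(), start=1):
        covered += freq
        if rank in (10, 100, 1000):
            print(f"top {rank:>4} words cover {100 * covered / total:5.1f}% of the text")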
The upper segment of the curve contains the words usually found in the normal or everyday vocabulary of a language, and accounts for about 80 per cent of the actual volume of the material. Unfortunately, these terms consist mainly of articles which convey but little meaning; the meaning-bearing forms, and in particular the nouns, are represented by the tail of the curve. All languages exhibit this characteristic curve. Thus, in order to find the words conveying the major meaning in any text, we are concerned with the tail of the curve rather than with the large grouping of words at its head. Considering that this section of the curve is representative of a micro-glossary of a specific subject in the language, the words of this section will in general have one and only one meaning.

To verify this assumption, Dr. Oswald analyzed nearly a hundred papers in German on the subject of brain surgery. Technical nouns were abstracted from the first article; additional nouns were added from the second article, and so on through the complete series of texts employed, each succeeding text being chosen from a different field of brain surgery. The amazing fact developed that after the fourth article the glossary so derived covered an average of 80 per cent of all the technical nouns in each succeeding article. From this he constructed a micro-glossary that he considers representative of the field of brain surgery in the German language.

A similar glossary of non-technical nouns was also compiled from the same series of articles. The frequency curve of the non-technical nouns was the same as that of the technical nouns. In other words, brain surgeons are not only compelled to choose their technical nouns from a limited vocabulary, but their pattern of communication is so limited by practice and convention that even the range of non-technical nouns is predictable. We may generalize, although perhaps dangerously, that the same phenomenon will appear in all technical fields of a restricted nature.

The micro-glossary was employed in programming translations on the SWAC in cooperation with Dr. Harry D. Huskey, Assistant Director, National Bureau of Standards Institute for Numerical Analysis, University of California, Los Angeles. The translations so obtained conveyed the meaning of the original article with correlations of meaning better than 90 per cent, on the assumption that the problems of syntax and contextual modification had previously been solved. Even without this assumption, the translated articles, when presented to a specialist in the field in raw, un-edited form, conveyed the major portion of the meaning of the original.

The discussion that followed the paper clearly showed that the linguists working in languages other than German were in complete agreement as to the ease with which such micro-glossaries could be constructed. The engineers and scientists, from their knowledge of technical articles in their respective fields, indicated that the size of micro-glossaries in these fields would be as small, in comparison with the complete vocabulary of a language, as Dr. Oswald postulated. All agreed that the use of such micro-glossaries would enormously reduce the amount of memory required in a translating machine.
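Oswald's incremental procedure restates naturally as a computation. In the modern Python sketch below (illustrative only), each article is assumed to have been reduced already to its list of technical nouns - the hard linguistic step - and each new article is scored against the glossary built from its predecessors; the toy data stand in for the brain-surgery corpus.

    # A sketch of the incremental construction: nouns from each article
    # are added to the glossary, and each new article is first scored
    # against the glossary built from its predecessors. The input format
    # (one list of technical nouns per article) and the data are assumed.
    def glossary_coverage(articles):
        glossary = set()
        for number, nouns in enumerate(articles, start=1):
            if glossary:
                hits = sum(1 for noun in nouns if noun in glossary)
                print(f"article {number}: {100 * hits / len(nouns):.0f}% of nouns already in glossary")
            glossary.update(nouns)
        return glossary

    articles = [  # toy stand-in for the brain-surgery corpus
        ["tumor", "cortex", "lesion", "dura"],
        ["tumor", "cortex", "hemisphere", "lesion"],
        ["cortex", "ventricle", "tumor", "dura"],
        ["lesion", "tumor", "cortex", "ventricle", "dura"],
    ]
    micro_glossary = glossary_coverage(articles)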
In particular, the discussion centered on the isolation of nouns as the major meaning-bearing words of a language. A rough analysis was made of the language being used around the table, and it was quite evident that, in general, the verbs employed in conveying meaning through speech are in the present tense, and in the vast majority of cases the verb is a form of the verb "to be." Since information is adequately conveyed by speech, it seemed reasonable to the participants that a translation which ignored tenses and concentrated on the nouns which - in newspaper parlance - convey the who, what, when, where, and how of a statement would give a post-editor adequate raw material for producing a polished translation. Dr. Oswald was congratulated by the group for his work and his analysis of this phenomenon.

Prof. William E. Bull, Department of Spanish, University of California, Los Angeles, presented the second paper, entitled "Frequency Problems in Mechanical Translation." Prof. Bull's investigation of Spanish literature paralleled the investigations of Dr. Oswald. Running texts in Spanish literature, which employ a general rather than a restricted vocabulary, verify in detail the existence in general language of the same phenomenon that occurred in the restricted field of brain surgery; but Prof. Bull stressed that low-frequency, unpredictable terms often carry critically important meaning.

Prof. Bull exhibited numerous slides showing frequency counts of words, the frequency of occurrence of particular parts of speech, and frequency counts of words within the classification of a particular part of speech. He discussed in some detail the problem of determining syntactic connections in Spanish sentences, and also the type of work and the type of personnel required to extend knowledge in this field, not only for Spanish but also for other languages of interest. Prof. Bull's paper was in part abstracted from a monograph not yet published; he therefore did not distribute a written paper to the participants of the conference, and this material is at present unavailable.

Substantially, Prof. Bull's paper was a verification of the work of Dr. Oswald and indicated the fruitfulness of this approach to the problem of mechanical translation. A discussion of the means required to extend the investigations further showed clearly that the analysis could be facilitated by the use of punched cards. Such mechanization can enormously increase our knowledge of language structure, whereas the present handwriting and hand-sorting techniques are far too slow to aid materially in the solution of the problems of mechanical translation. Prof. Bull accepted the suggestion that he investigate the possibilities of employing punched cards as a means of extending the scope of his research.

The third paper was presented by Prof. Erwin Reifler and was entitled "General Mechanical Translation and Universal Grammar." Prof. Reifler has inaugurated a new school of linguistic investigation currently known as "Comparative Semantics." He has been investigating languages in order to discover such patterns of verbally conveying meaning, underlying the actual words and syntax of a language, as are common to all languages. Such a structure could form a "universal grammar." Mechanical translation poses the following question: "Is it possible to solve the problems of mechanical translation in such a way that one and the same preparation of the coded text may serve for mechanical translation into many different languages?"
The existence of a universal grammar would most assuredly assist in the solution of this problem, if such a grammar could be shown to exist. To date, the science of linguistics holds that no such universal grammar exists, but linguists do speak of language universals. In particular, many highly interesting cases of parallel development exist in the evolution of the expression of meaning among structurally unrelated languages. These universals may be used to re-adjust a language structure to form what Prof. Reifler terms "adjusted model target languages." This is in line with the recommendation that Prof. Stuart C. Dodd presented in his paper on "Model English." Use of the adjustment clearly simplifies the mechanical translation problem and the engineering required for its solution.

The discussion of the paper reinforced the conclusions of the discussion of Prof. Dodd's paper. It is encouraging to note that where Prof. Dodd restricted his considerations to English and hypothesized extension to other languages, Prof. Reifler, working from a completely different viewpoint and with another purpose in mind, arrived at the same conclusions as to the feasibility of regularizing a language, and further demonstrated our ability to regularize the major language groups of the world.

Dr. Huskey reviewed the problems encountered in programming German translations in collaboration with Dr. Oswald. The problems encountered were, to a certain extent, peculiar to the SWAC, which was the machine available for the translation. The basic problems were the construction of a vocabulary for entry into the machine, the derivation of a system of addressing to find particular units in the memory, and the syntactic programming required to obtain correct sentence structure in the output of the machine. These problems are basic to any machine translation. In general, the design of the machine will govern the type of programming required. The use of two types of memories seems desirable: the first having a short access time, and the second, containing words of infrequent use, having a longer access time. The arithmetic operations required for the construction of correct sentence structure will depend upon the arithmetic devices provided with the machine. The complexity of a machine constructed for the sole purpose of mechanical translation will be a function of the degree of accuracy required in the translation, which in turn will depend upon the allocation of time for pre-editing the material for machine input and for post-editing the machine output.

The second paper was presented by Mr. J. W. Forrester, Director of the Digital Computer Laboratory, M.I.T., on the subject of "Problems of Storage and Cost." This also was presented in the form of a talk, no written material being distributed. Mr. Forrester presented no cost items that are not known to computer and business machine engineers; his major purpose was to indicate to the linguists present the cost of the machine they were proposing. Techniques employing magnetic drums, magnetic tapes, and electrostatic storage devices, singly and in combination with one another, were presented for consideration. The most economical array consists of an intermediate memory and computing unit of low access time and a large-scale memory of long access time. The cost of the machine depends on the same considerations as those listed by Dr. Huskey.
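The two-memory arrangement can be mimicked in a few lines of modern Python (an illustration only; the store contents are invented): the small fast store is consulted first, and only on failure is the large slow store addressed.

    # A sketch of the two-memory arrangement: a small fast store for the
    # most frequent words is consulted first; only on failure is the
    # large slow store addressed. Contents are invented examples.
    FAST_STORE = {"und": "and", "der": "the", "ist": "is"}
    SLOW_STORE = {"hirnchirurgie": "brain surgery", "geschwulst": "tumour"}

    def look_up(word):
        if word in FAST_STORE:      # short access time
            return FAST_STORE[word]
        return SLOW_STORE.get(word, "<not in dictionary>")  # long access time

    for w in ["der", "geschwulst", "ist", "und"]:
        print(w, "->", look_up(w))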
The third paper was presented by Dr. A. Donald Booth, Director, Computation Laboratory, Birkbeck College, London. The title was changed from that listed in the program to "Some Methods of Mechanized Translation"; the paper was written in collaboration with Dr. R. H. Richens of the Biological Laboratories of the University of London. General principles of mechanical translation, as scheduled and programmed on the computer built by Dr. Booth for the University of London, were discussed. The use of punched card machinery was compared with the use of an automatic digital computer, and time comparisons were worked out that favored the automatic digital computing machinery by a ratio of at least 7 to 1. Examples of translations in the field of genetics from Albanian, Danish, Dutch, Finnish, French, German, Hungarian, Indonesian, Italian, Latin, Latvian, Norwegian, Polish, Portuguese, Rumanian, Spanish, Swedish, Turkish, Arabic, and Japanese were given. Usable translations were obtained in each of these cases, despite the limited storage available on Dr. Booth's computer. Post-editing was necessary in all cases, however, to produce a readable, though not necessarily more intelligible, translation.

The fourth paper was presented by Prof. Wm. E. Bull and was concerned with the possible future effect of the concept of mechanical translation on the teaching of foreign languages. Prof. Bull stated that the concept of mechanical translation necessitates a completely new approach to the problem of language teaching. An analogy was drawn between a machine into whose memory a vocabulary has not yet been incorporated and a student into whose brain such a vocabulary must likewise be introduced. The teaching of syntactic connections to both the machine and the student was discussed in terms of the programming required to obtain syntactically correct constructions from the memory storage. Prof. Bull reached the conclusion that the same considerations that govern the choice of vocabulary and the use of intermediate and large-scale memories in the machine could be advantageously incorporated into the teaching of languages as well as into the design of machines for mechanical translation.

Dr. Louis N. Ridenour was unfortunately unable to attend the conference, and his paper on "Learning Machines" was not presented. In his place, Prof. James W. Perry, Research Associate, Center for International Studies, M.I.T., presented a paper on "Machine Techniques for Index Searching and for Machine Translation." This paper was an elaboration of the talk that Prof. Perry gave at the opening public session of the conference. To a considerable extent, the concepts in the paper were based on Prof. Perry's experience in setting up coding and indexing systems for hand-sorted punched cards, and also on his experience with the library-cataloging machine developed by IBM. Fundamentally, Prof. Perry reached the same conclusions as to memory and access times as had previously been derived by the other participants in the conference.

Session VI - June 20, 1952
Chairman - Prof. Wm. E. Bull

The closing session of the conference was devoted to a consideration of organization for future research. A seven-man committee was organized at this session to act as coordinators and consultants for further work. The committee is composed of Dr. Yehoshua Bar-Hillel, as chairman; Prof. Leon Dostert, secretary; and Dr. Olaf Helmer, Dr. Harry D. Huskey, Prof. Erwin Reifler, and Mr. A. C. Reynolds, Jr., as members.
Dr. A. Donald Booth was placed on the committee as the European representative.

In the organization for future research, the conferees were asked to what degree they were interested in future work and in which areas they wished to participate.

Dr. Booth will continue the work he has already started with Dr. R. H. Richens at the University of London.

Prof. Bull is interested in the linguistic problems of translation, and as part of his research activity will continue his study of the Spanish language. He is not concerned with mechanical translation as such, but recognizes the necessity for, and the value of, his linguistic work in reaching this goal.

Dr. Dodd will continue his studies of regularizing languages and will determine the degree of extension possible in languages other than English.

Prof. Dostert intends to work actively, through the Institute of Languages and Linguistics, Georgetown University, on the derivation of principles for the use of machines in translation.

Dr. Olaf Helmer stated that the Rand Corporation is interested from the theoretical viewpoint, but in all probability will at present confine itself to theoretical work, secondary to its work on computers.

Dr. Huskey had no comment other than that he would continue to collaborate with Prof. Oswald.

Prof. Oswald is interested in extending the concept of micro-glossaries and in the study of syntactic relations. He intends to continue work on the programming of translation for machines.

Prof. Reifler is extremely interested in demonstrating the existence of universals in grammar, and in applying these universals to the problem of mechanical translation.

Dr. Bar-Hillel will continue his basic research in symbolic logic and its applications to the field of mechanical translation.

Dr. Jerome B. Wiesner, speaking for the M.I.T. staff present, stated that the Research Laboratory at M.I.T. is very much interested in the application of computer techniques to the problem of mechanical translation and that, if a concrete program were formulated, financial support could quite conceivably be forthcoming from the Research Laboratory.

Mr. Duncan Harkin of the Department of Defense stated that the Department of Defense is vitally interested in this problem and, like Dr. Wiesner, would be prepared to give financial backing if a concrete proposal for such a translation and a subsequent demonstration could be formulated.

Mr. Reynolds stated that IBM is interested in the application of its present punched card techniques and its computers to this problem, and as such would participate on the basis of an exchange of theoretical information with the members of the conference.

The conference closed on a note of optimism regarding the potentialities now known to be physically present in the concept of mechanical translation.

Appendix
Translation
by Warren Weaver
The attached memorandum on translation from one language to another, and on the possibility of contributing to this process by the use of modern computing devices of very high speed, capacity, and logical flexibility, has been written with one hope only - that it might possibly serve in some small way as a stimulus to someone else, who would have the techniques, the knowledge, and the imagination to do something about it. I have worried a good deal about the probable naïveté of the ideas here presented; but the subject seems to me so important that I am willing to expose my ignorance, hoping that it will be slightly shielded by my intentions.
{ "name": [ "Weaver, Warren" ], "affiliation": [ null ] }
null
null
Proceedings of the Conference on Mechanical Translation
1952-06-01
0
0
null
1) Preliminary Remarks

There is no need to do more than mention the obvious fact that a multiplicity of languages impedes cultural interchange between the peoples of the earth, and is a serious deterrent to international understanding. The present memorandum, assuming the validity and importance of this fact, contains some comments and suggestions bearing on the possibility of contributing at least something to the solution of the world-wide translation problem through the use of electronic computers of great capacity, flexibility, and speed. These suggestions will surely be incomplete and naïve, and may well be patently silly to an expert in the field - for the author is certainly not such.

During the war a distinguished mathematician whom we will call P, an ex-German who had spent some time at the University of Istanbul and had learned Turkish there, told W.W. the following story. A mathematical colleague, knowing that P had an amateur interest in cryptography, came to P one morning, stated that he had worked out a deciphering technique, and asked P to cook up some coded message on which he might try his scheme. P wrote out in Turkish a message containing about 100 words; simplified it by replacing the Turkish letters ç, ğ, ĭ, ö, ş, and ü by c, g, i, o, s, and u respectively; and then, using something more complicated than a simple substitution cipher, reduced the message to a column of five-digit numbers. The next day (and the time required is significant) the colleague brought his result back, and remarked that they had apparently not had success. But the sequence of letters he reported, when properly broken up into words, and when mildly corrected (not enough correction being required really to bother anyone who knew the language well), turned out to be the original message in Turkish.

The most important point, at least for present purposes, is that the decoding was done by someone who did not know Turkish, and did not know that the message was in Turkish. One remembers, by contrast, the well-known instance in World War I when it took our cryptographic forces weeks or months to determine that a captured message was coded from Japanese; and then took them a relatively short time to decipher it, once they knew what the language was.

During the war, when the whole field of cryptography was so secret, it did not seem discreet to inquire concerning details of this story; but one could hardly avoid guessing that this process made use of frequencies of letters, letter combinations, intervals between letters and letter combinations, letter patterns, etc., which are to some significant degree independent of the language used. This at once leads one to suppose that, in the manifold instances in which man has invented and developed languages, there are certain invariant properties which are, again not precisely but to some statistically useful degree, common to all languages. This may be, for all I know, a famous theorem of philology.

Indeed the well-known bow-wow, woof-woof, etc. theories of Müller and others for the origin of languages would of course lead one to expect common features in all languages, due to their essentially similar mechanism of development. And, in any event, there are obvious reasons which make the supposition a likely one. All languages - at least all the ones under consideration here - were invented and developed by men, in different places and perhaps at different times.
One would expect wide superficial differences; but it seems very reasonable to expect that certain basic, and probably very non-obvious, aspects be common to all the developments. It is just a little like observing that trees differ very widely in many characteristics, and yet there are basic common characteristics - certain essential qualities of "tree-ness" - that all trees share, whether they grow in Poland, or Ceylon, or Colombia. Furthermore (and this is the important point) a South American has, in general, no difficulty in recognizing that a Norwegian tree is a tree.

The idea of basic common elements in all languages later received support from a remark which the mathematician and logician Reichenbach made to W.W. Reichenbach also spent some time in Istanbul, and like many of the German scholars who went there, he was perplexed and irritated by the Turkish language. The grammar of that language seemed to him so grotesque that eventually he was stimulated to study its logical structure. This, in turn, led him to become interested in the logical structure of the grammar of several other languages; and, quite unaware of W.W.'s interest in the subject, Reichenbach remarked, "I was amazed to discover that for (apparently) widely varying languages, the basic logical structures have important common features." Reichenbach said he was publishing this, and would send the material to W.W.; but nothing has ever appeared.

W.W. put the question of mechanical translation to Professor Norbert Wiener of M.I.T. in a letter written early in 1947:

"One thing I wanted to ask you about is this. A most serious problem, for UNESCO and for the constructive and peaceful future of the planet, is the problem of translation, as it unavoidably affects the communication between peoples. Huxley has recently told me that they are appalled by the magnitude and the importance of the translation job.

"Recognizing fully, even though necessarily vaguely, the semantic difficulties because of multiple meanings, etc., I have wondered if it were unthinkable to design a computer which would translate. Even if it would translate only scientific material (where the semantic difficulties are very notably less), and even if it did produce an inelegant (but intelligible) result, it would seem to me worth while.

"Also knowing nothing official about, but having guessed and inferred considerable about, powerful new mechanized methods in cryptography - methods which I believe succeed even when one does not know what language has been coded - one naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'

"Have you ever thought about this? As a linguist and expert on computers, do you think it is worth thinking about?"

Professor Wiener, in a letter dated April 30, 1947, said in reply:

"Second - as to the problem of mechanical translation, I frankly am afraid the boundaries of words in different languages are too vague and the emotional and international connotations are too extensive to make any quasi mechanical translation scheme very hopeful. I will admit that basic English seems to indicate that we can go further than we have generally done in the mechanization of speech, but you must remember that in certain respects basic English is the reverse of mechanical and throws upon such words as 'get' a burden which is much greater than most words carry in conventional English.
"At the present time, the mechanization of language, beyond such a stage as the design of photoelectric reading opportunities for the blind, seems very premature. By the way, I have been fascinated by McCulloch's work on such apparatus, and, as you probably know, he finds the wiring diagram of apparatus of this kind turns out to be surprisingly like the microscopic analogy of the visual cortex in the brain."

To this, W.W. replied on May 9, 1947:

"I am disappointed but not surprised by your comments on the translation problem. The difficulty you mention concerning Basic seems to me to have a rather easy answer. It is, of course, true that Basic puts multiple use on an action verb such as 'get.' But even so, the two-word combinations such as 'get up,' 'get over,' 'get back,' etc., are, in Basic, not really very numerous. Suppose we take a vocabulary of 2,000 words, and admit for good measure all the two-word combinations as if they were single words. The vocabulary is still only four million (2,000 × 2,000): and that is not so formidable a number to a modern computer, is it?"

Thus this attempt to interest Wiener, who seemed so ideally equipped to consider the problem, failed to produce any real result. This must in fact be accepted as exceedingly discouraging, for if there are any real possibilities, one would expect Wiener to be just the person to develop them.

The idea has, however, been seriously considered elsewhere. The first instance known to W.W., subsequent to his own notion about it, was described in a memorandum dated February 12, 1948, written by Dr. A. D. Booth and Dr. R. H. Richens; it was concerned, however, only with the problem of mechanizing a dictionary. Their proposal was that one first "sense" the letters of a word, and have the machine see whether or not its memory contains precisely the word in question. If so, the machine simply produces the translation (which is the rub; of course "the" translation doesn't exist) of this word. If this exact word is not contained in the memory, then the machine discards the last letter of the word, and tries over. If this fails, it discards another letter, and tries again. After it has found the largest initial combination of letters which is in the dictionary, it "looks up" the whole discarded portion in a special "grammatical annex" of the dictionary. Thus confronted by "running," it might find "run" and then find out what the ending (n)ing does to "run." Thus their interest was, at least at that time, confined to the problem of the mechanization of a dictionary which would, in a reasonably efficient way, handle all forms of all words. W.W. has no more recent news of this affair.
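The procedure just described translates directly into code. The Python sketch below is a modern illustration, not part of the memorandum; the stem dictionary and the grammatical annex are invented examples.

    # A sketch of the dictionary procedure described above: look the word
    # up whole; failing that, strip letters from its end until a known
    # stem remains, then interpret the discarded ending from a grammatical
    # annex. Both tables are invented examples.
    STEMS = {"run": "laufen", "walk": "gehen"}
    ENDINGS = {"ning": "present participle", "s": "third person singular", "ed": "past tense"}

    def look_up(word):
        if word in STEMS:
            return STEMS[word], None
        for cut in range(1, len(word)):  # largest initial combination first
            stem, ending = word[:-cut], word[-cut:]
            if stem in STEMS and ending in ENDINGS:
                return STEMS[stem], ENDINGS[ending]
        return None, None

    print(look_up("running"))  # ('laufen', 'present participle')
    print(look_up("walks"))    # ('gehen', 'third person singular')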
Very recently the newspapers have carried stories of the use of one of the California computers as a translator. The published reports do not indicate much more than a word-into-word sort of translation, and there has been no indication, at least that W.W. has seen, of the proposed manner of handling the problems of multiple meaning, context, word order, etc.

This last-named attempt, or planned attempt, has already drawn forth inevitable scorn, Mr. Max Zeldner, in a letter to the Herald Tribune on June 13, 1949, stating that the most you could expect of a machine translation of the fifty-five Hebrew words which form the 23rd Psalm would start out "Lord my shepherd no I will lack," and would close "But good and kindness he will chase me all days of my life; and I shall rest in the house of Lord to length days." Mr. Zeldner points out that a great Hebrew poet once said that translation "is like kissing your sweetheart through a veil."

It is, in fact, amply clear that a translation procedure that does little more than handle a one-to-one correspondence of words can not hope to be useful for problems of "literary" translation, in which style is important, and in which the problems of idiom, multiple meanings, etc., are frequent.

Even this very restricted type of translation may, however, very well have important uses. Large volumes of technical material might, for example, be usefully, even if not at all elegantly, handled this way. Technical writing is unfortunately not always straightforward and simple in style; but at least the problem of multiple meaning is enormously simpler. In mathematics, to take what is probably the easiest example, one can very nearly say that each word, within the general context of a mathematical article, has one and only one meaning.

The foregoing remarks about the computer translation schemes which have been reported do not, however, seem to W.W. to give an appropriately hopeful indication of what the future possibilities may be. Those possibilities should doubtless be indicated by persons who have special knowledge of languages and of their comparative anatomy. But again at the risk of being foolishly naïve, it seems interesting to indicate four types of attack, on levels of increasing sophistication.

First, let us think of a way in which the problem of multiple meaning can, in principle at least, be solved. If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words. "Fast" may mean "rapid"; or it may mean "motionless"; and there is no way of telling which. But if one lengthens the slit in the opaque mask until one can see not only the central word in question but also, say, N words on either side, then, if N is large enough, one can unambiguously decide the meaning of the central word. The practical question is: what minimum value of N will, at least in a tolerable fraction of cases, lead to the correct choice of meaning for the central word? This is a question concerning the statistical semantic character of language which could certainly be answered, at least in some interesting and perhaps in a useful way.

Clearly N varies with the type of writing in question. It may be zero for an article known to be about a specific mathematical subject. It may be very low for chemistry, physics, engineering, etc. If N were equal to 5, and the article or book in question were on some sociological subject, would there be a probability of .95 that the choice of meaning would be correct 98% of the time? Doubtless not: but a statement of this sort could be made, and values of N could be determined that would meet given demands.

Ambiguity, moreover, attaches primarily to nouns, verbs, and adjectives; and actually (at least so I suppose) to relatively few nouns, verbs, and adjectives. Here again is a good subject for study concerning the statistical semantic character of languages. But one can imagine using a value of N that varies from word to word, is zero for "he," "the," etc., and needs to be large only rather occasionally. Or would it determine unique meaning in a satisfactory fraction of cases to examine not the 2N adjacent words, but perhaps the 2N adjacent nouns? What choice of adjacent words maximizes the probability of correct choice of meaning, and at the same time leads to a small value of N?

Thus one is led to the concept of a translation process in which, in determining meaning for a word, account is taken of the immediate (2N-word) context. It would hardly be practical to do this by means of a generalized dictionary which contains all possible phrases 2N+1 words long: for the number of such phrases is horrifying, even to a modern electronic computer. But it does seem likely that some reasonable way could be found of using the micro-context to settle the difficult cases of ambiguity.
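The 2N-word scheme, too, is easy to sketch in modern Python (an illustration, not part of the memorandum): each sense of an ambiguous word carries a list of cue words, and the sense whose cues best match the window of N words on either side is chosen. The senses and cues here are invented; determining real cue lists and workable values of N is precisely the empirical study proposed above.

    # A sketch of meaning-choice from the 2N-word micro-context: each
    # sense of an ambiguous word carries cue words, and the sense whose
    # cues best match the N words on either side wins. Senses and cues
    # are invented examples.
    SENSES = {
        "fast": {
            "rapid":      {"run", "car", "speed", "train"},
            "motionless": {"held", "stuck", "rope", "anchor"},
        }
    }

    def choose_sense(words, i, n):
        """Pick a sense for words[i] from the N words on either side."""
        window = set(words[max(0, i - n):i] + words[i + 1:i + 1 + n])
        cues = SENSES[words[i]]
        return max(cues, key=lambda sense: len(cues[sense] & window))

    sentence = "the rope held the boat fast against the anchor".split()
    print(choose_sense(sentence, sentence.index("fast"), n=4))  # motionless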
A more general basis for hoping that a computer could be designed to cope with a useful part of the problem of translation is to be found in the statistical theory of communication developed by Claude Shannon.* Probably only Shannon himself, at this stage, can be a good judge of the possibilities in this direction; but, as was expressed in W.W.'s original letter to Wiener, it is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the "Chinese code." If we have useful methods for solving almost any cryptographic problem, may it not be that with proper interpretation we already have useful methods for translation?

* For a very simplified version, see "The Mathematics of Communication," by Warren Weaver, Scientific American, Vol. 181, No. 1, July 1949, pp. 11-15. Shannon's original papers, as published in the Bell System Technical Journal, and a longer and more detailed interpretation by W.W., are about to appear as a memoir on communication, published by the University of Illinois Press. A book by Shannon on this subject is also to appear soon.

Warren Weaver
Carlsbad, New Mexico
July 15, 1949
null
null
null
null
Main paper: translation l) preliminary remarks: There is no need to do more than mention the obvious fact that a multiplicity of language impedes cultural interchange between the peoples of the earth, and is a serious deterrent to international understanding.The present memorandum, assuming the validity and importance of this fact, contains some comments and suggestions bearing on the possibility of contributing at least something to the solution of the world-wide translation problem through the use of electronic computers of great capacity, flexibility, and speed.The suggestions of this memorandum will surely be incomplete and naïve, and may well be patently silly to an expert in the field -for the author is certainly not such.During the war a distinguished mathematician whom we will call P, an ex-German who had spent some time at the University of Istanbul and had learned Turkish there, told W.W. the following story.A mathematical colleague, knowing that P had an amateur interest in cryptography, came to P one morning, stated that he had worked out a deciphering technique, and asked P to cook up some coded message on which he might try his scheme. P wrote out in Turkish a message containing about 100 words; simplified it by replacing the Turkish letters ç, ğ, ĭ, ö, ş, and ü by c, g, i, o, s, and u respectively; and then, using something more complicated than a simple substitution cipher, reduced the message to a column of five digit numbers. The next day (and the time required is significant) the colleague brought his result back, and remarked that they had apparently not had success. But the sequence of letters he reported, when properly broken up into words, and when mildly corrected (not enough correction being required really to bother anyone who knew the language well), turned out to be the original message in Turkish.The most important point, at least for present purposes, is that the decoding was done by someone who did not know Turkish, and did not know that the message was in Turkish. One remembers, by contrast, the well-known instance in World War I when it took our cryptographic forces weeks or months to determine that a captured message was coded from Japanese; and then took them a relatively short time to decipher it, once they knew what the language was.During the war, when the whole field of cryptography was so secret, it did not seem discreet to inquire concerning details of this story; but one could hardly avoid guessing that this process made use of frequencies of letters, letter combinations, intervals between letters and letter combinations, letter patterns, etc., which are to some significant degree independent of the language used. This at once leads one to suppose that, in the manifold instances in which man has invented and developed languages, there are certain invariant properties which are, again not precisely but to some statistically useful degree, common to all languages. This may be, for all I know, a famous theorem of philology.Indeed the well-known bow-wow, woof-woof, etc. theories of Müller and others, for the origin of languages, would of course lead one to expect common features in all languages, due to their essentially similar mechanism of development. And, in any event, there are obvious reasons which make the supposition a likely one. All languages -at least all the ones under consideration here -were invented and developed by men; and all men, and perhaps at different times. 
One would expect wide superficial differences; but it seems very reasonable to expect that certain basic, and probably very non-obvious, aspects be common to all the developments. It is just a little like observing that trees differ very widely in many characteristics, and yet there are basic common characteristics -certain essential qualities of "tree-ness," -that all trees share, whether they grow in Poland, or Ceylon, or Colombia. Furthermore (and this is the important point) a South American has, in general, no difficulty in recognizing that a Norwegian tree is a tree.The idea of basic common elements in all languages later received support from a remark which the mathematician and logician Reichenbach made to W.W. Reichenbach also spent some time in Istanbul, and like many of the German scholars who went there, he was perplexed and irritated by the Turkish language. The grammar of that language seemed to him so grotesque that eventually he was stimulated to study its logical structure. This, in turn, led him to become interested in the logical structure of the grammar of several other languages; and quite unaware of W.W.'s interest in the subject, Reichenbach remarked, "I was amazed to discover that for (apparently) widely varying languages, the basic logical structures have important common features." Reichenbach said he was publishing this, and would send the material to W.W.; but nothing has ever appeared. "One thing I wanted to ask you about is this. A most serious problem, for UNESCO and for the constructive and peaceful future of the planet, is the problem of translation, as it unavoidably affects the communication between peoples. Huxley has recently told me that they are appalled by the magnitude and the importance of the translation job."Recognizing fully, even though necessarily vaguely, the semantic difficulties because of multiple meanings, etc., I have wondered if it were unthinkable to design a computer which would translate. Even if it would translate only scientific material (where the semantic difficulties are very notably less), and even if it did produce an inelegant (but intelligible) result, it would seem to me worth while."Also knowing nothing official about, but having guessed and inferred considerable about, powerful new mechanized methods in cryptography -methods which I believe succeed even when one does not know what language has been codedone naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say "This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode."Have you ever thought about this? As a linguist and expert on computers, do you think it is worth thinking about?" Professor Wiener, in a letter dated April 30, 1947, said in reply:"Second -as to the problem of mechanical translation, I frankly am afraid the boundaries of words in different languages are too vague and the emotional and international connotations are too extensive to make any quasi mechanical translation scheme very hopeful. I will admit that basic English seems to indicate that we can go further than we have generally done in the mechanization of speech, but you must remember that in certain respects basic English is the reverse of mechanical and throws upon such words as 'get,' a burden, which is much greater than most words carry in conventional English. 
At the present time, the mechanization of language, beyond such a stage as the design of photoelectric reading opportunities for the blind, seems very premature. By the way, I have been fascinated by McCulloch's work on such apparatus, and, as you probably know, he finds the wiring diagram of apparatus of this kind turns out to be surprisingly like the microscopic analogy of the visual cortex in the brain."To this, W.W. replied on May 9, 1947: "I am disappointed but not surprised by your comments on the translation problem. The difficulty you mention concerning Basic seems to me to have a rather easy answer. It is, of course, true that Basic puts multiple use on an action verb such as 'get.' But even so, the two-word combinations such as 'get up,' 'get over,' 'get back,' etc., are, in Basic, not really very numerous. Suppose we take a vocabulary of 2,000 words, and admit for good measure all the two-word combinations as if they were single words. The vocabulary is still only four million: and that is not so formidable a number to a modern computer, is it?" Thus this attempt to interest Wiener, who seemed so ideally equipped to consider the problem, failed to produce any real result. This must in fact be accepted as exceedingly discouraging, for if there are any real possibilities, one would expect Wiener to be just the person to develop them.The idea has, however, been seriously considered elsewhere.The first instance known to W.W., subsequent to his own notion about it, was described in a memorandum dated February 12, 1948, written by ; but only with the problem of mechanizing a dictionary. Their proposal then was that one first "sense" the letters of a word, and have the machine see whether or not its memory contains precisely the word in question. If so, the machine simply produces the translation (which is the rub; of course "the" translation doesn't exist) of this word. If this exact word is not contained in the memory, then the machine discards the last letter of the word, and tries over. If this fails, it discards another letter, and tries again. After it has found the largest initial combination of letters which is in the dictionary, it "looks up" the whole discarded portion in a special "grammatical annex" of the dictionary. Thus confronted by "running," it might find "run" and then find out what the ending (n) ing does to "run."Thus their interest was, at least at that time, confined to the problem of the mechanization of a dictionary which in a reasonably efficient way would handle all forms of all words. W.W. has no more recent news of this affair.Very recently the newspapers have carried stories of the use of one of the California computers as a translator. The published reports do not indicate much more than a word-into-word sort of translation, and there has been no indication, at least that W.W. has seen, of the proposed manner of handling the problems of multiple meaning, context, word order, etc.This last named attempt, or planned attempt, has already drawn forth inevitable scorn; Mr. Max Zeldner, in a letter to the Herald Tribune on June 13, 1949, stating that the most you could expect of a machine translation of the fifty-five Hebrew words which form the 23rd Psalm would start out "Lord my shepherd no I will lack," and would close "But good and kindness he will chase me all days of my life; and I shall rest in the house of Lord to length days." Mr. Zeldner points out that a great Hebrew poet once said that translation "is like kissing your sweetheart through a veil." 
It is, in fact, amply clear that a translation procedure that does little more than handle a one-to-one correspondence of words cannot hope to be useful for problems of "literary" translation, in which style is important, and in which the problems of idiom, multiple meanings, etc., are frequent.

Even this very restricted type of translation may, however, very well have important use. Large volumes of technical material might, for example, be usefully, even if not at all elegantly, handled this way. Technical writing is unfortunately not always straight-forward and simple in style; but at least the problem of multiple meaning is enormously simpler. In mathematics, to take what is probably the easiest example, one can very nearly say that each word, within the general context of a mathematical article, has one and only one meaning.

The foregoing remarks about computer translation schemes which have been reported do not, however, seem to W.W. to give an appropriately hopeful indication of what the future possibilities may be. Those possibilities should doubtless be indicated by persons who have special knowledge of languages and of their comparative anatomy. But again at the risk of being foolishly naïve, it seems interesting to indicate four types of attack, on levels of increasing sophistication.

First, let us think of a way in which the problem of multiple meaning can, in principle at least, be solved. If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words. "Fast" may mean "rapid"; or it may mean "motionless"; and there is no way of telling which. But if one lengthens the slit in the opaque mask, so that one can see not only the central word in question but also, say, N words on either side, then, if N is large enough, one can unambiguously decide the meaning of the central word. The practical question is what minimum value of N will, at least in a tolerable fraction of cases, lead to the correct choice of meaning. This is a question concerning the statistical semantic character of language which could certainly be answered, at least in some interesting and perhaps in a useful way. Clearly N varies with the type of writing in question. It may be zero for an article known to be about a specific mathematical subject. It may be very low for chemistry, physics, engineering, etc. If N were equal to 5, and the article or book in question were on some sociological subject, would there be a probability of .95 that the choice of meaning would be correct 98% of the time? Doubtless not: but a statement of this sort could be made, and values of N could be determined that would meet given demands.

Ambiguity, moreover, attaches primarily to nouns, verbs, and adjectives; and actually (at least so I suppose) to relatively few nouns, verbs, and adjectives. Here again is a good subject for study concerning the statistical semantic character of languages. But one can imagine using a value of N that varies from word to word, is zero for "he," "the," etc., and which needs to be large only rather occasionally. Or would it determine unique meaning in a satisfactory fraction of cases, to examine not the 2N adjacent words, but perhaps the 2N adjacent nouns? What choice of adjacent words maximizes the probability of correct choice of meaning, and at the same time leads to a small value of N?

Thus one is led to the concept of a translation process in which, in determining meaning for a word, account is taken of the immediate (2N word) context. It would hardly be practical to do this by means of a generalized dictionary which contains all possible phrases 2N+1 words long: for the number of such phrases is horrifying, even to a modern electronic computer.
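The 2N-word context idea can be made concrete with a small sketch. Everything in it is assumed for illustration: the sense inventory for "fast", the cue words attached to each sense, and the scoring rule (counting cue words that appear in the window) are inventions, not anything proposed in the memorandum itself.

```python
# Choose a sense for words[i] by examining the 2N adjacent words
# (N on each side) and counting cue-word overlaps. Toy data throughout.
SENSES = {"fast": {"rapid":      {"ran", "car", "train", "very"},
                   "motionless": {"held", "stuck", "moored", "tied"}}}

def choose_sense(words, i, n=2):
    """Pick the sense whose cue words overlap most with the 2N-word window."""
    window = set(words[max(0, i - n):i] + words[i + 1:i + 1 + n])
    cues = SENSES.get(words[i], {})
    if not cues:
        return None                      # word not ambiguous, or not listed
    return max(cues, key=lambda sense: len(cues[sense] & window))

sentence = "the boat was held fast to the dock".split()
print(choose_sense(sentence, sentence.index("fast")))  # -> 'motionless'
```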
But it does seem likely that some reasonable way could be found of using the micro-context to settle the difficult cases of ambiguity.

A more general basis for hoping that a computer could be designed which would cope with a useful part of the problem of translation is to be found in the mathematical theory of communication developed by Shannon, and in his closely related work on cryptography.* Probably only Shannon himself, at this stage, can be a good judge of the possibilities in this direction; but, as was expressed in W.W.'s original letter to Wiener, it is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the "Chinese code." If we have useful methods for solving almost any cryptographic problem, may it not be that with proper interpretation we already have useful methods for translation?

* For a very simplified version, see "The Mathematics of Communication," by Warren Weaver, Scientific American, Vol. 181, No. 1, July 1949, pp. 11-15. Shannon's original papers, as published in the Bell System Technical Journal, and a longer and more detailed interpretation by W.W., are about to appear as a memoir on communication, published by the University of Illinois Press. A book by Shannon on this subject is also to appear soon.

Warren Weaver
Carlsbad, New Mexico
July 15, 1949
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
872
0
null
null
null
null
null
null
null
null
99001a274718e0a23c849257892a6dd36ab92e70
244077696
null
Organisation and Method in Mechanical Translation Work
Certain postulates are posited as a basis for the orientation and organization of research in mechanical translation. They are the following: 1. The essential problem of mechanical translation is the establishment of acceptable correlation between the signs of one system (the source language) and those of another (the target language). 2. The signs of natural language, unlike the symbols of such systems as mathematics or chemistry, may be incomplete and multivalent. 3. The basic problem is the establishment of codes of systematic affixes to confer completeness and fixity in order to achieve acceptable correlation. 4. These systems of affixes must be such as to reflect the operations of translation and be programmable. Based on these postulates, a group research program calls for certain organization and methods. Those in the Georgetown research project are as follows: 1. The recognition that at this phase of the research the primary problem is one of linguistic and translation analysis. 2. The essential direction of the research is placed in the hands of a group of scientific linguists of diverse competences. 3. The linguists meet regularly in a seminar in which specific problems are presented by a member of the group for discussion, review, and comments by the other members. As a result of the examination of specific problems, certain conclusions are formulated, some of them preliminary in character. 4. Under the guidance of the committee of linguists a group of research assistants with at least Master's standing in linguistics, who participate in the seminar, carry out the detailed research based on such conclusions, in conjunction with a group of bilingual translation analysts. 5. The translation analysis is focused on material already translated in the field of chemistry. From this corpus, the material
{ "name": [ "Dostert, L. E." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Mechanical Translation
1956-10-01
0
0
null
1. The essential problem of mechanical translation is the establishment of acceptable correlation between the signs of one system (the source language) and those of another (the target language). 2. The signs of natural language, unlike the symbols of such systems as mathematics or chemistry, may be incomplete and multivalent. 3. The basic problem is the establishment of codes of systematic affixes to confer completeness and fixity in order to achieve acceptable correlation. 4. These systems of affixes must be such as to reflect the operations of translation and be programmable. Based on these postulates, a group research program calls for certain organization and methods. Those in the Georgetown research project are as follows: 1. The recognition that at this phase of the research the primary problem is one of linguistic and translation analysis. 2. The essential direction of the research is placed in the hands of a group of scientific linguists of diverse competences. 3. The linguists meet regularly in a seminar in which specific problems are presented by a member of the group for discussion, review, and comments by the other members. As a result of the examination of specific problems, certain conclusions are formulated, some of them preliminary in character. 4. Under the guidance of the committee of linguists a group of research assistants with at least Master's standing in linguistics, who participate in the seminar, carry out the detailed research based on such conclusions, in conjunction with a group of bilingual translation analysts. 5. The translation analysis is focused on material already translated in the field of chemistry. From this corpus, the material is analyzed systematically for eventual coding on three broad levels conjointly: 1) lexical, 2) morphological, and 3) syntactic. 6. The lexical material is culled from the actual translated text. The decision items within a given context are identified. The contextual cue or cues to the choice decision are indicated. 7. In addition to establishing solutions for lexical multivalence, procedures are being developed to handle problems in morphology and syntax as they arise in the material. At this stage in the research only preliminary formulations exist. 8. In carding the lexical and grammatical data, an attempt is made to symbolize the categorization by a code as follows:

The seminar will work as required by emerging experience with the assistance of consultants in the fields of coding techniques, computational techniques, symbolic logic, and mathematics.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
820
0
null
null
null
null
null
null
null
null
42b86f3b83f075155e3a16733126f9eaf68748c5
34116601
null
The Requirements of Lexical Storage
Lexical Search

In recent studies of Machine Translation a good deal of attention has been paid to translation, but very little to machine. There seems to be a feeling that the machine will be more or less like existing computers. Such an assumption must be taken with caution.
{ "name": [ "King, Gilbert W." ], "affiliation": [ null ] }
null
null
Research in Machine Translation
1957-04-01
0
0
null
There are two ways to carry out computations on a machine. One is to construct the required result by algorithms; for example, the quantity sin x can be calculated by a repetitive formula equivalent to a power series. The other is to rely heavily on table look-up. In present-day computers the latter method is almost extinct, and in Mechanical Translation we must strive as much as possible toward algorithmic methods.

Inasmuch as it seems impossible to construct the meaning of a word from its spelling or phonemes, except in the few cases of onomatopoeia, Mechanical Translation must always rely heavily on table look-up rather than algorithmic methods. Furthermore, a word not only has its dictionary meaning, but also the adhesion of a great deal of psychological and unexpressed descriptive material. A "sack" and a "coffin" are both "containers", but it would take a paragraph to modify the word "container" to make it mean either "sack" or "coffin". Thus in order for the machine to choose the most appropriate word of this category, we must store away additional material with each word to aid, to the degree of sophistication required, in the ultimate selection.

So although we expect to look up meanings associated with words, we do not wish to have an automatic dictionary, but to de-emphasize this approach, and try to introduce as many algorithmic techniques making use of context as possible. Thus we should consider "lexical search" rather than "dictionary look-up".

The extent of the lexical search is determined not only by the theory, but by practical limitations. We now know that Mechanical Translation is possible, probably to as high a degree of refinement as we wish, so what are our objectives now? Are we to pursue Mechanical Translation as an academic stunt? Do we expect to turn out useful translations, but presume they will always be crude and inelegant? Are we to provide a means to translate a specific field such as science or technology, or all types of literature?

The first is not enough, the last beyond our capabilities now. But the second is possible in 1958. In fact our objective should be to translate scientific or technical material in accurate readable form, with one proviso. Such an effort would be of great value to the nation only if it can be done as fast as foreign presses print the material. The problem of lexical search is what is known in the computer field as a "real-time problem".

No hardware yet exists to carry out Mechanical Translation in real time. The current output of the leading nations is of the order of 3 x 10^6 pages per year, or 10^9 words per year. In the next year or two we may expect text readers to be developed which will be able to read printed material at the rate of 1000 characters/sec. With 10^7 sec in a working year and 6 characters/word, this amounts to 1.5 x 10^9 words/year, of the order of magnitude of the rate of publication. Thus we can expect the rate of input to the machine to be adequate.

The corresponding rate at which the lexical search must be carried on is 10^9 words/year, or 100 words/sec. Thus the first requirement on this memory unit of the machine (which we shall call Store I) is that it must have 10 millisec random access time to every entry. It will take one-fifth of a second to look up all the words in an average sentence.

The size of the store will depend on the number of words in a language, and the amount of lexical material to be associated with each word in an entry. There are some 6 x 10^4 words in a dictionary.
However, at the present state of translation theory we can hardly afford to neglect the clues offered by inflexional forms, so the total number of source words which must be in the store will be more like 10^6. At present we average about 250 bits (6 bits to define a character) in an entry, and more sophisticated translators will require about 10^3 bits. Thus the second requirement of the store is that its capacity must ultimately be about 10^9 bits.

There is a third design parameter of the store which must be established to make the translating system efficient. Access to an entry has been established at 10 milliseconds; the size of the entry at 10^3 bits. This material must be read out in a reasonable fraction, say 10%, of the access time. Thus the third requirement of the store is that its read-out rate must be 10^6 bits/sec.

The problem of lexical storage involves more than the mere storage of, and access to, lexical material. A good translation also involves the interrelation of the lexical material found, on the basis of syntactics and semantics. The first disgorgement of the store is only raw material, on which a logical unit of the machine has to work. (Here "logic" means that mathematical or symbolic logic which can actually be done with a computer. Some "logical" operations are purely housekeeping details of the mechanical operations of the computer.) Rough estimates based on current theories of Mechanical Translation would indicate that some 10^4 logical operations may be required per sentence to straighten out the disgorged material into a good translation. Even if only a fraction of these operations were requested for further look-up (as many theories demand), the restriction of producing output as fast as material is fed in makes it imperative that no further look-ups in the large store are permitted during logical processing.

This means that the disgorged material on the first look-up (from Store I) should be necessary and sufficient for analysis of the sentence (or paragraph). In other words, the output of the first look-up operation creates a "microglossary" sufficient for the analysis of the sentence. This selection from Store I should be dumped in a fast memory (called Store II) for logical processing. With 20 words per sentence on the average, and 10^3 bits of output per word, the requirement on capacity for the intermediate memory is of the order of 10^5 bits (100 thousand-bit computer "words").

We have seen that the rates of flow, from source through input equipment and in table look-up in Store I, are all well matched at 0.2 sec per 20-word sentence. High-speed memories of 10^5 bits capacity are currently available with a 10 microsecond random access. Hence 0.2 ÷ 10^-5, or 2 x 10^4, logical operations (computer-type) may be made with the microglossary. Since some 20 computer-type housekeeping operations are normally required for one purely logical operation (e.g. a comparison of endings), about 10^3 of the latter are permitted per sentence. Of these perhaps 10^2 may be further table look-ups (in the fast memory).
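As a check on the arithmetic, the design parameters just derived can be recomputed directly; the figures below are the ones quoted in the text, not independent estimates.

```python
# Back-of-envelope recomputation of the Store I / Store II parameters.
words_per_year   = 1e9      # world rate of publication (quoted above)
secs_per_year    = 1e7      # working seconds in a year
lookup_rate      = words_per_year / secs_per_year        # 100 words/sec
access_time      = 1 / lookup_rate                       # 0.01 s = 10 ms per entry

entries          = 1e6      # inflected source forms to be stored
bits_per_entry   = 1e3      # "sophisticated" entry size
store_I_capacity = entries * bits_per_entry              # 1e9 bits
readout_rate     = bits_per_entry / (0.1 * access_time)  # 1e6 bits/sec

sentence_budget  = 20 * access_time                      # 0.2 s per 20-word sentence
fast_access      = 1e-5                                  # 10 microsecond fast memory
ops_per_sentence = sentence_budget / fast_access         # 2e4 computer-type operations
print(lookup_rate, access_time, store_I_capacity, readout_rate, ops_per_sentence)
```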
This facility seems adequate for current Mechanical Translation theories.

This scheme of setting up a microglossary for each sentence imposes not only the above physical requirements on the intermediate memory, but also begins to define the logical elements necessary in the entries.

At this point in the machine it is actually unnecessary, and is in fact premature, to have any translation into the target language. We may cover most of the theoretical approaches by defining the contents of the entry in Store I as clues. That is, given a sequence of words in the source material

S_1 S_2 ... S_i ...

the first operation is to look up in Store I lexical information concerning the words S_i (or word sequences S_i, S_i+1, ..., S_i+k). The output will be a sequence of terms

S_1 A_1 B_1, S_2 A_2 B_2, ..., S_i A_i B_i, ...

where A_i refers to characters (possibly binary) giving syntactical information (such as the part of speech), and B_i refers to characters giving semantic information, e.g. "this is a word from physics"; but not the translation.

The sequences (S_i A_i B_i) form an expanded sentence, and form the microglossary in the intermediate fast memory of the logic portion of the computer. Here the sequences A_i, B_i are examined and a new set of characters C_i are constructed and assigned to each S_i. Note that the i-th C_i, assigned to S_i, is in fact a function of all the preceding and succeeding A_j's and B_j's (called the "local" or "minor" context). The determiners (A's and B's) for the C's may not only be in the sentence, but possibly (especially for pronouns) lie in previous sentences, or even in the title (field), called the "Major Context".

These logical operations will consist of two groups. The first will be a syntactical analysis of the sequence A_1, A_2, ..., A_i (without the S_i's or B_i's). This is like a schoolboy's diagramming of the sentence, which finds the relations between words. According to the Cambridge Language Research Group this analysis can be made by algebraic lattice theory, which is highly algorithmic.

To give a very elementary example from French, let all nouns have A = α, all verbs A = β, and the word S = le have A = α + β. Here the plus symbol is the logical "or" operation. There will be as many terms in A as there are multiple meanings for the S. Then the syntactical analysis of "le" followed by a noun would involve the Boolean multiplication, which is easy to mechanize,

(α + β)(α) = α

The result, α, would constitute a character of the C for "le", so that the output SC for "le" would be leα. The augmented word leα has a unique meaning, "the". Note the actual meanings of the augmented words S_i C_i are not yet at hand, and are to be found by a third operation in the machine, to be described below.

In the case of "le" followed by a verb, the multiplication is

(α + β)(β) = β

and the output SC for "le" would now be leβ. The augmented word leβ has the unique meaning "it". (The other meaning, "him", would be assigned to leα, derived from other A's.)

A more complicated example would be the phrase ". . . penetrée d'abord de ..." in which the logical operations on the A's for the four words (d'abord not being treated here as an idiom) should show that "de", not "d'", is modified by "penetrée" (and is ultimately to be translated as "by", not "of").

According to the MIT group these operations will require another series of table look-ups.
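The Boolean multiplication just described is easy to exhibit in code. The sketch below represents the A-codes as Python sets and intersects them; the code letters ("a" standing in for α, "b" for β) and the extra example word "tombe" are assumptions for illustration, not data from the paper.

```python
# Syntactic disambiguation of "le" by Boolean multiplication (set intersection).
A_CODES = {"le": {"a", "b"},      # ambiguous: alpha + beta
           "livre": {"a"},        # noun
           "tombe": {"b"}}        # verb (hypothetical illustration word)

def augment(word, follower):
    """Intersect the codes of `word` with those of the following word and
    append the surviving code, producing the augmented word S C."""
    c = A_CODES[word] & A_CODES[follower]
    return word + "".join(sorted(c))

print(augment("le", "livre"))  # 'lea' -> unique meaning "the"
print(augment("le", "tombe"))  # 'leb' -> unique meaning "it"
```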
The storage involved probably does not require high capacity, but will require fast access, and be similar to Store II.

When the permissible connections between words have been established by purely syntactical analysis, by means of the A's, a second series of operations, involving the B's, is carried out.

Consider the following three elementary examples in French. 1) . . . le livre est à lui . . . 2) . . . il est pour travailler . . . 3) . . . pour . . . In sentence 1) "est" has a specific meaning, "belongs", the clue for this selection being "à", which has its normal meaning "to". In sentence 2) "est" has its most probable meaning, "is", but "pour" is to mean "about to" here. In sentence 3) "pour" is to have its most probable meaning, "for". We shall not complicate matters by giving sentences where "à" is controlled by other words giving it meanings other than "to", but remember this in the formulation. To handle the multiple meanings for the three words est, à and pour, whose clues are specific words elsewhere in the sentence, rather than purely syntactical, we assign to these words B's which are logical sums of characters a, b, c, ... The output of the logical unit is then a sequence of

S_1 C_1, S_2 C_2, ..., S_i C_i, ...

The point here is that we are no longer concerned with raw words S_i of the source language, but augmented words S_i C_i, and these augmented words, if our method of construction of the C_i's is adequate, have a unique meaning.

At this point in the machine we should have then solved the multiple-meaning problem with the aid of the syntactical and semantic context.

We now come to the final stage of the machine, which again is a memory look-up operation. We enter with the individual augmented words S_i C_i and find a single target equivalent T_i. (Note S_i C_i may stand for a string of words S_k . . . S_m, from which some S_k C_k have no target equivalent.)

The statement that the machine has a second look-up in a large store for each word does not violate our precept that time does not permit more than one look-up, because this operation is on another store, and can be done in the interval when the preparatory look-up for the next sentence is going on. (It is reasonable to suppose the intermediate Store II is flexible enough to be accepting S_i A_i B_i from the first memory for the following sentence while simultaneously supplying S_i C_i for look-up in the last memory for the sentence in hand.)

Nevertheless the speed of the last large memory (Store III) must be such as not to delay the overall flow of information through the system.

Since the logical operations have only made a one-to-one correspondence between the S_i A_i B_i and S_i C_i, the number of look-ups for the sentence remains the same. Thus the requirement on the last store in regard to access time is the same as for Store I.

In simple Mechanical Translation theories there are on the average 3 multiple meanings for each source language word, so the number of entries in Store III will be three times that of Store I. Further, the length of the address, S_i C_i, will be about twice the length of the address S_i used in Store I. On the other hand the information sought is only a simple target equivalent, averaging 6 characters, or less than 50 bits. The length of an entry will thus be about 150 bits. The total capacity of Store III will be about one-third that of Store I.
Nevertheless, in view of the rudimentary state of the theory, for the following reasons one should consider Store III as having essentially the same capacity as Store I.

It seems that a more advanced theory of Mechanical Translation, or more accurately, of mechanically understanding the written word, could be developed along these lines. The semantic information B_i associated with each input word S_i in the first lexical search could be elaborated in great detail; so much so that the output of the logical unit could dispense with the symbols S_i of the source words, and be merely a string of C_i's,

C_1 C_2 ... C_i ...

This presupposes that the B_i's, and the analysis of relationships by means of the A_i's, are sufficiently detailed that the sequence of C_i's has retained all the content and relationships of the whole idea, in some coded form related to symbolic logic. In this event Store III would be a kind of thesaurus, for which the input is a sequence of symbols, C_i, associating in a Boolean function a large number of ideas and relations which must be stated in the output, as determined from the initial contextual analysis; and for it we wish the machine to choose the most appropriate word. This word is not necessarily the one we would find in a dictionary, nor is it a synonym, but a particularly cogent word for the idea in the particular context. In passing we remark that the C_i's themselves constitute a language analogous to symbolic logic or the proposed "ruly English", but are unsatisfactory output in themselves as they do not convey the richness and desirable ambiguity (after Empson) which makes ordinary languages sophisticated means of communication. In short the thesaurus reattaches to the primitive C_i's the psychological content and background description that make languages.

In order to point out that the effort spent on both the theory and hardware for Mechanical Translation is of value not only in itself, but for the larger problem of information retrieval, we may point out that in the above system the output T_i from Store III may indeed be in the same language as the input S, so that the machine translates English into better English. Or T_i may be the more primitive English used by librarians and indexers, so that the system could be used for classifying, indexing and abstracting.

There is an important point in imagining the construction of local context and introduction of the thesaurus in contrast with a dictionary. Inasmuch as the C_i's are determined from the local context, which, if the material is worth translating, should have some novel combinations of ideas, we cannot expect all possible C_i's to be listed with an S in Store III. That is, we do not necessarily have unique addresses to the entries of Store III. Hence we must arrange to locate not necessarily a specific C_i, but a best match.
There are various ways of defining "best"; one is, recognizing C_i to be essentially a Boolean function, to find a stored C_i' which dominates C_i in the sense of lattice theory, i.e. for which

C_i' ∨ C_i = C_i'

A system such as this will have to be introduced even in simpler Mechanical Translation schemes, to handle typographical errors and grammatical errors on the part of the original author.

The Mechanical Translation system consists then of three parts: first, a high-capacity millisecond-access store of lexical information concerning the source language; second, a low-capacity microsecond-access store for logical processing of lexical information into augmented words for selection; and third, another high-capacity millisecond-access store of thesaural information concerning the target language. The whole system must operate in real time.
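A minimal sketch of the best-match look-up follows, taking the C codes as bit masks: a stored code dominates the query when the Boolean join changes nothing, i.e. every bit of the query is already present. The store contents, the target words, and the tie-breaking rule (prefer the dominating entry with the fewest extra bits) are assumptions for illustration.

```python
# Best-match retrieval from a hypothetical Store III keyed on C codes (bit masks).
STORE_III = {0b1011: "field", 0b0110: "domain"}   # invented C -> target entries

def dominates(stored, query):
    """stored dominates query in the lattice sense: stored | query == stored."""
    return stored | query == stored

def best_match(query):
    candidates = [(bin(code).count("1"), word)
                  for code, word in STORE_III.items() if dominates(code, query)]
    return min(candidates)[1] if candidates else None  # tightest dominator wins

print(best_match(0b0010))  # both entries dominate; 'domain' carries fewer extra bits
```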
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
814
0
null
null
null
null
null
null
null
null
826fb686f66a4416310bd5c35f049afb688c737c
41645107
null
Grouping and Dependency Theories
Immediate-constituent analysis and dependency analysis (two theories of syntactic description) are based, respectively, on the topologies of grouping and of trees. A correspondence between structures of the two types is defined, and the two topologies are compared, mainly in terms of their empirical applications. The two common methods of describing sentence structure (at the syntactic level) are immediate-constituent analysis and dependency analysis. The former, also known as phrase-structure analysis, is most often used by American linguists, but the latter is taught in high school. Phrase-structure theories underlie all MT systems being developed in the United States, except that of The RAND Corporation, while Soviet work on MT uses both theories.2 The analysis presented in Figure 1a illustrates one method, that in Figure 1b the other. H. Hiž has recently presented a formal theory of grouping as a basis for immediate-constituent analysis;3 the present paper defines a correspondence between groupings and the trees which are a basis for dependency analysis. Study of the correspondence reveals some similarities and differences between the methods; each has unique advantages in the study of syntax.
{ "name": [ "Hays, David G." ], "affiliation": [ null ] }
null
null
Proceedings of the National Symposium on Machine Translation
1960-02-01
0
23
null
1. I am grateful to Jane Pyne, H. Hiž, A. Madansky, and T. W. Mullikin for their criticisms and suggestions, which have helped substantially in the long, slow development of the material presented here. None is to be blamed for remaining errors.

2. For examples of Soviet work using dependency theory, see the abstracts by O. S. Kulagina, I. I. Revzin, T. N. Moloshnaya, Z. M. Volotskaya, Ye. V. Paducheva, I. N. Shelimova, and A. L. Shumilina, in Abstracts of the Conference on Machine Translation (May 15-21, 1958), translated by U. S. Joint Publications Research Service, Washington, D. C., JPRS/DC-241, July 22, 1958.

3. H. Hiž, "Steps toward Grammatical Recognition", Preprints of the International Conference for Standards on a Common Language for Machine Searching and Translation, Cleveland, Sept. 6-12, 1959.

Trees, on the other hand, arise by the introduction of a partial ordering over the elements of a set. Given two elements, either they do not compare at all, or one depends on the other, directly or indirectly; we call indirect dependence derivation. Thus, for example, the object of a preposition derives from the word on which the preposition depends, and an adjective modifying the subject of a verb derives from the verb. If there is a unique element from which all others derive, and if no element depends on two others, the partial ordering can be displayed by a branching diagram, or tree.

Figure 2. Parenthetic expressions:
1. ( (*) (*) (*) (*) )
2. ( (*) (*) ( (*) (*) ) )
3. ( ( (*) (*) ) ( (*) (*) ) )
4. ( ( (*) (*) (*) ) (*) )
5. ( ( ( (*) (*) ) (*) ) (*) )

If a tree and a p. e. are to serve as alternative models for the same sentence, they must have equal numbers of elements. In Figure 2 are presented the five distinct p. e.'s, order being disregarded, that have four elements each. Four distinct trees can be drawn with four nodes each; they are shown in Figure 3.

Figure 3. Trees containing 4 nodes.

We take it for granted, with types and order disregarded, that the following sentences are all modeled by p. e. 3:

He ate lunch slowly.
John ate green apples.
That little boy ate.
Very good children eat.

(The underscorings correspond to p. e.'s.) Although these sentences have the same grouping structure, their trees are different. "He ate lunch slowly" is described by tree A, "John ate green apples" by tree B, "That little boy ate" by tree C, and "Very good children eat" by tree D. Obviously, tree structures capture something of syntax that is lost by grouping.

On the other hand, it is easy to construct a set of sentences with a fixed tree structure and various groupings. For example, these two sentences are described by tree B:

Little John ate breakfast.
He ate his breakfast.

The first has grouping 3, the second grouping 5. Grouping, therefore, captures something of syntax that is lost by tree structures. The correspondence is illustrated, for trees with 4 nodes, in Table 1. The marginal labels in the table are taken from Figures 2 and 3.

Table 1. Correspondence matrix.

In such a situation, the immediate-constituent analyst would always group elements 1 and 4, then build them into a larger structure. Constructive rules can be given for going from a p. e. to all corresponding trees, and vice versa.
An adjective plus a noun form a noun phrase, and an adverb plus an adjective form an adjective phrase. The naming singles out an element of each phrase, as does the topology of a tree.Grouping -e. g., ((A)(N)) -does not.Neither parenthetic expressions nor trees capture all that the linguist wants to say about sentences. Beginning with either, he requires ancillary apparatus to complete his description. What is natural and inherent in one theory has to be appended to the other;immediate-constituent analysis introduces phrase names 4 to handle a property of language that is reflected in inherent properties of 4 Several MT systems have been projected in which sequences of sentence elements of given types are replaced by phrase units of given types, until the sentence is reduced to N-VP = S. Cf. Victor A. Oswald and Stuart L. Fletcher, "Proposals for the Mechanical Resolution of German Syntax Patterns", Modern Language Forum, vol. 36, No. 3-4, 1951, pp. 1-24.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
780
0.029487
null
null
null
null
null
null
null
null
cfeffb24d9d90495ac75e028893e0768e601ef30
244077632
null
Linguistic Research at the {RAND} Corporation
This paper describes postediting rules for description of function in context, work on computational routines for semi-automatic analysis, the concept of idiom-in-structure, and two broad problems on which work is just beginning at RAND: grammatic transformation and distributional semantics. The latter problems are especially important for automatic indexing, abstracting, and text searching.
{ "name": [ "Hays, David G." ], "affiliation": [ null ] }
null
null
Proceedings of the National Symposium on Machine Translation
1960-02-01
1
0
null
We are spending this winter in writing a major report. After nearly three years of research, and after processing a quarter-million running words of text, we find that we have a lot to say. In the self-description that we furnished the National Science Foundation for its most recent survey of MT studies, 2 we expressed our hope that we would have a completed system in operation during the summer of 1959. Our hope was fulfilled, in a manner of speaking, and we set out to describe what we had accomplished. As we write, however, we find that clear exposition highlights our every weakness. Our writing is therefore interlarded with efforts to eliminate some of the flaws. We are not the first to discover that writing a book takes longer than a reasonable man would dare guess, nor will we be the last. Our table of contents has been revised several times. At first it was a sketch of what Russian-English MT requires: (1) a method of 1 Other members of the RAND project include Kenneth E. Harper and Dean S. Worth (both of UCLA, consultants); Andrew S. Kozak, Dolores V. Mohr, Joan H. Pustula, and Barbara J. Scott (linguistic technicians); Theodore W. Ziehe, Hugh S. Kelly, and Charles H. Smith (programmers). The plural pronoun in the text includes these persons, but errors should be attributed to the author alone. 2 Madeline M. Henderson, and Nancy Ripple, editors, Current Research in Scientific Documentation, No. 5 (October, 1959), Washington, D. C.: National Science Foundation, Office of Science Information Service. semi-automatic lexicographic and grammatic research; (2) analytic and synthetic algorithms, more or less independent of the languages chosen for input and output; (3) a descriptive grammar of Russian physics; (4) a Russian-English physics glossary. Now we have dropped (3) and (4), despite their importance, because the grammar and glossary that we have are incomplete. The algorithm is only a mirror, reflecting a descriptive grammar; the main products of MT research belong to linguistics, and a much smaller fraction to computer programming. Nevertheless, we have not processed enough text to justify our publishing a Russian grammar or dictionary. Several million running words of text are required as a suitable basis. This paper is a sampling of our intended book. As readers of our reports already know, our method of research consists of processing text and analyzing the results. 3 We translate a new corpus mechanically, postedit it, and see what the posteditors have added. In this sampling I wish to emphasize the last two steps: postediting and analysis. We have been forced to take more pains with these steps than with the others because they have proved more difficult to organize to our satisfaction. Accordingly, we have been slow in reporting them. 4 As for substantive results, I will mention only the concept of idiom-in-structure, a concept that turns up in several places in the literature without ever getting quite the explication that it demands. Finally, I will introduce two topics that will soon occupy most of our time (as we expect), grammatic transformations and distributional semantics. After some trial and error, we reached the conclusion that postediting must include structural description of the input text. In the usual discussion, the editor is said to polish the rough 3 Edmundson, H. P., and David G. Hays, "Research Methodology for Machine Translation", Mechanical Translation, vol. 5, no. 1 (July, 1958), pp. 8-15.
translation; 5 we do not gainsay that requirement, but we think it leaves an unnecessarily difficult job for the analyst who comes after the editor in the research process. Our conception of syntactic structure has grown from month to month, and it is certainly not static at this time. We first thought of structure as a set of connections among the words in a sentence; Yngve, among others, urged us to go further, and Hiž's paper 6 at the Cleveland Standards Conference states, more clearly than we had ever put them, the reasons for more refined description. Roughly speaking, connection structure is an inadequate framework because it does not differentiate among dependents of the same governor. Yet two dependents of a single governor often have different functions; to reflect such differences, we turned to the grammar code. 7 In our syntactic theory, both the subject and the object of a We are now at the point of looking for good analyses of functions at this new level; almost ready to attempt our own analyses if we cannot find them in the literature; about to establish notational conventions for the posteditors to use, so that the posteditor can describe syntactic structures to this degree of precision. When the editors have processed enough text, we will be prepared to attempt automatic resolutions, but not before. Factoring simplifies both the MT routine and analysis; here we are interested in the latter. After a corpus has been edited, the analyst orders certain listings. Concordances, or exhaustive listings of text by lexical and grammatic properties, are used, but we also use selective listings. Let us take specific examples. First, consider the word что = "that" or "which". We know from standard grammars 13 and our own experience that что has two distinct functions. This word is sometimes a subordinate conjunction (91% of its occurrences in our text) and sometimes a relative pronoun (9%). It is a conjunction only when it introduces a clause serving (i) as the object of a verb in a lexically limited class, (ii) as the object of an adjective or noun derived from such a verb, or (iii) in apposition with such a noun. The verbs, adjectives, and nouns belonging to this class are marked in our glossary (in the grammar code). That is to say, they are marked as soon as they are seen to govern что as a subordinate conjunction (with English equivalent "that"). It would be vain to try to mark them in advance; comparing the list we now have with Ushakov's dictionary, 14 Joan Pustula found that only half of the words on our list are marked as having the property in question. When a word first occurs as governor of a noun clause introduced by что, it lacks the mark that would allow the sentence- If he dares to omit verification, the whole process, from postediting to glossary updating, can be automatic. Identifying new governors of что = "that" is an example of adding new items to known categories. New categories remain to be discovered; automation of that process is more challenging. establishing translation algorithms with no more than parallel texts for input. Although such a program will probably be designed eventually, we believe that we are pursuing a more productive course for the present, and a more efficient policy for "known" languages. Our method, in summary, consists of textual analysis. The posteditors serve as informants in a restricted sense; they supply "correct" sentence-structure analyses and "correct" translations, to be used as objectives.
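The что marking scheme described above can be pictured with a toy fragment. This Python sketch is illustrative only: the glossary entries and the field name are invented, and the real RAND grammar code is far richer. A word observed to govern a что-clause receives a mark, and the resolution routine consults that mark to choose between "that" (conjunction) and "which" (relative pronoun).

glossary = {
    "показать":    {"governs_chto": True},   # "to show (that ...)"
    "утверждение": {"governs_chto": False},  # not yet marked
}

def classify_chto(governor):
    """Return the reading of что under the given clause governor."""
    entry = glossary.get(governor, {})
    if entry.get("governs_chto"):
        return "subordinate conjunction ('that')"
    return "relative pronoun ('which')"

def confirm_new_governor(word):
    """Posteditor confirms `word` governs a что-clause; update glossary.

    This mirrors the paper's point that the mark is added only when the
    usage is actually observed in text, never in advance.
    """
    glossary.setdefault(word, {})["governs_chto"] = True

print(classify_chto("показать"))      # subordinate conjunction ('that')
print(classify_chto("утверждение"))   # relative pronoun ('which')
confirm_new_governor("утверждение")   # observed in new text, verified
print(classify_chto("утверждение"))   # now: subordinate conjunction ('that')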
Some of these semi-idioms consist of governor and preposition: зависимость от = "depend on", состоять из = "consist of", etc. Kenneth E. Harper, using the RAND corpora, has made a detailed study of this phenomenon in Russian physics; and the same idea turns up in standard grammars 18 (where the term "lexically closed", or "limited", is applied), in Russian work on MT, 19 and in Bar-Hillel's discussion 20 of discontinuous constituents in English (e.g., "give up candy" or "give it up"). Other semi-idioms consist of verb and direct object, or verb and modifying phrase: играть роль = "play a role", иметь в виду = "have in view". The frozenness of these expressions is often important for one reason or another. Selection of the English equivalent is frequently determined for one member of the combination by the other; Harper's objective has been to produce accurate translations of Russian prepositions. Syntactic functions can be influenced by the fact of combination; иметь governs что-clauses only when it is combined with в виду. Sometimes the fixity of the combination clarifies a structural ambiguity; we found that the subject and object of a verb, when they cannot be differentiated morphologically, can be distinguished by word order except when one of them is frozen in combination with the verb. 21 Thus we found имеет место правило = "a rule occurs" in our text, but nowhere did we find the order verb-object-subject when object and subject were morphologically identical and neither was closely associated with the verb. Idiom-recognition routines, upon finding an occurrence in text that could be the first element of an idiom, look at the next following occurrence to see if it continues the same idiom, and so proceed. If an occurrence intervenes that does not belong to the idiom, recognition fails. Sentence-structure-determination routines necessarily span longer sections of each sentence. When sentence-structure determination is completed successfully, the components of the semi-idioms described above stand in frozen relation to each other; their sequence is free, but their structural connections are fixed. We propose, therefore, to establish a list of idioms-in-structure. The routine to recognize them would operate after sentence structure had been determined, and it would follow structural connections. On finding an occurrence that can be the dominant element of an idiom-in-structure, the routine would test the dependent occurrences to see whether one of them continued the idiom. As we have shown, others discuss these semi-idioms, but we feel that our analogy with the ordinary idiom routine will be more efficient than the alternatives that are now in use. The concept of transformations is now familiar, but there are no exhaustive lists of the transformations used in any natural language. Making these lists is a matter for empirical research; as yet no operational definition has been suggested that is entirely satisfactory as a basis for semi-automatic data analysis. Probably Harris came closest in his original paper on the subject. 22 Confronted with the high cost of language-data processing, however, he immediately dismissed his own suggestion. The cost of automatic processing is rapidly decreasing, and it will decrease much faster when automatic print readers are generally available.
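The idiom-in-structure routine proposed above can be made concrete with a minimal sketch. This Python fragment is not the RAND code; the idiom list, indices, and sentence are invented. Unlike a linear idiom scan, it runs after sentence-structure determination and follows dependency links rather than text order, so intervening words cannot block recognition.

IDIOMS_IN_STRUCTURE = {
    ("играть", "роль"): "play a role",
    ("иметь", "виду"):  "have in view",
}

def find_structural_idioms(words, dependents):
    """Yield (governor_index, dependent_index, gloss) for each match.

    `words` maps index -> lemma; `dependents` maps a governor's index
    to the indices of its dependents in the determined structure.
    """
    for gov, deps in dependents.items():
        for dep in deps:
            gloss = IDIOMS_IN_STRUCTURE.get((words[gov], words[dep]))
            if gloss:
                yield gov, dep, gloss

# "играет, по-видимому, важную роль": the intervening adverb does not
# block recognition, because роль is a structural dependent of играть.
words = {1: "играть", 2: "по-видимому", 3: "важный", 4: "роль"}
dependents = {1: [2, 4], 4: [3]}
for match in find_structural_idioms(words, dependents):
    print(match)   # (1, 4, 'play a role')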
As our systems for sentence-structure determination (algorithms, dictionaries, and grammars) are perfected, so that posteditors have less to add, we can foresee empirical studies based on tens of millions of running words of text. Anticipating that time, we propose to undertake preliminary empirical searches for transformations, using the Russian physics text that we have or can acquire. In this study, we will test an operational definition of transformation that is appropriate to our dependency theory of syntax. Abstractly, a transformation is defined as a pair of dependency types, linking different grammatic types, but equivalent in meaning. 22 Harris, Zellig S., "Discourse Analysis", Language, vol. 28, no. 1 (1952), pp. 1-30. a statistical measure. If d1 and d2 are transformationally equivalent, then in a large corpus many (but not all) of the word pairs connected by d1 should also appear connected by d2. We need measures to tell us whether the observations that we make support the hypothesis of equivalence to an adequate degree. The difficulties go on without end for as far as we can now see. We anticipate a long and interesting task in the development of empirical methods for research on transformation. Our other task for the immediate future is a study of distributional semantics. Harris put it this way: "If the environments of A are always different in some regular way from the environments of B, we state some relation between A and B depending on this regular type of difference ... If A and B have almost identical environments except chiefly for sentences which contain both, we say they are synonyms ... If A and B have some environments in common and some not, ... we say that they have different meanings, the amount of meaning difference corresponding roughly to the amount of difference in their environments". 24 We hope to follow this program as far as it can take us, changing it a little to adapt it to a dependency theory of sentence structure. Harper is getting started with a study of verbs, separating those with only animate subjects in our text, those with only inanimate subjects, and those with both. That step is only a beginning, and where it will lead is unknown. One thing seems certain: we will use distributional semantics to establish word classes, apply those word classes in studying transformation, and use transformation analysis in the study of semantic distribution. We are writing a major report of our results to date. We are anxious to promote automatic programming for the sake of easier analyses of the material we are collecting. And we are fretting under the MT label. The report will show how we translate Russian physics text into English, and it will contain both samples of the output and measures of our efficiency and effectiveness. The system that we use can be improved, and we hope that we and others will improve it. The code-matching method of Parker-Rhodes, Lukjanow, Garvin, 25 et al., would improve our system considerably, and either the Lamb-Jacobsen 26 or the Ziehe-Kelly 27 glossary-lookup method has to be built in. The most important means of improvement, of course, is enlargement of the textual base of the dictionary and grammar. Automatic programming is often regarded as a panacea and used to cure problems that could better be attacked by explication. There is a limit to our powers of explication, however, and MIMIC is a token of Kelly's success in advancing automatic programming in the RAND MT project.
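The paper leaves the choice of statistical measure for the d1/d2 test open; the following Python sketch shows one candidate, ours rather than the authors': the overlap between the sets of word pairs connected by dependency types d1 and d2 in a corpus. The pair data are invented.

def pair_overlap(pairs_d1, pairs_d2):
    """Fraction of d1's word pairs that also occur under d2.

    A high value is weak evidence that d1 and d2 are transformationally
    equivalent; as the text notes, not all pairs need recur.
    """
    if not pairs_d1:
        return 0.0
    return len(pairs_d1 & pairs_d2) / len(pairs_d1)

# Say d1 is an "of"-genitive dependency and d2 a noun-modifier
# dependency, each yielding (head, dependent) lemma pairs.
pairs_d1 = {("absorption", "light"), ("motion", "particle"),
            ("energy", "electron")}
pairs_d2 = {("absorption", "light"), ("motion", "particle"),
            ("theory", "field")}
print(round(pair_overlap(pairs_d1, pairs_d2), 2))   # 0.67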
Its first use, as he explains in this Symposium, 28 is for output construction. We also plan to use it in setting up data-reduction operations, and eventually in programming transformational manipulations of our text. To serve these purposes, MIMIC must grow. Until late 1959 we accepted the label "MT", but two months ago we petitioned for a change. Our new titles are linguistic research and automatic language-data processing. These phrases cover MT, but they allow scope for other applications and for basic research. Machine translation is no doubt the easiest form of automatic language-data processing, but it is probably one of the least important. We are taking the first steps toward a revolutionary change in methods of handling every kind of natural-language material. The several branches of applied linguistics have so much in common that their mutual self-isolation would be disastrous. The name of our journal, the name of our society if one is established, the scope of our invitation lists when we meet, and all other definitions of our field should be broadened, never narrowed. In 10 years we will find that MT is too routine to be interesting to ourselves or to others. Applied linguistic research is endless. Harper, Kenneth E., David G. Hays, and Barbara J. Scott, Studies in Machine Translation - 8: Manual for Postediting Russian Text, Santa Monica, Calif.: The RAND Corporation, Paper P-1624, Revised, 7 November 1959. 8 Cf. Hays, David G., "Order of Subject and Object in Scientific Russian when other Differentia are Lacking", Mechanical Translation, in press. 9 Academy of Sciences of the USSR, Grammatika Russkogo Yazyka, Vol. II, pp. 234-241. 10 Jespersen, Otto, Growth and Structure of the English Language, Garden City, N.Y.: Doubleday (Anchor Books), p. 193. 11 Loc. cit. 12 Worth, Dean S., "Transform Analysis of Russian Instrumental Constructions", Word, vol. 14, no. 2/3 (Aug.-Dec., 1958), pp. 247-290. E.g., Unbegaun, B. O., Russian Grammar, Oxford: Clarendon Press, 1957, pp. 128 and 277. Ushakov, D. N., Tolkovyj Slovar' Russkogo Yazyka, Moscow, 1935-1940. Solomonoff, R. J., The Mechanization of Linguistic Learning, Cambridge, Mass.: Zator Co., Report No. ZTB-125, April, 1959. E.g., Academy of Sciences of the USSR, Grammatika Russkogo Yazyka, Vol. II, pp. 173-176. 19 Belokrinitskaya, S. S., "Principles in Compiling a German-Russian Glossary of Polysemants for Machine Translation", in Abstracts of the Conference on Machine Translation, May 15-21, 1958, translated by U. S. Joint Publications Research Service, JPRS/DC-241. 20 Bar-Hillel, Yehoshua, Report on the State of Machine Translation in the United States and Great Britain, Jerusalem, Israel: Hebrew University, Feb. 15, 1959. Op. cit., fn. 8. The use of transformations in establishing word families is proposed by Z. M. Volotskaya, "Word Formation in Conversion of Intermediary Language into Output Language", in op. cit., fn. 16. Harris, Zellig S., "Distributional Structure", Word, vol. 10 (1954), pp. 146-162. The quoted passage is on p. 157. 25 Parker-Rhodes, A. F., "An Algebraic Thesaurus", presented at an International Conference on Mechanical Translation, Cambridge, Mass., Oct. 15-20, 1956. Ariadne Lukjanow and Paul Garvin have (independently) communicated their interest in code-matching techniques, in private conversations. Sydney M. Lamb and William H. Jacobsen, Jr., personal communications. 27 Kelly, Hugh S., and Theodore W.
Ziehe, "Glossary Lookup Made Easy", in this Symposium. 28 Kelly, Hugh S., "MIMIC: A Translator for English Coding", in this Symposium.
null
null
null
null
Main paper: introduction: We are spending this winter in writing a major report. After nearly three years of research, and after processing a quartermillion running words of text, we find that we have a lot to say. In the self-description that we furnished the National Science Foundation for its most recent survey of MT studies, 2 we expressed our hope that we would have a completed system in operation during the summer of 1959. Our hope was fulfilled, in a manner of speaking, and we set out to describe what we had accomplished. As we write, however, we find that clear exposition highlights our every weakness.Our writing is therefore interlarded with efforts to eliminate some of the flaws. We are not the first to discover that writing a book takes longer than a reasonable man would dare guess, nor will we be the last.Our table of contents has been revised several times. At first it was a sketch of what Russian-English MT requires: (1) a method of 1 Other members of the RAND project include Kenneth E. Harper and Dean S. Worth (both of UCLA, consultants); Andrew S. Kozak, Dolores V. Mohr, Joan H. Pustula, and Barbara J. Scott (linguistic technicians); Theodore W. Ziehe, Hugh S. Kelly, and Charles H. Smith (programmers). The plural pronoun in the text includes these persons, but errors should be attributed to the author alone.2 Madeline M. Henderson, and Nancy Ripple, editors, Current Research in Scientific Documentation, No. 5 (October, 1959) , Washington, D. C. : National Science Foundation, Office of Science Information Service, Session 1-CURRENT RESEARCH semi-automatic lexicographic and grammatic research; (2) analytic and synthetic algorithms, more or less independent of the languages chosen for input and output; (3) a descriptive grammar of Russian physics; (4) a Russian-English physics glossary. Now we have dropped (3) and (4), despite their importance, because the grammar and glossary that we have are incomplete. The algorithm is only a mirror, reflecting a descriptive grammar; the main products of MT research belong to linguistics, and a much smaller fraction to computer programming. Nevertheless, we have not processed enough text to justify our publishing a Russian grammar or dictionary. Several million running words of text are required as a suitable basis. This paper is a sampling of our intended book. As readers of our reports already know, our method of research consists of processing text and analyzing the results. 3 We translate a new corpus mechanically, postedit it, and see what the posteditors have added.In this sampling I wish to emphasize the last two steps: postediting and analysis. We have been forced to take more pains with these steps than with the others because they have proved more difficult to organize to our satisfaction. Accordingly, we have been slow in reporting them. 4 As for substantive results, I will mention only the concept of idiom-in-structure, a concept that turns up in several places in the literature without ever getting quite the explication that it demands. Finally, I will introduce two topics that will soon occupy most of our time (as we expect), grammatic transformations and distributional semantics.After some trial and error, we reached the conclusion that postediting must include structural description of the input text.In the usual discussion, the editor is said to polish the rough 3 Edmundson, H. P., and David G. Hays, "Research Methodology for Machine Translation." Mechanical Translation, vol. 5, no. 1 (July, 1958) , pp. 8-15. 
translation; 5 we do not gainsay that requirement, but we think it leaves an unnecessarily difficult job for the analyst who comes after the editor in the research process.Our conception of syntactic structure has grown from month to month, and it is certainly not static at this time. We first thought of structure as a set of connections among the words in a sentence;Yngve, among others, urged us to go further, and Hiž's paper 6 at the Cleveland Standards Conference states, more clearly than we had ever put them, the reasons for more refined description. Roughly speaking, connection structure is an inadequate framework because it does not differentiate among dependents of the same governor. Yet two dependents of a single governor often have different functions; to reflect such differences, we turned to the grammar code. 7In our syntactic theory, both the subject and the object of a We are now at the point of looking for good analyses of functions l i t this new level; almost ready to attempt our own analyses if we cannot find them in the literature; about to establish notational conventions for the posteditors to use, so that the posteditor can describe syntactic structures to this degree of precision. When the editors have processed enough text, we will be prepared to attempt automatic resolutions, but not before. Factoring simplifies both the MT routine and analysis; here we are interested in the latter. After a corpus has been edited, the analyst orders certain listings. Concordances, or exhaustive listings of text by lexical and grammatic properties, are used, but we also use selective listings. Let us take specific examples.First, consider the word что = "that" or "which". We know from standard grammars 13 and our own experience that что has two distinct functions. This word is sometimes a subordinate conjunction (91% of its occurrences in our text) and sometimes a relative pronoun (9%). It is a conjunction only when it introduces a clause serving (i) as the object of a verb in a lexically limited class, (ii) as the object of an adjective or noun derived from such a verb, or (iii) in apposition with such a noun. The verbs, adjectives, and nouns belonging to this class are marked in our glossary (in the grammar code). That is to say, they are marked as soon as they are seen to govern что as a subordinate conjunction (with English equivalent "that"). It would be vain to try to mark them in advance; comparing the list we now have with Ushakov's dictionary, 14 Joan Pustula found that only half of the words on our list are marked as having the property in question.When a word first occurs as governor of a noun clause introduced by что, it lacks the mark that would allow the sentence- If he dares to omit verification, the whole process, from postediting to glossary updating, can be automatic.Identifying new governors of что = "that" is an example of adding new items to known categories. New categories remain to be discovered; automation of that process is more challenging. establishing translation algorithms with no more than parallel texts for input. Although such a program will probably be designed eventually, we believe that we are pursuing a more productive course for the present, and a more efficient policy for "known" languages.Our method, in summary, consists of textual analysis. The posteditors serve as informants in a restricted sense; they supply "correct" sentence-structure analyses and "correct" translations, to be used as objectives. 
Some of these semi-idioms consist of governor and preposition:зависимостъ от = "depend on", состоятъ из = "consist of", etc.Kenneth E. Harper, using the RAND corpora, has made a detailed study of this phenomenon in Russian physics; and the same idea turns up in standard grammars 18 (where the term "lexically closed", or "limited" is applied), in Russian work on MT, 19 and in Bar-Hillel's discussion 20 of discontinuous constituents in English (e.g. , "give up candy" or "give it up"). Other semi-idioms consist of verb and direct object, or verb and modifying phrase: играть роль = "play a role", иметь в виду = "have in view".The frozenness of these expressions is often important for one reason or another. Selection of the English equivalent is frequently determined for one member of the combination by the other; Harper's objective has been to produce accurate translations of Russian prepositions. Syntactic functions can be influenced by the fact of combination; иметь governs что -clauses only when it is combined with в виду. Sometimes the fixity of the combination clarifies a structural ambiguity; we found that the subject and object of a verb, when they cannot be differentiated morphologically, can be distinguished by word order except when one of them is frozen in combination with the verb. 21 Thus we found имеет место правило = "a rule occurs" in our text, but nowhere did we find the order verb-objectsubject when object and subject were morphologically identical and neither was closely associated with the verb.Idiom-recognition routines, upon finding an occurrence in text that could be the first element of an idiom, look at the next following occurrence to see if it continues the same idiom, and so proceed.If an occurrence intervenes that does not belong to the idiom, recognition fails. Sentence-structure-determination routines necessarily span longer sections of each sentence. When sentencestructure determination is completed successfully, the components of the semi-idioms described above stand in frozen relation to each other; their sequence is free, but their structural connections are fixed. We propose, therefore, to establish a list of idioms-instructure. The routine to recognize them would operate after sentence structure had been determined, and it would follow structural connections. On finding an occurrence that can be the dominant element of an idiom-in-structure, the routine would test the dependent occurrences to see whether one of them continued the idiom.As we have shown, others discuss these semi-idioms, but we feel that our analogy with the ordinary idiom routine will be more efficient than the alternatives that are now in use.The concept of transformations is now familiar, but there are no exhaustive lists of the transformations used in any natural language.Making these lists is a matter for empirical research; as yet no operational definition has been suggested that is entirely satisfactory as a basis for semi-automatic data analysis. Probably Harris came closest in his original paper on the subject. 22 Confronted with the high cost of language-data processing, however, he immediately dismissed his own suggestion. The cost of automatic processing is rapidly decreasing, and it will decrease much faster when automatic print readers are generally available. 
As our systems for sentencestructure determination (algorithms, dictionaries, and grammars)are perfected, so that posteditors have less to add, we can foresee empirical studies based on tens of millions of running words of text.Anticipating that time, we propose to undertake preliminary empirical searches for transformations, using the Russian physics text that we have or can acquire.In this study, we will test an operational definition of transformation that is appropriate to our dependency theory of syntax.Abstractly, a transformation is defined as a pair of dependency types, linking different grammatic types, but equivalent in meaning. 22 Harris, Zellig S., "Discourse Analysis", Language, vol. 28, no. 1 (1952) , pp. 1-30. a statistical measure. If d1 and d2 are transformationally equivalent, then in a large corpus many (but not all) of the word pairs connected by d1 should also appear connected by d2 . We need measures to tell us whether the observations that we make support the hypothesis of equivalence to an adequate degree. The difficulties go on without end for as far as we can now see.We anticipate a long and interesting task in the development of empirical methods for research on transformation.Session 1-CURRENT RESEARCH Our other task for the immediate future is a study of distributional semantics. Harris put it this way: "If the environments of A are always different in some regular way from the environments of B, we state some relation between A and В depending on this regular type of difference ... If A and В have almost identical environments except chiefly for sentences which contain both, we say they are synonyms ... If A and В have some environments in common and some not, . . . we say that they have different meanings, the amount of meaning difference corresponding roughly to the amount of difference in their environments" 24 . We hope to follow this program as far as it can take us, changing it a little to adapt it to a dependency theory of sentence structure.Harper is getting started with a study of verbs, separating those with only animate subjects in our text, those with only inanimate subjects, and those with both. That step is only a beginning, and where it will lead is unknown. One thing seems certain: we will use distributional semantics to establish word classes, apply those word classes in studying transformation, and use transformation analysis in the study of semantic distribution.We are writing a major report of our results to date. We are anxious to promote automatic programming for the sake of easier analyses of the material we are collecting. And we are fretting under the MT label.The report will show how we translate Russian physics text into English, and it will contain both samples of the output and measures of our efficiency and effectiveness. The system that we use can be improved, and we hope that we and others will improve it. The codematching method of Parker-Rhodes, Lukjanow, Garvin, 25 et. al. , would improve our system considerably, and either the Lamb-Session 1-CURRENT RESEARCH Jacobsen 26 or the Ziehe-Kelly 27 glossary-lookup method has to be built in. The most important means of improvement, of course, is enlargement of the textual base of the dictionary and grammar.Automatic programming is often regarded as a panacea and used to cure problems that could better be attacked by explication.There is a limit to our powers of explication, however, and MIMIC is a token of Kelly's success in advancing automatic programming in the RAND MT project. 
Its first use, as he explains in this Symposium, 28 is for output construction. We also plan to use it in Netting up data-reduction operations, and eventually in programming transformational manipulations of our text. To serve these purposes, MIMIC must grow.Until late 1959 we accepted the label "MT", but two months ago we petitioned for a change. Our new titles are linguistic research and automatic language-data processing. These phrases cover MT, but they allow scope for other applications and for basic research.Machine translation is no doubt the easiest form of automatic language-data processing, but it is probably one of the least important. We are taking the first steps toward a revolutionary change in methods of handling every kind of natural-language material. The several branches of applied linguistics have so much in common that their mutual self-isolation would be disastrous. The name of our journal, the name of our society if one is established, the scope of our invitation lists when we meet, and all other definitions of our field should be broadened-never narrowed. In 10 years we will find that MT is too routine to be interesting to ourselves or to others. Applied linguistic research is endless.Harper, Kenneth E. , David G. Hays, and Barbara J. Scott, Studies in Machine Translation -8: Manual for Postediting Russian Text, Santa Monica, Calif. : The RAND Corporation Paper P-1624, Revised, 7 November 1959-Cf. Hays, David G. , "Order of Subject and Object in Scientific Russian when other Differentia are Lacking", Mechanical Translation, in press. 9 Academy of Sciences of the USSR, Grammatika Russkogo Yazyka, Vol. II, pp. 234-241.10 Jesperson, Otto, Growth and Structure of the English Language , Garden City, N.Y.: Doubleday (Anchor Books), p. 193. 11 Loc. cit. 12 Worth, Dean S. , "Transform Analysis ofRussian Instrumental Constructions", Word, vol. 14, no. 2/3 (Aug. -Dec., 1958), pp. 247-290.E.g.; Unbegaun, В. O., Russian Grammar, Oxford; Clarendon Press, 1957, pp. 128 and 277.Ushakov, D. N. , Tolkovyj Slovar' Russkogo Yazyka, Moscow 1935-1940.Solomonoff, R. J., The Mechanization of LinguisticLearning, Cambridge, Mass., Zator Co. , Report No. ZTB-125, April, 1959.E.g., Academy of Sciences of the USSR, Grammatika Russkogo Yazyka, Vol. II, pp. 173-176. 19 Belokrinitskaya, S. S., "Principles in Compiling a German-Russian Glossary of Polysemants for Machine Translation", in Abstracts of the Conference on Machine Translation, May 15-21, 1958, translated by U. S. Joint Publications Research Service JPRS/DC-241. 20 Bar-Hillel, Yehoshua, Report on the State of Machine Translation in the United States and Great Britain, Jerusalem, Israel: Hebrew University, Feb. l5, 1959.Op. cit., fn. 8Thе use of transformations in establishing word families is proposed by Z. M. Volotskaya, "Word Formation in Conversion of Intermediary Language into Output Language", in op. cit. , fn. 16.Harris, Zellig S. , "Distributional Structure", Word, vol. 10 (1954), pp. 146-162. The quoted passage is on p. 157.25 Parker-Rhodes, A.F., "An Algebraic Thesaurus", presented at an International Conference on Mechanical Translation,Cambridge, Mass., Oct. 15-20, 1956. Ariadne Lukjanow and Paul Garvin have (independently) communicated their interest in code-matching techniques, in private conversations.Sydney M. Lamb and William H. Jacobsen, Jr. , personal communications.27 Kelly, Hugh S., and Theodore W. 
Ziehe, "Glossary Lookup Made Easy", in this Symposium.28 Kelly, Hugh S., "MIMIC: A Translator for English Coding", in this Symposium. Appendix:
null
null
null
null
{ "paperhash": [ "edmundson|research_methodology_for_machine_translation" ], "title": [ "Research methodology for machine translation" ], "abstract": [ "The general approach used at The RAND Corporation is that of convergence by successive refinements. The philosophy that underlies this approach is empirical. Statistical data are collected from careful translation of actual Russian text, analyzed, and used to improve the program. Text preparation, glossary development, translation, and analysis are described." ], "authors": [ { "name": [ "H. P. Edmundson", "D. G. Hays" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "17600291" ], "intents": [ [] ], "isInfluential": [ false ] }
null
780
0
null
null
null
null
null
null
null
null
02190c10c526b103a65549726f5c7906b6b60cd9
244077657
null
{MT} at the {M}assachusetts Institute of Technology
Mechanical translation has had a long history at M.I.T. Shortly after the Warren Weaver memorandum of 1949, Yehoshua Bar-Hillel became the first full-time worker in the field. He contributed many of the early ideas and will be well remembered for this. He organized the first conference on mechanical translation, held at M.I.T. in June of 1952. It was an international conference, and although there were only 18 persons registered, nearly everyone interested in MT in the world at that time was there. Of those 18 people, 4 are on the program of this conference, Leon Dostert, Victor Oswald, Erwin Reifler, and myself. The number of people here today gives a measure of how the field has grown in the intervening 7-1/2 years.
{ "name": [ "Yngve, Victor H." ], "affiliation": [ null ] }
null
null
Proceedings of the National Symposium on Machine Translation
1960-02-01
0
0
null
The reports or proceedings of both these conferences were published in the journal Mechanical Translation. This journal was founded at M.I.T. in 1954 when it became obvious that there was a need for better communication between those interested in MT and to prevent needless duplication of effort. The journal has continued to grow. The first volume contained 57 pages. The current volume, volume five, will contain well over twice that number. Starting with the next volume we will abandon the electric typewriter and photo-offset format, and go to letter press. This will give us a more attractive journal, will allow it to expand naturally, and will speed up the process of publication. We feel at M.I.T. that we are holding the journal in trust until the field comes of age. When the field has grown to the point where it becomes desirable to found a professional society, the journal can become its official organ. Let us now turn to the research on mechanical translation at M.I.T. The group at M.I.T. has always stressed a basic, long-range approach to the problem. We are placing an emphasis on completeness where completeness is possible and on the attempt to find out how to do a complete job where completeness is not now possible. We are not looking for short-cut methods that might yield partially adequate translations at an early date, an important goal pursued by other groups. Instead we are looking for methods that will be capable of yielding fully adequate results wherever they apply. We are thus seeking definitive solutions that will constitute permanent advances in the field rather than ad hoc or temporary solutions that may eventually have to be discarded because they are not compatible with improved systems. The framework within which we are working was described about a year and a half ago in Mechanical Translation. 2 There were two main points in that paper. The first one was concerned with the aspect of completeness and with the point that it is essential for us to understand and use as much as possible of the syntax of the languages being translated. For many years the M.I.T. group has been working in the field of syntax. The other point in the paper was that it is possible, and perhaps necessary, to divide the problem of mechanical translation into six parts, each one fairly independent of the others. We are pleased that other groups are also adopting this same split, because we think it has a lot of merit. One division is between the problems concerning the input language only, the problems concerning the output language only, and the problems concerning the two languages simultaneously. We thus conceive of a three-step translation routine. The first step is recognition of the structure of the input text, the second step is the selection of the structure for the output text that will give the best translational equivalence, and the third step is the production of the actual output text from the specification of its structure. Again, the advantages of this split of the program are great. Here, particularly, there is a great simplification in keeping the monolingual phenomena separate from the bilingual translation phenomena. The result is an increased clarity of the issues.
They are easier to cope with separately. Since we start with input-language text and go through a three-step program resulting in output-language text, there are two intermediate encoded forms of the message, the coded form of the message that passes from the first or recognition step to the second or structure-transfer step, and the coded form of the message that passes from this structure-transfer step to the third or text-production step. These two forms of the message we call specifiers. These specifiers are in no way to be considered as intermediate languages or universal languages. The specifier that passes from the recognition step to the structure-transfer step is an explicit representation of the structure of the input text in terms of the categories appropriate to the input language. It is merely a recoding of the input text with everything of importance made explicit. Similarly, the specifier that passes from the structure-transfer step to the text-production step is an explicit coded form of the structure of the output text. The first of these is the COMIT system, a powerful programming aid which enables the linguist to do his own programming without the difficulties inherent in working through the intermediary of a professional programmer. The system will be described in a later talk. The other tool that is being provided is a method of handling large quantities of text that can be obtained from the publishing industry in the form of punched paper tape. This system of programs, which allows the computer to search through text for particular words or groups of words, is an invaluable aid to the linguist in his study of the structure of languages since it gives him ready access to his data. The programming of the COMIT system is completed and the final check-out is in progress. We expect that it will be available for use soon. The programming has been done in a cooperative arrangement with the M.I.T. Computation Center. When the COMIT system is finished, it will be made generally available. It is hoped that the availability of the system will materially increase the productivity not only of our own group but of many others as well. We have already been using the COMIT notation extensively in mechanical translation research at M.I.T. even though programs cannot yet be run. We have used it to write down in an unambiguous fashion our ideas on translation. This has aided greatly in clarifying our own thoughts and in communicating them to each other. We have come to realize that without an adequate notational system, research becomes very difficult. The other set of programs, for handling large quantities of text, has now been completed and is already in use. adverb; the different behavior of subject and object clauses; the phrase structure of the active and the passive with the "by" phrase; the reversal of order of direct and indirect object; the shifting of the position of the separable verb particle; the function of the anticipatory "it"; the first position of the interrogative pronoun; the discontinuous nature of adjectival and adverbial phrases; the position of certain adverbs before the article; the fact that when the genitive marker follows its noun phrase, it is an affix "'s", and when it precedes it is a separate word "of"; and that derivational affixes are suffixes, and prepositions, articles, and conjunctions are separate words. This work will be published soon in the Proceedings of the American Philosophical Society. So you see that the mechanical translation research at M.I.
T. is proceeding simultaneously on a number of fronts, and that some progress is being made toward a solution of the very difficult problems facing us in the development of mechanical translation to the point where mankind can count on it as a reliable means of bridging the language barriers. This work was supported in part by the National Science Foundation, and in part by the U.S. Army (Signal Corps), the U.S. Air Force (Office of Scientific Research, Air Research and Development Command), and the U.S. Navy (Office of Naval Research). "A Framework for Syntactic Translation", Mechanical Translation, vol. IV, no. 3.
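The three-step routine and its two specifiers, described earlier in this report, can be pictured with a skeletal Python sketch. This is ours, not M.I.T. code: the structures are placeholder dictionaries, and the point is only the data flow, in which the two monolingual steps never see the other language and the bilingual work is confined to the transfer step.

def recognize(input_text):
    """Step 1: input-language recognition -> input specifier."""
    # Placeholder "analysis": the real step yields an explicit coded
    # representation of the input structure, in input categories only.
    return {"side": "input", "structure": input_text.split()}

def transfer(input_specifier):
    """Step 2: structure transfer, the only bilingual step."""
    # Placeholder: selects the output structure giving the best
    # translational equivalence for the input structure.
    return {"side": "output", "structure": input_specifier["structure"]}

def produce(output_specifier):
    """Step 3: output-language text production from the specifier."""
    return " ".join(output_specifier["structure"])

def translate(text):
    return produce(transfer(recognize(text)))

print(translate("a placeholder sentence"))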
null
null
null
null
Main paper: : The reports or proceedings of both these conferences were published in the journal Mechanical Translation.This journal was founded at M.I.T. in 1954 when it became obvious that there was a need for better communication between those interested in MT and to prevent needless duplication of effort. The journal has continued to grow. The first volume contained 57 pages. The current volume, volume five, will contain well over twice that number. Starting with the next volume we will abandon the electric typewriter and photooffset format, and go to letter press. This will give us a more attractive journal, will allow it to expand naturally, and will speed up the process of publication. We feel at M.I.T. that we are holding the journal in trust until the field comes of age. When the field has grown to the point where it becomes desirable to found a professional society, the journal can become its official organ.Let us now turn to the research on mechanical translation at M.I.T. The group at M.I.T. has always stressed a basic, long-range Session 2:CURRENT RESEARCH approach to the problem. We are placing an emphasis on completeness where completeness is possible and on the attempt to find out how to do a complete job where completeness is not now possible.We are not looking for short-cut methods that might yield partially adequate translations at an early date, an important goal pursued by other groups. Instead we are looking for methods that will be capable of yielding fully adequate results wherever they apply. We are thus seeking definitive solutions that will constitute permanent advances in the field rather than ad hoc or temporary solutions that may eventually have to be discarded because they are not compatible with improved systems.The framework within which we are working was described about a year and a half ago in Mechanical Translation. 2 There were two main points in that paper. The first one was concerned with the aspect of completeness and with the point that it is essential for us to understand and use as much as possible of the syntax of the languages being translated. For many years the M.I.T. group has been working in the field of syntax. The other point in the paper was that it is possible, and perhaps necessary, to divide the problem of mechanical translation into six parts, each one fairly independent of the others. We are pleased that other groups are also adopting this same split, because we think it has a lot of merit. between the problems concerning the input language only, the problems concerning the output language only, and the problems concerning the two languages simultaneously. We thus conceive of a three-step translation routine. The first step is recognition of the structure of the input text, the second step is the selection of the structure for the output text that will give the best translational equivalence, and the third step is the production of the actual output text from the specification of its structure. Again, the advantages of this split of the program are great.Here, particularly, there is a great simplification in keeping the monolingual phenomena separate from the bilingual translation phenomena. The result is an increased clarity of the issues. 
They are easier to cope with separately.Since we start with input-language text and go through a threestep program resulting in output-language text, there are two intermediate encoded forms of the message, the coded form of the message that passes from the first or recognition step to the second or structuretransfer step, and the coded form of the message that passes from this structure transfer step to the third or text-production step. These two forms of the message we call specifiers. These specifiers are in no way to be considered as intermediate languages or universal languages.The specifier that passes from the recognition step to the structuretransfer step is an explicit representation of the structure of the input text in terms of the categories appropriate to the input language.It is merely a recoding of the input text with everything of importance made explicit. Similarly, the specifier that passes from the structuretransfer step to the text-production step is an explicit coded form of the structure of the output text. The first of these is the COMIT system, a powerful programming aid which enables the linguist to do his own programming without the difficulties inherent in working through the intermediary of a professional programmer. The system will be described in a later talk. The other tool that is being provided is a method of handling large quantities of text that can be obtained from the publishing industry in the form of punched paper tape. This system of programs, which allows the computer to search through text for particular words or groups of words, is an invaluable aid to the linguist in his study of the structure of languages since it gives him ready access to his data.The programming of the COMIT system is completed and the final check-out is in progress. We expect that it will be available for use soon. The programming has been done in a cooperative arrangement with the M. I. T. Computation Center.When the COMIT system is finished, it will be made generally available.It is hoped that the availability of the system, will materially increase the productivity not only of our own group but of many others as well.We have already been using the COMIT notation extensively in mechanical translation research at M.I.T. even though programs cannot yet be run. We have used it to write down in an unambiguous fashion our ideas on translation. This has aided greatly in clarifying our own thoughts and in communicating them to each other.We have come to realize that without an adequate notational system, research becomes very difficult.The other set of programs, for handling large quantities of text, has now been completed and is already in use. adverb; the different behavior of subject and object clauses; the phrase structure of the active and the passive with the "by" phrase;the reversal of order of direct and indirect object; the shifting of the position of the separable verb particle; the function of the anticipatory "it"; the first position of the interrogative pronoun; the discontinuous nature of adjectival and adverbial phrases; the position of certain adverbs before the article; the fact that when the genitive marker follows its noun phrase, it is an affix " 's", and when it precedes it is a separate word "of"; and that derivational affixes are suffixes, and prepositions, articles, and conjunctions are separate words.This work will be published soon in the Proceedings of the American Philosophical Society.So you see that the mechanical translation research at M.I. 
T. is proceeding simultaneously on a number of fronts, and that some progress is being made toward a solution of the very difficult problems facing us in the development of mechanical translation to the point where mankind can count on it as a reliable means of bridging the language barriers. This work was supported in part by the National Science Foundation, and in part by the U.S. Army (Signal Corps), the U.S. Air Force (Office of Scientific Research, Air Research and Development Command), and the U.S. Navy (Office of Naval Research). "A Framework for Syntactic Translation", Mechanical Translation, vol. IV, no. 3. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
780
0
null
null
null
null
null
null
null
null
695dd7dea2c37118c3e65bee2d9196de51d38dc7
244077688
null
Nestings Within the Prepositional Structure
This paper presents a preliminary description of an algorithmic operation to handle nested strings within the prepositional structure in Russian. The prepositional structure is defined as consisting of a preposition with its case-determining requirement at the permitted point of entry, and of a noun or its substitute at the permitted point of exit which satisfies the requirement incurred by the preposition. Example: на мосту "on the bridge". The flow chart shown in Appendix II illustrates in detail the variables and relations, as well as the expected results involved in the computable syntactic analysis of the prepositional structure containing one or more nested strings. The nested strings consist of either weak or strong government structure. The former usually precedes the latter. The type of nested string can be another prepositional structure, a pronominal structure, or a noun structure. The order of the nested strings is fixed at one point. A pronominal case determiner follows the first preposition. A second prepositional case determiner can only occur after the first pronominal case determiner. This algorithmic operation has been worked out on the basis of the data extracted from 50,000 words of continuous text in the field of organic chemistry. It has then been generalized to the degree permissible by the extracted structural cues.
{ "name": [ "Zarechnak, Michael" ], "affiliation": [ null ] }
null
null
Proceedings of the National Symposium on Machine Translation
1960-02-01
0
0
null
null
null
The prepositional structure is defined as consisting of a preposition with its case-determining requirement at the permitted point of entry, and of a noun or its substitute at the permitted point of exit which satisfies the requirement incurred by the preposition. на мосту "on the bridge" The flow chart shown in Appendix II illustrates in detail the variables and relations, as well as the expected results involved in the computable syntactic analysis of the prepositional structure containing one or more nested strings. The nested strings consist of either weak or strong government structure. The former usually precedes the latter. The type of nested string can be another prepositional structure, a pronominal structure, or a noun structure. The order of the nested strings is fixed at one point. A pronominal case determiner follows the first preposition. A second prepositional case determiner can only occur after the first pronominal case determiner. This algorithmic operation has been worked out on the basis of the data extracted from 50,000 words of continuous text in the field of organic chemistry. It has then been generalized to the degree permissible by the extracted structural cues. The concept of a nested structure is similar to that of the branching of a tree or a graduated series of boxes. The linguistic description of the algorithm is briefly summarized in Appendix III. The algorithm has another application besides that of the proper unloading of the nested structure. Later it serves as an input for the transfer from Russian to English and for the rearrangement operator. In the list of data given in Appendix I, both the Russian and English equivalents are given first as they actually occur, and subsequently, followed by the letter "a", in sequences which are expected to be produced by the computer. The program can be divided into four sections. The nested stretch is determined and extracted by another routine and is sent to the Nesting Program for computation. Section I (000 series in the flow chart) After setting the counters and other necessary housekeeping and initialization operations, the following steps are taken: 1. Inquire whether the first item in the stretch is a preposition with a case determiner. If "no", there is an error. Check for the error and restart the operation. If "yes", check whether the case determiner of the preposition is ambiguous, and if so go to the special routine for the resolution of the ambiguity. Then proceed to Section II below. Section II (100 series in the flow chart) 1. Establish the particular case of the case determiner of the preposition. 2. Remember the case of the first preposition in one counter, and remember the cases of the following prepositions in another counter. Each time a preposition is satisfied by unloading, erase the counters. 3. Check the counters for previously stored weak government, etc., and unload on the preposition or prepositions with matching cases. Move to the next word and proceed to Section III below. Section III (200 series in the flow chart) 1. Check whether the following word is an adjective, adverb, conjunction, particle, or an infinitive verb. If none of these occurs in the first scanning, there is an error which is checked and the subroutine is tested again.
During the subsequent scannings, however, if none of the above occurs, a further check is made to determine whether the item is a noun or another preposition.
2. If an adjective was found in Step 1 of this section, a further check is made to determine whether it has a strong case determiner code or a weak one. In either case it is recorded in special separate counters.
3. If an infinitive verb was found, its determining factor is remembered in another special counter.
4. In all the above cases the routine moves to the next word (except under special circumstances where the move is prevented by Switch No. 3) and proceeds to Section IV below.

Section IV (300 series in the flow chart).
1. Check whether the item under consideration is a noun. If "no", set the switches for proper execution and go to the beginning of the routine (Point II in the flow chart) to check on preposition, adjective, etc. If "yes", go to Step 2 below.
2. Check whether the noun has a case determiner. If "yes", remember it in a special counter; in either case go to Step 3.
3. Does the case of this noun correspond with that of the first preposition in the stretch? If "yes", unload the code 512x ("x" being the proper case found) into the appropriate preposition and noun. The codes have been explained in the report on Current Research at Georgetown University given at this Symposium in Session 2. See examples 3 and 22 in Appendix I.
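The four sections above lend themselves to a compact illustration. The following Python fragment is a minimal sketch, not Georgetown's program: the word records (text, part-of-speech, case triples), the reduced case inventory, and the digit map behind the "512x" codes are invented assumptions; only the control flow follows Sections I-IV, with the nested prepositions kept on a stack and unloaded by a noun of matching case.

# A minimal sketch, assuming simplified word records: each item of the
# stretch is a (text, part_of_speech, case) triple, and the digit map
# behind the "512x" codes is hypothetical.

CASE_DIGIT = {"gen": "2", "dat": "3", "acc": "4", "instr": "5", "prep": "6"}  # assumed

def nest(stretch):
    text0, pos0, case0 = stretch[0]
    # Section I: the stretch must open with a case-determining preposition.
    if pos0 != "PREP" or case0 is None:
        raise ValueError("stretch must begin with a preposition carrying a case determiner")
    pending = [0]   # Section II: indices of prepositions awaiting a matching noun
    links = []      # (preposition index, noun index, 512x code) produced by unloading
    for i, (text, pos, case) in enumerate(stretch[1:], start=1):
        if pos == "PREP" and case is not None:
            pending.append(i)          # remember the case of the nested preposition
        elif pos == "NOUN":
            # Section IV: unload on the innermost pending preposition of matching case.
            for j in reversed(pending):
                if stretch[j][2] == case:
                    links.append((j, i, "512" + CASE_DIGIT[case]))
                    pending.remove(j)  # the preposition is satisfied; erase its counter
                    break
        # Section III: adjectives, adverbs, conjunctions, particles, and infinitives
        # are simply passed over here; the full routine records their government codes.
    return links

# на мосту "on the bridge": the prepositional case of мосту satisfies на.
print(nest([("на", "PREP", "prep"), ("мосту", "NOUN", "prep")]))   # [(0, 1, '5126')]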
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
780
0
null
null
null
null
null
null
null
null
0fad3801ac2c71f12e68eb3b26da82c7235d944f
243739969
null
Conclusion
H. P. Edmundson, Planning Research Corporation: On behalf of my colleagues, I thank you for the kind remarks. None of this, of course, is possible without such scholars as Professor Yngve.
{ "name": [ "Edmundson, H. P." ], "affiliation": [ null ] }
null
null
Proceedings of the National Symposium on Machine Translation
1960-02-01
0
0
null
null
null
null
null
We are on the frontier of a very exciting interdisciplinary endeavor, and we will see a very steady acceleration in MT efforts in this country. While the past 10 years have gone rather slowly for MT, I predict that the next 10 years will yield significant results in all linguistic data processing.
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
780
0
null
null
null
null
null
null
null
null
b8721413dafd14c991cf4e32e08faf32c2d701eb
244077649
null
Cambridge Language Research Unit Presentation
After he offered a definition of a lexeme as "the basic unit of the dictionary or lexicon", Professor Lamb made some observations on lexemes in general, and then, turning back to the handout, shifted the discussion to nonce forms (forms coined as combinations of items), and related material on segmentation.
{ "name": [ "Masterman, Margaret and", "Needham, Roger" ], "affiliation": [ null, null ] }
null
null
Proceedings of the Wayne State University Conference of Federally Sponsored Machine Translation Workers
1960-07-01
0
0
null
null
null
null
He added that with the blocking routine, titivation (homograph resolution) is carried on alternately with bracketing, rather than doing everything in two separate stages. Mr. Needham also described Parker-Rhodes' Rule for Bracketing, and thereafter proceeded to offer a graphic example of how a dictionary entry is made. He also presented some CLRU handout material in conjunction with his demonstration. In summation, he added that the system could be adapted to another language, the only changes made being in the dictionary and titivation routines. Mr. Needham offered to answer any questions from the floor.

Two questions receiving primary attention in the following open discussion period were concerned with scanning technique and the order of precedence to be taken regarding volume of data and awkward cases. It was generally agreed that scanning should be done back and forth, and there remained some mixed feeling about whether or not volumes of data should be taken first, as opposed to the immediate analysis of awkward examples.

Wednesday, 20 July, 9:00-10:15 a.m. Dr. Josselson's presentation consisted of a detailed description of the grammar coding scheme which the Wayne group is presently using. He discussed the 'part of speech' categories and the differences between the present and traditional grammar classes. The coding sheet contains information to be used in the process of making translation decisions on both syntactic and semantic levels. In many instances a bit of information in the grammar code applies to a set of words, and a list of words in this set was included in the instructions. Dr. Josselson noted that the lists were in many cases merely a beginning, and that they could and would be expanded. He pointed out that one task for MT investigators is to seek and record examples of linguistic phenomena. He added that the questions asked in the coding format will change on the basis of further syntactic investigation; new categories will appear, and others may turn out to be unnecessary.
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
775
0
null
null
null
null
null
null
null
null
c40cc26b91d594dcace63c2985421c721be954c7
244077678
null
Centro di Cibernetica di Milano Presentation
WAYNE STATE UNIVERSITY PRESENTATION, JANIOTIS: Miss Janiotis briefly discussed a 709 interpretive subroutine for machine translation problems (a description and flowcharts appear in the Wayne handout). She answered several questions and then proceeded to discuss nominal, prepositional, and governing modifier blocking routines, as they appear in the Wayne handout. She noted that the blocking routines were similar to that which was offered earlier by Mr. Needham of CLRU, under the title of Bracketing.
{ "name": [ "Ceccato, Silvio" ], "affiliation": [ null ] }
null
null
Proceedings of the Wayne State University Conference of Federally Sponsored Machine Translation Workers
1960-07-01
0
0
null
She noted that the blocking routines were similar to that which was offered earlier by Mr. Needham of CLRU, under the title of Bracketing. Miss Janiotis elaborated on the Nominal Blocking Routine, and the remaining time was spent in open discussion of both Dr. Josselson's and Miss Janiotis' presentations.

CENTRO DI CIBERNETICA DI MILANO PRESENTATION. Wednesday, 20 July, 10:45-12:00 a.m. Dr. Ceccato prefaced his presentation with an announcement of his three-hundred-page report, which is going through final proofing, and which he offered to mail to all interested conference participants as soon as it is published. He also wished to point out that the work he and his staff are presently doing is neither dictionary-grammar nor syntax oriented, so much as it is directed toward semantic analysis.

Dr. Ceccato continued by explaining more specifically that his group is trying to produce what is virtually a thinking machine which will simulate the processes of the human mind. According to Dr. Ceccato, the processes of the human mind involve a series of prescribed and fixed operations; moreover, the problem at hand could be reduced to two questions that confront the investigator: (1) What is the structure of our thought? and (2) How are we to put a link between our language and our thought?

He attempted to clarify his hypothesis further by drawing several diagrams on the blackboard, first presenting the thought process as a product of what he termed the "correlator" and the "correlation", and second, drawing several examples from simple English and Italian phrases and analyzing them in terms of his thought-process box diagram. Dr. Ceccato continued to elaborate on the function of the "correlator", adding parenthetically that while some languages relied upon form (declension and inflection), others relied upon order (context). But he explained that it was not the language that changed; rather, it was the thought, and for us the correlation is done by the machine. After some remarks about his two levels of language, i.e., the language itself and those things that operate the language, Dr. Ceccato invited the group to gather around him as he presented and explained graphical data, including coding material and charts.

Wednesday, 20 July, 2:00-3:15 p.m. Mr. Ziehe began the session by discussing the RAND handout Available RAND Linguistic Data. In discussing the text and dictionary he defined:
(a) an occurrence, as an instance of a form in text;
(b) a form, as a unique sequence of alphabetic characters that is preceded and followed in text by spaces and/or punctuation;
(c) a word, as the collection of forms that constitute a paradigm.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
775
0
null
null
null
null
null
null
null
null
3e74f048b6aa5a0b778eee9d00b76231bcc2e20d
244077627
null
Wayne State University Presentation
Two questions receiving primary attention in the following open discussion period were concerned with scanning technique and the order of precedence to be taken regarding volume of data and awkward cases.
{ "name": [ "Josselson, Harry H. and", "Janiotis, Amelia" ], "affiliation": [ null, null ] }
null
null
Proceedings of the Wayne State University Conference of Federally Sponsored Machine Translation Workers
1960-07-01
0
0
null
It was generally agreed that scanning should be done back and forth, and there remained some mixed feeling about whether or not volumes of data should be taken first, as opposed to the immediate analysis of awkward examples.

Wednesday, 20 July, 9:00-10:15 a.m. Dr. Josselson's presentation consisted of a detailed description of the grammar coding scheme which the Wayne group is presently using. He discussed the 'part of speech' categories and the differences between the present and traditional grammar classes. The coding sheet contains information to be used in the process of making translation decisions on both syntactic and semantic levels. In many instances a bit of information in the grammar code applies to a set of words, and a list of words in this set was included in the instructions. Dr. Josselson noted that the lists were in many cases merely a beginning, and that they could and would be expanded. He pointed out that one task for MT investigators is to seek and record examples of linguistic phenomena. He added that the questions asked in the coding format will change on the basis of further syntactic investigation; new categories will appear, and others may turn out to be unnecessary.

CENTRO DI CIBERNETICA DI MILANO PRESENTATION. Wednesday, 20 July, 10:45-12:00 a.m. Dr. Ceccato prefaced his presentation with an announcement of his three-hundred-page report, which is going through final proofing, and which he offered to mail to all interested conference participants as soon as it is published. He also wished to point out that the work he and his staff are presently doing is neither dictionary-grammar nor syntax oriented, so much as it is directed toward semantic analysis. Dr. Ceccato continued by explaining more specifically that his group is trying to produce what is virtually a thinking machine which will simulate the processes of the human mind. According to Dr. Ceccato, the processes of the human mind involve a series of prescribed and fixed operations.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
775
0
null
null
null
null
null
null
null
null
0619ea61b6c3173470d53e18c1b3cae6325a1d57
244077683
null
Massachusetts Institute of Technology Presentation
Dr. Brown had nothing he felt he might offer in the way of linguistic information, in view of the fact that he has spent the past fourteen months concentrating on questions of programming only. A significant product of this fourteen-month period is Dr. Brown's "Simulated Linguistic Computer". Dr. Brown presented his handout A Symbolic Language for Programming the Simulated Linguistic Computer, and taking the word 'haut' as an example (Dr. Brown's work has dealt exclusively with French), he discussed and graphically demonstrated an 'up-dating' procedure. A brief question-answer discussion period followed. A question of major concern involved the quantity of text that should be required in order to form positive conclusions. It was generally agreed that it would depend upon both the amount of attention directed toward the text and the extent to which one would rigidly adhere to established categories.
{ "name": [ "Lieberman, David and", "Yngve, Victor H." ], "affiliation": [ null, null ] }
null
null
Proceedings of the Wayne State University Conference of Federally Sponsored Machine Translation Workers
1960-07-01
0
4
null
the Simulated Linguistic Computer, and taking the word 'haut' as an example (Dr. Brown's work has dealt exclusively with French), he discussed and graphically demonstrated an 'up-dating' procedure. A brief question-answer discussion period followed. A question of major concern involved the quantity of text that should be required in order to form positive conclusions. It was generally agreed that it would depend upon both the amount of attention directed toward the text and the extent to which one would rigidly adhere to established categories. There was also general agreement with Mrs. Masterman's comment that it was essential for the message to be preserved, that one could not determine what had been missed in translation by simply reading output.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY PRESENTATION. Tuesday, 19 July, 10:45-12:00 a.m. LIEBERMAN: Dr. Lieberman presented a handout concerning a search routine, prepared by Ken Knowlton, and offered some general statistical information about the routine. He said that the input for this routine must be punched in a specific manner, which is worked out by the U.S. Patent Office and M.I.T. He further explained that each occurrence is given an integral number of machine words and that as many as one hundred items could be searched for at one time. He demonstrated that several requests might be satisfied by one text sequence. Search time for scanning 200,000 words of text is about ten minutes, plus 0.2 seconds for each encounter if context is to be printed out. The source material which was used included 100,000 words each of (1) Associated Press material, (2) German newspapers, and (3) Patent Office material. Some general discussion of the handout text ensued.

Dr. Yngve initially offered some general comments about COMIT, adding that the program was to be distributed through SHARE. He then presented his approach, with particular emphasis centered around the 'depth phenomenon' and subsequent phrase structure. He treated related questions such as how much memory is needed for specific procedures, e.g., the expansion of the sentence into (a) subject and (b) predicate. He proceeded with the presentation, offering a definition of the 'depth of a node' as being "the number of right branches required to go from that node back to the top". In estimating the size of a temporary memory, he suggested that a memory of about seven items is needed for producing English. He added that one result of the depth phenomenon is that we now have a definite reason for explaining why some sentences are awkward.

Dr. Yngve then discussed unordered phrase-structure rules, adding that a grammar of this kind can be constructed, as is implied in the M.I.T. handout. He then presented some sample output from a COMIT program designed to generate sentences at random. He explained that the program was text-oriented, and that he had used a children's book, Engineer Small, which, with its forty-word vocabulary, was understandably limited. The result is output produced without initial input. He made a point of emphasizing the fact that the advantage of this linguistic system was its simplicity.
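Dr. Yngve's depth measure can be made concrete in a few lines. The sketch below is an illustration only: the nested-tuple encoding of a binary phrase-structure tree is an invented convention, and the function computes, for each word, how many constituents are waiting in the temporary memory when that word is produced, which is the quantity his hypothesis bounds at about seven (whether these count as 'right' or 'left' branches depends on how the tree is drawn).

# A small sketch of node depth via the temporary-memory mechanism: while a
# construction's first constituent is being expanded, its second constituent
# waits in memory, so a word's depth is the number of constituents pending
# when it appears. The tuple tree encoding is an assumption for illustration.

def depths(tree, pending=0, out=None):
    """Map each word (leaf) of a binary tree to its depth."""
    if out is None:
        out = {}
    if isinstance(tree, str):          # a leaf: record the pending-constituent count
        out[tree] = pending
        return out
    first, second = tree
    depths(first, pending + 1, out)    # while inside the first part, the second waits
    depths(second, pending, out)       # the stored constituent has now been taken out
    return out

print(depths((("the", "engine"), ("is", "oiled"))))
# -> {'the': 2, 'engine': 1, 'is': 1, 'oiled': 0}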
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
775
0.005161
null
null
null
null
null
null
null
null
4b89f0a2b1404e3257a1301b66ebea61240d9c9d
913028
null
Random generation of English sentences
THIS paper reports the results of writing and running a program which constructs English sentences. The sentences are chosen at random by the program from among those English sentences that conform to certain rules of sentence structure. This work is a continuation of a line of research begun several years ago.
{ "name": [ "Yngve, Victor H." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
5
21
null
IN the paper "A Framework for Syntactic Translation", 1 it was proposed that a translation routine could be divided into six logically separate parts. There was a horizontal division into three steps: sentence analysis, transfer of structure, and sentence synthesis; and there was a vertical division into the operational parts, or routines proper, and the parts that contained all the necessary knowledge of the structures of the languages involved and their interrelation. It was hoped that to divide is to conquer.In the work reported here we are concerned with just two of the six parts of a translation routine -the sentence-synthesis routine and the grammar, which is eventually to contain as complete a set of rules for English sentence structure as possible.Of the various possible forms for writing a grammar, the generative 2 form seems to have the most to recommend it. A generative grammar is a grammar written in a manner analogous to a deductive system. Its main advantage is that it offers a relatively easy method of dealing with the difficult problems posed by the multiple function of words and constructions. It also seems plausible that a generative type of grammar is not necessarily confined in its use to a sentence synthesis routine, but could be used equally well with a sentence recognition routine. 3 A number of forms of generative grammar have been explored. The original transformational type of grammar has been abandoned because it cannot be mechanized by a finite device, because of the difficulty of assigning a phrase structure to the result of a transformation, and for several other lesser technical reasons.The type of grammar and sentence synthesis mechanism finally chosen have been described in detail elsewhere. 4 The grammar consists of a finite set of phrase-structure rules that can be applied one at a time by the sentence synthesis mechanism. The rules of the grammar form an unordered set. The order in which they are applied is determined by what is needed next as the words of the unfolding sentence are produced in their natural order, that is left-to-right according to the English orthographic convention.At present there are three types of rules allowed in the grammar. They are applied by the mechanism as follows.Rules of the type A = B + C means that a construction or form A is to be replaced by its functional parts or constituents B and C. An example would be SENTENCE = SUBJECT + PREDICATE. When A is replaced by B + C, B, the first element on the right, will be treated as the element on the left when the next rule is applied, but C, the second element on the right, is placed in a temporary storage organized on a last in -first out principle. previous principle of last in -first out. The C is placed in a position immediately behind the constituent that would be next out so that for rules of this third type the principle is that the last in is given second priority. An example would be VERB-PHRASE = VERB+ ... +ADVERB as in "He called her up." Even if the grammar consists of a finite set of rules, the mechanism can in general produce sentences from an infinite set, that is the grammar might impose no limit on the length of sentences. This would be the case if there were recursive loops in the grammar in such a way that certain rules could be reapplied an unlimited number of times. There appears in fact to be no limit to the length of English sentences.Similarly the grammar might impose no limit on the number of symbols stored at any time in the temporary memory. 
If such were the case, the mechanism would not be physically realizable, because it would need an infinite temporary memory. An examination of English sentences appeared to show that a small temporary memory, capable of holding no more than about seven symbols, was adequate. This led to the hypothesis [4] that English and probably all languages possess grammars that impose the limitation that no more than about seven items need ever be stored in the temporary memory. A phrase-structure grammar with this restriction is equivalent to a finite-state device. Many of the complications of English syntax can be understood as the means for imposing such a restriction.

A need has thus developed for a relatively complete grammar of English. It is needed for use in the translation routine. It is needed in order to test more carefully on English the hypothesis that a grammar with the predicted restriction is adequate. It is also needed in order to explore further certain additional questions about the structure of grammars.

The grammar of a language cannot be written down immediately in its final form. It must be discovered. And as the various parts of it are discovered and written down, they must be tested. The testing of a grammar for adequacy is not easy; the aid of a computer seems indispensable.

A set of rules that purports to represent the grammar of a language, or part of it, partitions the set of all strings of characters into two mutually exclusive subsets: the subset containing those strings that it can generate, and the subset containing those strings that it cannot generate. If the set of rules is adequate as the grammar of a language, then the set of strings that the grammar can produce will all be recognized by native speakers as belonging to the language, and the set of strings that the grammar cannot produce will be recognized by native speakers as not belonging to the language.

It is, of course, recognized that in many borderline cases native speakers are unsure, and disagree as to whether a string is part of the language or not. But even if native speakers were always sure, and always agreed among themselves, a complete validation of a grammar would be impossible, for the simple reason that the sets of strings to be tested are infinite sets. We are forced to fall back on a sampling procedure. A random sample of the set of strings generated by the grammar can be produced by a computer program and examined by native speakers. In addition, sentences that are found to occur naturally can be checked to see if they are produced by the grammar. At a later stage, part of this process can be mechanized by the use of a recognition routine.

For the first stage of writing and validating a grammar of English, it was decided to start with the simple, straightforward language of a carefully selected children's book [5], and write the rules necessary to generate its 161 sentences. More complicated material would be turned to later. It was soon evident, however, that it would be too difficult to write the rules for the whole book before testing any of them. Attention was therefore directed to the first ten sentences, which provide a surprisingly wide linguistic diversity. These ten sentences are as follows:

A set of tentative rules was written down that could produce these ten sentences and many other similar sentences. Among the trivial shortcomings of the grammar, it must be pointed out that the article A is not changed to AN before a vowel, and the plural S is not changed to ES after FIRE-BOX.
Routines to do these things are straightforward and well understood; they will be added later.

The sentence-producing routine and the grammar were coded in COMIT [6]; a copy is appended. The full power of the subscript operations available in COMIT was not used in this program, but will be needed for the next. The first run after the initial program check-out revealed an error in one of the grammar rules. This was corrected, and a run of 100 sentences produced an output deck of cards, which were then used to print the appended set of random sentences. The output sentences were for the most part quite grammatical, though of course nonsensical.

An examination of the output sentences reveals a number of interesting points for further investigation. Most of these involve the coordination structures. In several of the sentences, the same item appears more than once in a series; there may be grammatical restrictions here. Also, it appears difficult to coordinate such diverse types of singular noun phrases as *ENGINEER *SMALL, WATER, THE BOILER, BOILERS, *SMALL AND IT. These items already represent different constructions in the grammar, but are shown together for purposes of coordination.

This raises a delicate point as to where to draw the line between what is grammatical and what is not grammatical, a question that is further pointed up by such sentences as *WHEN HE IS OILED, HE IS POLISHED. In the original sentence, OILED and POLISHED refer to the engine, and are used in their literal sense. In the above sentence, they would normally be construed in a different way. This fact argues in favor of setting up a classification of animate and inanimate, which could be used to restrict adjectives and nouns. BIG and LITTLE could apply to both groups of nouns, but OILED and POLISHED would have to be entered twice, with different meanings. The trouble with all this is that the restriction is really semantic and not grammatical: the sentence can be construed in its literal sense, although this is admittedly a bit far-fetched.
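The mechanism described above can be sketched in a few lines of Python as a reading aid. The toy grammar is invented for illustration (the paper's actual grammar was coded in COMIT and appended to the original); only the machinery follows the text: an unordered rule set, random choice among expansions sharing a left-hand side, a last-in-first-out temporary memory for rules of the type A = B + C, and second-priority filing for the discontinuous type A = B + ... + C.

# A toy re-creation of the sentence-producing mechanism, under the
# assumptions stated above. A two-element right side (B, C) stores C
# last in - first out; a three-element right side (B, "...", C) files
# C immediately behind the constituent that would otherwise come out next.

import random

GRAMMAR = {  # invented for illustration; strings not listed here are terminals
    "SENTENCE":  [("SUBJECT", "PREDICATE")],
    "SUBJECT":   [("the engine",), ("water",), ("he",)],
    "PREDICATE": [("VERB", "OBJECT"), ("VERB-PART", "OBJECT")],
    "VERB":      [("heats",), ("polishes",)],
    "VERB-PART": [("calls", "...", "up")],      # "calls ... up"
    "OBJECT":    [("the boiler",), ("her",)],
}

def generate(start="SENTENCE", limit=7):
    words, memory = [], []                       # memory is the temporary LIFO store
    current = start
    while True:
        if current not in GRAMMAR:               # a terminal: emit it, fetch the next symbol
            words.append(current)
            if not memory:
                return " ".join(words)
            current = memory.pop()
        else:
            rhs = random.choice(GRAMMAR[current])
            if len(rhs) == 1:                    # A = B: simple replacement
                current = rhs[0]
            elif len(rhs) == 2:                  # A = B + C: C waits, last in - first out
                memory.append(rhs[1])
                current = rhs[0]
            else:                                # A = B + ... + C: last in, second priority
                b, _, c = rhs
                memory.insert(max(len(memory) - 1, 0), c)
                current = b
            assert len(memory) <= limit, "temporary memory exceeded"

for _ in range(3):
    print(generate())   # e.g. "he calls the boiler up"

The discontinuous rule shows the second-priority principle at work: expanding VERB-PART while OBJECT waits in memory files "up" behind OBJECT, so the object is produced before the particle, as in "he calls her up".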
null
null
null
null
null
null
null
null
{ "paperhash": [ "yngve|a_model_and_an_hypothesis_for_language_structure", "yngve|a_programming_language_for_mechanical_translation", "yngve|a_framework_for_syntactic_translation" ], "title": [ "A model and an hypothesis for language structure", "A programming language for mechanical translation", "A framework for syntactic translation" ], "abstract": [ "Cover title. \"Reprint from Proceedings of the American Philosophical Society, vol.104, no.5.\"", "A notational system for use in writing translation routines and related programs is described. The system is specially designed to be convenient for the linguist so that he can do his own programming. Programs in this notation can be converted into computer programs automatically by the computer. This article presents complete instructions for using the notation and includes some illustrative programs.", "Adequate mechanical translation can be based only on adequate structural descriptions of the languages involved and on an adequate statement of equivalences. Translation is conceived of as a three-step process: recognition of the structure of the incoming text in terms of a structural specifier; transfer of this specifier into a structural specifier in the other language; and construction to order of the output text specified." ], "authors": [ { "name": [ "V. Yngve" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "V. Yngve" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "V. Yngve" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "18889404", "36706879", "26279825" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
null
761
0.027595
null
null
null
null
null
null
null
null
d14608edd72ec74c2bb2469a014db032d46183ce
6621683
null
On the value of dependency connections
VALUES are tentatively defined as numbers assigned to types of syntactic relations such that connections of higher value are established in preference to connections of lower value during sentence-structure determination. Given a text in which sentence structures are known, the values of some syntactic relations can be estimated by the following plan: assign value 1 to relations such that no relation is known to have lower value; assign value 2 to relations such that all relations known to have lower value are also known to have value 1; etc. The same procedure can be used in assigning adjectives to order classes, and for similar purposes.
{ "name": [ "Hays, David G." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
5
0
null
VALUES are tentatively defined as numbers assigned to types of syntactic relations such that connections of higher value are established in preference to connections of lower value during sentence-structure determination. Given a text in which sentence structures are known, the values of some syntactic relations can be estimated by the following plan: assign value 1 to relations such that no relation is known to have lower value; assign value 2 to relations such that all relations known to have lower value are also known to have value 1; etc. The same procedure can be used in assigning adjectives to order classes, and for similar purposes.

PROGRAMMES for sentence-structure determination (SSD), also called syntactic recognition or parsing programmes, differ in their responses to "ambiguity." Some programmes yield all possible structures of an ambiguous sentence, but most, like the RAND SSD programme, yield only one structure per sentence, namely, the most plausible structure according to the rules of some screening procedure. A programme of either type can fail to produce any "correct" structure for a given sentence, and a programme that seeks the most plausible single structure for each sentence is bound to miss one or more correct structures for any ambiguous sentence. Any programme of the latter type, which will be called heuristic in this paper, avoids certain excesses of the former type, since an exhaustive SSD programme can yield dozens of different structures per sentence if its grammar is weak. The more powerful the grammar, the fewer the structures yielded by an exhaustive programme, and the more likely the heuristic programme is to yield a complete, correct structure, assuming certain unproved qualities for natural language.

Now, a heuristic SSD programme requires many heuristic devices to lead it, as directly as possible, to a single plausible structure; an exhaustive programme can utilize the same devices to rank its structures from most plausible to least. One device is the assignment of value numbers to constructions (in an immediate-constituent theory) or to dependencies (in a dependency theory). Faced with a plurality of possible dependency connections, the heuristic programme establishes the one with highest value. Faced with a plurality of complete structures for a single sentence, the exhaustive programme orders them from highest average value to lowest. The concept of value has appeared before in the machine-translation literature, under several names, such as urgency [1] [2]. The present paper offers an explication of the concept and a method for assignment of values on the basis of empirical data. Some other linguistic applications of the same method are noted in Sec. 3.
null
null
Values are to be assigned in such a way that establishing high-value dependency connections in preference to low-value ones improves the average accuracy of an SSD programme. In this section, a plan is given for the use of value numbers during SSD. This plan is not the only conceivable plan, and it is not necessarily useful for all types of dependency connections; it is proposed as a scheme for finding the governors of prepositions.

The RAND SSD programme establishes dependency connections one by one; a stage in SSD terminates when a new connection is established. At any stage, certain pairs of occurrences are available for consideration; these are the pairs for which precedence holds. Among the precedence pairs, some (or none, in which case the programme is blocked) show agreement. If, at any stage, occurrence X precedes occurrence Y, and occurrences X and Y agree, a dependency connection can be established between them. At most stages, these two conditions are satisfied by two or more pairs of occurrences; in general, it is impossible to establish all separately possible connections simultaneously, since connections can interfere with one another in three ways. (i) Two connections can involve the same dependent, but an occurrence can depend on at most one other occurrence. (ii) One connection can cut the precedence relation in the other pair. For example, if 2d1 and 3d4 in a sentence (see Fig. 1), then 1p3, 1p4, and 2p4 (but not 2p3, since XpY only if X or Y or both are independent). If 4d2 is established, then 1p3 and 1p4 become false. (iii) One connection can modify the grammatic type of an occurrence so that it no longer agrees with another. For example, in the sequence N_nom N_nom/gen A_nom, the central noun can depend on the preceding noun or govern the following adjective, but it cannot enter both combinations; in one it is genitive, in the other nominative. Hence, in general, it is necessary to choose one connection at a time, make it, and recompute precedences and agreements.

Among the most difficult decisions to be made in many languages are those concerning prepositions. Prepositions occur with high frequency, they show no morphological agreement with their governors, and they can be separated from their governors by long strings. Sentences are printed in Russian scientific text containing prepositional phrases preceded by sequences of possible governors; if the determination of the preposition's governor were postponed as long as possible, as many as half a dozen precedence pairs could be established in some sentences, all showing agreement.

The following remarks are casual; they serve to motivate, but have no part in, the formal development below. The suitability of the formalism for empirical linguistics is not to be determined by such casual remarks. In general, it is probably easier to locate a following governor; the introductory prepositional phrase, for example, may be a clausal modifier in every instance. In what follows, it is assumed for simplicity that the governor is to be found ahead of the preposition, i.e., that all prepositions with following governors can be handled without recourse to the present procedure. (A major conceptual difficulty is thus avoided; see Sec. 3.) Within a sequence preceding the given phrase, absolute position may be significant; for example, in the occurrence sequence P_x N_x N_gen . . .
N_gen P_y N_y, where any number of genitive nouns can be inserted, N_x may be the only possible governor of P_y N_y, or N_x and the last N_gen may be the only two, unless the last is of a special type and its right to govern a prepositional phrase is transferred to the penultimate genitive, etc. Rules of this order do turn up in natural languages, but they are disregarded here.

Any occurrence X will be called accessible to another occurrence Y if and only if XpY (X precedes Y) at some stage of a feasible SSD programme. For example, let P be an occurrence of a preposition; establish all possible dependency connections in the sentence without attaching P to a governor. When no further connections can be made, all and only those occurrences X such that XpP at that stage are accessible to P. The plan to be set forth below is intended to choose a governor for each P from among the whole set of accessible occurrences, using values and relative distance as criteria. The accessible occurrences can be ordered as closest to P, next closest, etc., according to their positions in the text sequence (see Fig. 2).

(ii) Type of preposition. It may be, in some languages, that there exist pairs or larger sets of equivalent prepositions. As a first approximation, it seems best to treat each separately. The method of Sec. 2 below permits grouping prepositions if the data make them equivalent.

(iii) Type of object. In Russian, some prepositions govern objects of unique cases; others, such as B, take objects of various cases. As a first approximation, the type of object associated with a given occurrence of a preposition can be characterized by grammatic case, but further characterization is probably needed (cf. Harper's study of prepositional equivalents [4]), and can result from the procedure to be outlined. Initially, let the type of a prepositional phrase be defined by the preposition that it contains and the grammatic case of the object. If prepositions can be grouped, if objects must be subclassified, or if dependents of the object influence the syntactic functions of the phrase, this definition must be revised.

(iv) Type of governor. It is difficult, and it may be impossible in terms of traditional parts of speech, to eliminate broad classes of words as not possible governors of a given type of prepositional phrase. However, in a fixed corpus, it is possible to list all the governors that actually occur. Further characterization of governors is the purpose of value assignment. A word will be called a potential governor of a given type of prepositional phrase in a fixed corpus if it occurs anywhere in the corpus as a governor of that type of prepositional phrase.

* Hereinafter, capital letters stand for words or word classes, unless it is noted that occurrences of given words or words of given classes are being mentioned. N = noun, P = preposition, X, Y, Z = any word. Subscripts are used for cross classification (as by case: gen = genitive; x, y = any case) or as dummy indices.

The following plan has not been programmed or verified; it is offered as a hypothesis, subject to empirical test. To locate the governor of a prepositional occurrence, P, during SSD:
(1) Connect P with its object. Mark P to indicate the type of phrase that it heads.
(2) Eliminate P if its governor follows it.
(3) Find all occurrences, X, accessible to P.
(4) Eliminate any X that is not an occurrence of a potential governor.
(5) Obtain v(X,P), that is, the value of X as governor of P, for each remaining X.
(During SSD, the values are obtained from a table; v(X,P) is a function of the types of X and P. Discovering the values to be stored in the table is the object of the procedure described below, Sec. 2.)
(6) Take the closest X such that v(X,P) is not greater for any more distant X.

It is not necessary to find all accessible occurrences before connecting P to a governor, provided that any occurrence that is later found to be accessible is tested by steps (4) through (6) of the plan.

This plan can be taken as one definition of the concept of value as applied to syntactic relations. Values are numbers assigned in any fashion such that this plan yields correct results. Two questions remain: can such numbers be assigned to yield correct results throughout a fixed corpus of substantial size? If so, do the assignments tend to stability as the size of the corpus increases indefinitely? In the following section, a method for obtaining answers to these two questions is described.

2. An Empirical Procedure for the Assignment of Values

Given a corpus in which the structure of every sentence has been determined, the procedure outlined here assigns a set of values to the potential governors of any given type of prepositional phrase, such that the plan set forth in Sec. 1 will yield correct results throughout the corpus, or else it reveals that no consistent set of assignments is possible.

The values of two words, X and Y, with respect to a preposition heading a given type of phrase, say P, only influence the structure of sentences in which both occur accessible to an occurrence of P. Suppose that X occurs to the left of Y; then if X governs P, v(X,P) > v(Y,P), but if Y governs P, v(X,P) ≤ v(Y,P). An inference of this type can be made from each sentence in which two potential governors occur; if more occur in a sentence, all accessible, inferences can be made for each pair consisting of the correct governor and one other potential governor.

Comparing inferences made from two sentences can reveal inconsistencies. Suppose that v(X,P) > v(Y,P) is inferred from one sentence, but v(Y,P) ≥ v(X,P) from another; no assignment of values can satisfy these two conditions. Again, suppose that three sentences separately lead to the inferences that v(X,P) > v(Y,P), v(Y,P) > v(Z,P), and v(Z,P) > v(X,P); the last inference is inconsistent with the implication of the first two, namely that v(X,P) > v(Z,P). The object of an empirical assignment procedure is not to gloss over such inconsistencies, but to reveal them; they invalidate the hypothesis of simply ordered values, not the research procedure.

In an infinite corpus, every potential governor of P could have a unique value, and the values could be simply ordered. The rule of Sec. 1 will, however, yield correct results in a finite corpus if the same value is assigned to two or more words, say X and Y, provided that the values of X and Y are not directly comparable in any sentence, and that if v(W,P) > v(X,P) > v(Z,P), then v(W,P) > v(Y,P) > v(Z,P) for all W, Z. If the unique values attainable in an infinite corpus are considered the true set, the procedure to be outlined only guarantees assignment of estimates less than or equal to the true values. Any word that is not a potential governor of P has, effectively, the value zero; if such a word is found, in a later corpus, to govern P, its value must be raised.
This process can continue indefinitely, but it would tend in the limit to assign true values.

When the structure of a sentence is known, the set of occurrences accessible to an occurrence of P can be located without recomputation of precedence pairs. Assume that the governor of an occurrence of P is to its left, since sentences in which P's governor follows it are irrelevant in this treatment. Then:
(i) The governor of P is accessible to P.
(ii) The occurrences from which P's governor derives are accessible to P, up to and including the last that lies to the left of P. These occurrences are beyond the governor of P, i.e., they lie to its left.
(iii) Among the occurrences that depend on the governor of P, only one can be accessible to P; it must lie between P and its governor, and if two or more satisfy this criterion, the rightmost is the only one that is accessible. Applying this rule to the dependents of the accessible dependent of P's governor, etc., we can develop a rightmost derivation chain headed by P's governor and ending with the closest occurrence to P that does not derive from P. These occurrences are between P and its governor (see Fig. 3).

Every occurrence accessible to P belongs to one of the three categories, (i), (ii), (iii); every value comparison involves the unique member of (i) and a member of (ii) - accessible/beyond - or (iii) - accessible/between.

The assignment procedure consists of the following steps, carried out separately for each type of prepositional phrase, P:
(1) Define the set A = {X_i}, where X_i is a potential governor of P. List the members of A. Set A is the set* of words to be evaluated.
(2) Define the sets B_i = {X_j}, where** X_j ∈ A and X_j occurs accessible/between X_i and P. List the members of B_i for all i. If X_j ∈ B_i, then it must be inferred that v(X_i,P) > v(X_j,P).
(3) Define the sets C_i = {X_k}, where X_k ∈ A and X_k occurs accessible/beyond X_i and P. List the members of C_i for all i. If X_k ∈ C_i, then it must be inferred that v(X_k,P) ≤ v(X_i,P).
Steps (1) through (3) tabulate the data to be analyzed.

* Curly brackets enclose the members of a set; A = {X_i} is read "A is the set whose members are X_i".
** Here ∈ means "is a member of".

This step stops the procedure if values less than or equal to n have been assigned to all potential governors of P. Otherwise, the iteration continues with a zeroth approximation of the set of words with value n + 1.

If the procedure is stopped because values cannot consistently be assigned to all potential governors of P, it can be converted into an approximate method, but the plan of Sec. 1 will yield some errors if the approximate method must be used. In steps (2) and (3) of the assignment procedure, the frequency of occurrence must be shown for each member of each B_i and C_i. That is to say, the number of times that X_j occurs accessible/between or accessible/beyond X_i must be noted. In step (4), if there is no i such that B_i = ∅, the approximate method finds those X_i such that the sum of occurrence frequencies over B_i is minimal. In step (8), the same must be done. Approximations can also be used in steps (5) and (9).

An alternative procedure, which complicates the results but avoids introducing error if it is successful, is to subclassify prepositional phrases. Suppose that X_j ∈ B_i (word X_j occurs accessible/between X_i and P) and X_i ∈ B_j; then it must be inferred that v(X_i,P) > v(X_j,P) and also that v(X_j,P) > v(X_i,P).
These two inferences are inconsistent; but they must be made from different sentences, and if the preposition has different objects in those two sentences, P can be resolved into two different phrase types, P' and P". The procedure is then carried out separately for P' and P", but the same inconsistencies can arise again. Indeed, if X_j ∈ B_i and X_i ∈ B_j on the basis of two sentences in which the same preposition-object pairs occur (and if dependents of the object do not differ, etc.), then subclassification of P is useless.

When observations on a new corpus are to be collated with the analysis of an old, it is necessary to merge the two sets of data and repeat the entire procedure - realizing that the number of inconsistencies can be increased, but not decreased, in the combined data. In principle, the number of distinct values assigned can increase without limit as the size of the corpus is increased; substantively, however, the total number of distinct values should remain small, since speakers of the language are presumably unable to handle many nuances. For the same reason, even if it is necessary to subdivide prepositional phrases according to object type, the number of subclasses should be small. If the number of subclasses or the number of distinct values assigned increases rapidly, the linguist would do well to look for another theory.

The whole assignment procedure described in this section can be programmed for automatic operation on a computer, but most linguists would be unsatisfied with a list of value assignments as the sole output, and with good reason. It would be naive to expect as simple a plan as this to capture the whole of prepositional usage. Syntactic rules of quite different types are probably obeyed by the speakers of every language; only empirical test will show whether rules of the type assumed are obeyed in any language. At least in early applications, therefore, lists of exceptional occurrences will be wanted as part of the output. The procedures described in this section can be programmed easily and run at little expense on relatively large corpora. If only by winnowing exceptional occurrences out of masses of ordinary ones, the procedure should be useful to the linguist.
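To fix ideas, here is a minimal Python sketch of the step (6) selection rule of Sec. 1 and of the tabulation in steps (1) through (3) above, together with the simplest inconsistency test. The observation format, the function names, and the restriction to direct (two-cycle) inconsistencies are assumptions made for illustration, not the paper's specification.

from collections import defaultdict

def choose_governor(candidates, v):
    """Step (6): `candidates` lists the accessible potential governors of P,
    closest to P first; take the closest X such that v(X,P) is not greater
    for any more distant X, i.e. the closest candidate of maximal value."""
    best = max(v[x] for x in candidates)
    return next(x for x in candidates if v[x] == best)

def tabulate(observations):
    """Steps (1)-(3) for one phrase type P.  Each observation is a triple
    (governor, between, beyond): `between` holds potential governors seen
    accessible/between the governor and P, so v(governor,P) > v(x,P);
    `beyond` holds those seen accessible/beyond, so v(x,P) <= v(governor,P)."""
    A, B, C = set(), defaultdict(set), defaultdict(set)
    for gov, between, beyond in observations:
        A.add(gov)
        A.update(between)
        A.update(beyond)
        B[gov].update(between)
        C[gov].update(beyond)
    return A, B, C

def direct_inconsistencies(A, B):
    """The simplest failure the text discusses: X in B_Y and Y in B_X demand
    both v(X,P) > v(Y,P) and v(Y,P) > v(X,P).  Longer cycles of strict
    inequalities are inconsistent too; a full check would search for them."""
    return [(x, y) for x in A for y in B[x] if x in B[y] and str(x) < str(y)]

For example, with values {"man": 1, "saw": 2} and candidates ["man", "saw"] ordered closest first, choose_governor returns "saw": the more distant but higher-valued word wins, exactly as the rule prescribes.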
3. Discussion

Automatic aids to linguistic analysis and lexicographic research are essential because the volumes of data that must be processed are too large for systematic, thorough study by manual techniques. Even relatively unsophisticated lexicography has consumed whole lifetimes of talented effort. In this paper, one computational aid has been presented. Beginning with a definition of value for certain classes of dependency types, a procedure for assigning estimated values to words in a text has been developed. The procedure requires postedited text, in which the structure of every sentence is known, as input; other procedures will eventually be developed that operate on unedited text [5], but editing is only a small part of analysis, and the analyst benefits if other parts of the task can be made automatic in the meantime.

One conceptual difficulty that remains to be investigated is that of the interaction between direction and distance. If the governor of every prepositional occurrence lies ahead of it in the sentence, accessible/between and accessible/beyond can be distinguished by a simple criterion (as in the present development). If the governor can lie in either direction, a more complicated criterion is required, and what that criterion should be is not obvious.

Essentially the same procedure can be applied in the establishment of order classes of suffixes, adjectives, etc.* Hill, for example, asserts the existence of six adjective order classes in English [6]; the six adjectives in "All the ten fine old stone houses" belong respectively to classes VI-I. When adjectives of different classes are used to modify a single noun, the adjective belonging to the lower numbered class must stand nearest to the noun. Hence an occurrence of A_i A_j N implies that c(A_j) < c(A_i). Data of this type are simpler than those analyzed in Sec. 2, since all of the inequalities are strict. The same ordering problem arises with suffixes that must be added to roots in a particular order. The procedure in Sec. 2 establishes as many suffix "positions" or adjective "order classes" as the data require, and assigns suffixes to positions or adjectives to classes, provided that the ordering is transitive, invariant over noun categories or root types, and unique in the sense that no suffix or adjective belongs to more than one class. Perhaps other applications will occur to other students of language.
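Since the adjective data consist entirely of strict inequalities c(A_j) < c(A_i), the layering sketched after the introduction applies directly. A minimal illustration follows, with invented observations; the function name and input format are assumptions, not part of the paper.

from collections import defaultdict

def adjective_classes(sequences):
    """Each sequence is a tuple of adjectives as observed before a noun;
    every pair (outer, inner) with outer preceding inner implies
    c(inner) < c(outer).  Classes are numbered 1 nearest the noun."""
    lower = defaultdict(set)
    words = set()
    for seq in sequences:
        words.update(seq)
        for i, outer in enumerate(seq):
            for inner in seq[i + 1:]:
                lower[outer].add(inner)    # inner must rank below outer
    classes, remaining, n = {}, set(words), 0
    while remaining:
        n += 1
        level = {w for w in remaining if lower[w] <= set(classes)}
        if not level:
            raise ValueError("cyclic ordering evidence")
        for w in level:
            classes[w] = n
        remaining -= level
    return classes

print(adjective_classes([("fine", "old", "stone"), ("ten", "fine", "old")]))
# stone -> 1, old -> 2, fine -> 3, ten -> 4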
null
null
null
null
{ "paperhash": [ "stoakes|introduction_to_linguistic_structures" ], "title": [ "Introduction to Linguistic Structures" ], "abstract": [ "Chapter headings include : What is Language t; Stress, Juncture, Pitch; Consonants; Phoneme and Allophone; Vowels and Vowel Nuclei; Phonotactios; Morphemics; Morphotactics; Inflection; Form Classes Marked by Derivational Morphemes; The Structure of Free Phrases; Verb Phrases (2 chapters) ; Modifying Phrases; Main Sentence Elements (S chapters); Simple Sentences; Complex Sentences (2 chapters); and Beyond The Sentence. Appendix A, ESKIMO — A Grammatical Sketch. Appendix B, LATIN." ], "authors": [ { "name": [ "Paul Stoakes", "A. A. Hill" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "149601593" ], "intents": [ [ "background" ] ], "isInfluential": [ false ] }
null
761
0
null
null
null
null
null
null
null
null
ae2a2e33d367d9344e75715d3e66d07ee32ba689
244077639
null
Human translation and translation by machine
THE COMPREHENSION OF A TEXT, BY THE HUMAN TRANSLATOR AND BY THE MACHINE

1. THE SUBJECT OF THE CONTRIBUTION

TIME, at congresses, is always too short, and the contribution of a team, even if split up into several papers, cannot possibly cover the entire ground of a subject as large and as complex as Mechanical Translation.
{ "name": [ "Ceccato, Silvio and", "Zonta, Bruna" ], "affiliation": [ null, null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
0
5
null
rather more towards thought than towards language (at least with regard to those aspects of language that usually are considered formal). The results of this research, therefore, also belong to a general linguistics of which MT is an application and a test. As you will soon notice, ours is a novel kind of linguistics compared to traditional linguistics. For while the traditional studies start more or less directly from language and its formal aspects, ours start from an analysis of thought. In fact, we ask what language is, how it works, and how it matches thought, after an analysis of thought and its contents. The analysis is also of a novel kind, because the contents of thought have hitherto been conceived as static units, while we conceive them in terms of operations.

(b) The individuation and description of the operations by means of which man translates also serve as a warning against certain dangers for those working in this field. The recent history of the first attempts to mechanise man's mental activities, such as perception, thought, translation, summarising, etc., clearly shows where these dangers lie.

Since the philosophers and psychologists have refrained from supplying an analysis of these superior activities in terms that might be of use for mechanical construction, the engineers set themselves up as philosophers and began to improvise. They did not hesitate to treat as identical procedures that have little or nothing in common. A fictitious example may serve as an illustration of what has happened. Within certain limits it is possible to produce the results of arithmetical operations either by actually carrying out these operations or by using results that have previously been obtained and memorised, that is, by simple substitutions; but would it be justified, in the second case, to say that the man or the machine "calculates"? Similarly, if machines or men do no more than substitute the words or groups of words of a prefabricated translation for the words or groups of words of a text, would it be justified to say that they "translate"?

Confusions of this kind are detrimental on various levels. On the theoretical level, for instance, they lead to the neglect of essential studies (in our case the most important branches of research concerning language and thought would no longer be furthered). On the psychological level they create an excessive initial optimism, because the problem would not only seem easier than in fact it is, but it would appear already solved; this optimism will then give way to a no less excessive pessimism, when it proves impossible to get beyond the limitations of the adapted solution, and one will then conclude that the mechanisation of this particular activity is impossible (in our case the conclusion would be that MT is impossible).
On the practical level, finally, it might be that what seemed the quickest and most economical method turns out to be the slowest and most costly; because one might be led to constructing machines which yield only the one kind of result, instead of constructing machines which, since they reproduce the much richer human activity that produces these results among others, can be used - with small modifications - to obtain many other results as well (in our case, as we shall try to explain, the same analysis of an input text can be used for output in diverse languages, for summarising, for documentation, etc.).

Our approach to MT, as we have said, is but one of the possible applications of a series of studies aimed at an analysis of thought and its contents on the one hand and of language on the other. The purpose of these studies is primarily that of constituting a psychology and a linguistics. Linguistics is particularly important to us, because, for the time being, thought has been approached above all through language. But our analyses have been carried out also with a view to the immediate application of the findings. These analyses are intended, as we have mentioned, to present results in terms of operations, and this makes it possible for them to be used both as working hypotheses by physiologists and anatomists who are trying to individuate the organs which carry out these operations and as indications by the engineers who intend to construct artificial organs which carry out these operations as their function. We have tried above all to assure that the analyses break up the studied activities into operations that are constituted by changes of state or changes of place which, on the one hand, can be supposed to be observable in the nervous system once adequate techniques of observation have been evolved, and, on the other hand, can be reproduced by an engineer given the actual possibilities of construction.

Different languages, in their presentation of single designated units and their temporal order in thought, display considerable similarities owing to similar exigencies; but they also display certain differences, especially when they are examined in their written rather than in their spoken forms. In some languages, for instance, there is a tendency to maintain in isolated words the single units of designation, and for this reason we call them isolating languages. In others, the inflecting or agglutinative ones, there is a tendency to group several units of designation together in one word; hence we speak of a root with suffixes, prefixes, etc., of compound words, and so on. In spoken language, of course, such differences more or less disappear.

Usually, in any language which is to be spoken, the various units of designation have to be presented in linear succession; and all languages use this sequence to impose an order on the flow of operating. But in this connection a complication arises: while the single units of designation can only succeed one another, each giving place to the next, we have seen that this is the only rhythmical figuration which cannot occur in thought. The figurations which, in thought, determine the units are always such that their elements present themselves simultaneously in one way or another; in other words, the units of designation appear in one single line and, thus, have monodic character; the designated units, on the other hand, appear on several lines and have polyphonic character.
This means, in fact, that we carry out several operations concurrently, but designate them successively, and that we receive the indications one after the other, but carry out the corresponding things simultaneously. It may be of interest to remember that polyphonic music began when a system of written notation had been devised which indicates temporal superposition by vertical alignment. But since in spoken language this is not possible, it is necessary to have conventions which will permit us to make a connection between the two sequences. And thus the listener or the reader always has to wait until a certain number of units of designation has been presented before he can effect the temporal arrangement of these units.

So far we have presented the operational flow, its fragmentation or articulation into units of designation, and the subsequent semantic conventions which give the units of designation their place in the flow, as something which could and must take place if an actual operational flow is to be accompanied exhaustively and univocally by its verbalisation. In fact, however, the historical development has not been like that. On the one hand there was certainly a lack of operational awareness and also of a general plan of action; on the other hand presumably the same criterion of economy which, as we have seen, was applied in the choice of those operational elements that are to be taken isolatedly, has also exercised a certain influence. In our discourse we necessarily have to designate all those parts of a given operational stream which cannot be known in another way, or from another source; but why - if the economy of designation is to be considered - should we designate things that everyone knows already?

Thus it has come about that texts sometimes lack those designations which would assure a univocal transition to the designated operations. Hence there are words that are linked to two or even more different operational elements; and, above all, it happens that the designations supplied by the words' places in their succession are equivocal. If the text is being understood by a man, he eliminates these ambiguities, because in the light of his knowledge only one alternative makes sense (see the examples in PART II). Now, if, instead of a man, a machine is to understand the text - and in order to translate, to summarise, etc., it has to understand it univocally - it will be necessary to supply this machine not only with all the semantic relations but also with a fund of integrating knowledge equivalent to that of the man. There is no reason why this should not be feasible, but, at least for the time being, there are in practice two kinds of difficulties.

The first arises from the way in which men learn and remember things, which, at our present stage of technological development, is not reproducible. For man, to learn things is to carry out the operations that constitute these things; and to remember them is to set up an organic situation that will always function in the same way.
The memory of machines, on the other hand, has so far always been devised by means of registration; that is, it works on rigid non-intersecting lines, and this makes it anything but suitable to function as a fund of encyclopedic knowledge. But the second, greater difficulty arises from the fact that man, when he learns to think and to speak, actually carries out all the operations that he verbalises, whereas no machine could at present carry out all these operations, because neither engineering technology nor psychological analysis has reached the required stage of development. (Our research group is working on a project for a model capable of carrying out some of the human operations of observation, mental categorisation, thought and language, and will begin its construction before the end of this year on behalf of Euratom; but it is a very limited model and, even so, it has required extremely long preliminary analyses.) In any case, the machines employed at present for the immediate practical purposes of MT etc. are of the computer type which replaces the actual carrying out of operations by code numbers; and these code numbers in the machine represent only the results of our analyses of the operations designated by a discourse.

This imposes certain limitations. What unit of discourse is to be taken for analysis and codification before input into the machine? Obviously not entire texts, for this could be done only with texts which have already been composed at the time of the machine's construction, and in this case the machine would serve to do no other work except the work already done by us. Moreover no registration memory at present in existence could possibly contain as input units the variety of existing texts and the results of their analysis. Hence there is a general tendency to restrict analyses to the manageable number of units represented by single words. Given this restriction we can ask ourselves what operations are designated by each single word; and this analysis must show both what the word designates as an operational fragment apart from the operational flow, that is, apart from its place in the rhythmical figuration in which it occurs, and its function in the building up of the rhythmical figurations characteristic of the flow of operations.

In our work aimed at mechanical translation of Russian, English, Italian, and German we have chosen the word as the unit of discourse to be analysed, and we have left to the machine the task of reconstituting the operational flow from the data concerning the operational elements which are designated by the words of the four languages and which we supply to the machine. It was, however, also necessary to decide the limits of this analysis and thus also the limits up to which the flow of operations is to be reconstituted. By way of an experimental solution we have come to the following decisions:

a) never to give a rhythmical figuration to something which in one language appears as designatum of one unit of designation; that is to say, this designatum will always be taken as a component of a rhythmical figure;

b) to arrange the designated things in two kinds of structure:

1a) the explicit correlational structure in which the three correlanda (i.e.
the two correlata and the modality of transition, or correlator) occur with the respective explicit designation.

1b) the implicit correlational structure in which one of the correlata is not designated, either because it has been or will be designated in a context outside this correlation, or because it concerns the speaker and not what he says, and so forth. In these structures the particular modality of transition is the determining factor.

2) the binary summative structure in which the resulting unit depends on the characteristics and on the order of the addenda. With regard to these summative structures, however, it has to be stated clearly to what extent one intends to keep apart, as separate addenda, the things designated by words. Here too, we have decided never to go beyond the single designated units and, further, to consider as units all the things that are given on a special list.

c) to consider the operational flow broken off whenever no modality of transition, or correlator, has been designated - even if the things before and after the break suggest a specific rhythmical figure that might link them in our thought, but which, in order to understand the text, can be gathered only from the two correlata (as happens, for instance, in the case of two sentences separated by a full stop).

AFTER the summary remarks on the verbalisation of thought (see PART I, section 6), we now have to consider what kind of and how many semantic connections are necessary in order to establish a univocal relation between thought and language. We will only examine here those structures which are of correlational type. As we have seen, every correlation is constituted by three elements which are characterised in two ways: a) as what they are isolatedly, b) according to the function they have in the formation of the correlation, that is to say, whether they are being used as first or second correlatum, or as correlator. From this it follows that a correlation requires at least five distinct indications: three concerning the particular things that are being correlated, and two to indicate the function of at least two of the three things (since the function of the third can be inferred from the other two). The indications must be at least five, since not even the one concerning the modality of combination or construction can be left out, for the kind of thing that usually functions as correlator can sometimes occur also without that particular function, although this is comparatively rare. It does happen, for instance, in the thought structure one refers to when one says: "And and or are modalities of construction".

The various languages have chosen different ways for supplying these necessary indications. For obvious reasons of economy the preference has been given to two basic methods of indication: a) the particular form given to the word; that is to say, part of the word is used to indicate the particular correlational function it is to have; b) the place given to a word relative to the others, i.e. its position in the propositional sequence of words. The situation becomes more complicated whenever a designation points to more than one correlation.
In such a case the alternatives are no longer those admitted by two or three words that designate the three things with which the three places of a single correlation have to be filled; instead the alternatives are those admitted by many words placed in a linear sequence, for they must designate the many things which are to fill the many places of the correlational net. With regard to this it is important to realise that the formal characteristics of words, if used to designate the particular things put in correlation as well as their function in constituting the single correlation, can no longer be employed for the groups of words designating the whole correlation, which, nevertheless, must be an element of another one. Two solutions have been evolved for this problem: a) stress and timing; in other words, the play of accentuation and pauses in speech, and a system of punctuation marks in writing, and b) - at least in a great number of languages - formal grammatical agreement, that is to say, discrimination according to gender, number, person, and finally, according to case (wherever the case has not a direct designatory function). Thus, where the position in the sequence of words gives no indication, it is owing to these other kinds of indication that we can decide of which correlation the designated things are a part.

Although the system of classification is still under revision, we will give here an outline of how analysis is at present carried out. The input vocabulary for our current programme includes about 50,000 inflected Russian forms, corresponding to about 2,500 headwords and punctuation marks. At the first level these were divided according as they can or cannot designate contents of thought. Four classes have been distinguished in this respect:

a) constructive words, i.e. words corresponding to correlanda. These in turn are divided, according to the number of correlanda which they represent, into: monoconstructive, if only one correlandum corresponds to them; polyconstructive, if more than one correlandum corresponds to them, in the same or in different correlations (see figs. 3 and 4).

b) directive words, i.e. words which only indicate operations to be performed on correlanda. Purely directive words include quotation marks, whose function is to indicate that what is between them is to be treated as a correlandum: e.g. "A" is the first letter of the alphabet. Another function of punctuation marks is to indicate that a correlandum is exercising a correlational function which is not its characteristic one: e.g. "a" and "the" are articles. An example of a purely directive word is, in Italian, the accent introduced to eliminate a dictionary polysemanticity. There are in this language cases in which an identical spelling has more than one meaning: e.g. àncora (anchor), ancòra (still, yet). In the same way a change of typeface or letter size can be considered as directive. In our figurative representation purely directive words do not occupy any place in the rectangles.

c) there are other words which, according to the particular context, may have constructive or directive functions. These include the comma, constructive when it acts as a correlator, directive when it only marks a pause of grouping: e.g. "Hadrian, (constructive) Emperor of the Romans, (directive)", where the first is the correlator of the apposition correlation, while the second simply indicates that the appositive group is finished.

d) There are also words which always exercise both functions.
The types of study

Our approach to MT, as we have said, is but one of the possible applications of a series of studies aimed at an analysis of thought and its contents on the one hand and of language on the other. The purpose of these studies is primarily that of constituting a psychology and a linguistics. Linguistics is particularly important to us because, for the time being, thought has been approached above all through language. But our analyses have been carried out also with a view to the immediate application of the findings. These analyses are intended, as we have mentioned, to present results in terms of operations, and this makes it possible for them to be used both as working hypotheses by physiologists and anatomists who are trying to individuate the organs which carry out these operations, and as indications by the engineers who intend to construct artificial organs which carry out these operations as their function. We have tried above all to ensure that the analyses break up the activities studied into operations constituted by changes of state or changes of place which, on the one hand, can be supposed to be observable in the nervous system once adequate techniques of observation have been evolved, and, on the other hand, can be reproduced by an engineer given the actual possibilities of construction.

The obstacle overcome

A dynamic conception of this kind made it necessary to overcome some difficulties inherent in the way in which thought and language have been considered in traditional philosophy and in the psychology deriving from it. According to this tradition we see in the brain, not operations, but a passive mirror which reflects all that surrounds us. The brain, that is to say, is supposed to double the physical objects of our environment by means of as many entities equal to the physical objects, yet lacking their physicality. (If for no other reason, because the brain is already a physical object and would thus have to give up its place and its matter to the other object.) Given this conception, however, the physiologist and anatomist as well as the engineer are put out of action, because these entities, which are necessarily present in a negative form, are neither observable nor reproducible; and thought, whose contents they are supposed to be, thus becomes equally unobservable and irreproducible.

It may seem strange, indeed, that such a conception should have become traditional; but there is one explanation that makes it plausible. For the normal requirements of living it is important to know above all in what relation the observational objects are to one another; for instance, that fire heats water and that water quenches fire, that salt can be found in the sea, and that certain mushrooms nourish and others poison our body; and so on. Man has undoubtedly worked in this way for thousands of years, acquiring a particular ability and making a habit of it. In this research, however, he has proceeded by searching for relations between objects that are always already present, and no attention is given to the activity of observation from which the objects result.
Thus, when curiosity or some practical interest led man to investigate the very activity of observation, he did not, as would have been necessary, leave aside the already present objects in order to study the activity by means of which they are constituted, but tried, instead, to keep them present by devising an observational activity that might provide a double of them inside the head.

The split between outside and inside that became applied to all contents of thought, although not directly interfering with the studies concerning physical objects, created difficulties of every kind for research on non-physical things, such as figures and mental categories, as well as for research on any mental activity. In language it had its repercussion inasmuch as it led to the belief that only those words which indicate physical objects had a corresponding nominatum. The remaining words were considered to be either flatus vocis, empty words, or elements of connection, not between nominata, but between the words themselves, etc. In this way language, whose constitutive function is designation, not only was understood contradictorily, but it was also lost as a way towards thought, to which, in fact, it still is the most fertile and controllable way of access. For our research on thought and language it was, therefore, necessary as a first move to get rid of that tradition which is linked to the doubling of observational objects.

The operations up to those of thought

An analysis of thought and its contents that accounts for every different word and every different expression, by isolating as corresponding to each a different operation or combination of operations, shows that four kinds of operation are required: Differentiation, Figuration, Categorisation, and Correlation.

Differentiation consists in changes of state. It gives rise to Differentiata, each of which results, of course, from a change of state and not from a single state; it is the function of two states and the direction of the shift from one to the other. By differentiation we obtain the nominata of words like "dark", "light", "hot", "cold", "resistant", "yielding", "green", "red", "yellow", "silence", "noise", etc. The nominata of these words, however, often contain already more than the result of a single differentiation, because as a rule we use them to designate also the results of other operations which determine their place as content of a thought (cf. below).

Since differentiation is here taken as an elementary component operation, an analysis of the differentiation itself is excluded for this very reason. When we speak of states and of processes we refer, in fact, to a possible investigation to be carried out with regard to the functioning organ. This does not, however, exclude that we delimit a differentiatum by naming, for instance, its opposite, that is, the differentiatum one obtains from the same states but by a shift in the opposite direction; or by naming the conditions under which we are inclined to effect the shift, that is, by indicating the dependences of the functioning. In this way we can say that we have noise when we make bodies vibrate, and silence when we make their vibration cease; that we have a certain colour when we put a certain salt into the flame of a Bunsen burner, and so on.
But it would be a mistake to identify the two things with one another and to say that the differentiatum, as activity, is these other things, which, in fact, are observata having their own figure, their own place, etc. Differentiation, by itself, does not produce anything figurated or localised (i.e. having a place in space or time). Nor, I should like to stress, does it correspond to sensation, which is, in fact, obtained by adding to differentiation the mental category of subject. Without this distinction we could no longer discriminate "green" from "green spot" nor "hot" from "sensation of heat".

Figuration consists in changes of place. It gives rise to Figures or shapes, each of which results, as in the case of the differentiata, from a change of place and not from a single place; it is, therefore, the function of two places and the direction of the shift from one to the other. By figurating we obtain things which, as a rule, are not designated isolatedly but together with differentiata (above all in the activities of perception and representation, as we shall see shortly). Although in some languages we find examples such as "lance" and "lanceolate" or (Italian) "uovo" (egg) and "ovale", there are usually no words to designate only the shape of the common objects of observation such as, for instance, apple, pear, tree, dog, horse, house, etc. Most of the shapes that are recognised and designated isolatedly belong to the technical realm of geometry, as, for instance, the circle, the ellipse, etc.

Most of the shapes which we name, either isolatedly or together with differentiata, are not constituted by a single change of place. Mostly they result from several of these changes, which thus constitute the elements of the shape. Among these elements there are the simple traces (lines of the shifts), constituted by two places, that is, by shifting from the one to the other; there are the composite traces, which have one place in common; there are the regions, constituted by a trace and a place outside the trace, that is, by shifting from a trace to a place or vice versa; they, too, can be composed if they have a place or a trace in common; and, finally, there are the volumes, constituted by a region and a place. The configuration of volume cannot be overstepped, because a shift from it to another place must necessarily lead through a region which thus becomes a region common to both volumes and as such gives rise simply to a composite volume.
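This hierarchy of shape elements - each kind obtained from the previous one plus one further place - lends itself to a recursive representation. In modern notation it might be sketched as follows; the class names and the use of coordinates for places are illustrative assumptions, not anything in the original programme.

```python
# Illustrative sketch (not from the original programme): the figuration
# hierarchy, where each element is obtained from the previous kind of
# element plus one further place.

from dataclasses import dataclass
from typing import Tuple

Place = Tuple[float, float, float]  # a single place; coordinates are an assumption

@dataclass
class Trace:
    """A simple trace: the shift between two places."""
    start: Place
    end: Place

@dataclass
class Region:
    """A region: a trace plus a place outside the trace."""
    trace: Trace
    place: Place

@dataclass
class Volume:
    """A volume: a region plus a place. Per the text, this level cannot be
    overstepped: shifting from a volume to a further place only yields a
    composite volume, not a new kind of element."""
    region: Region
    place: Place

# Example: building a volume step by step.
p1, p2, p3, p4 = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
volume = Volume(Region(Trace(p1, p2), p3), p4)
```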
Categorisation consists in combinations of a particular differentiatum, namely the differentiatum of attention, of consciousness, of presence; it is the differentiatum that corresponds to words such as "watch!", "look!", "listen!", and the like. This differentiatum can be combined with others of its type because, having effected one of them, this one can either be maintained or let go while a second one is effected. If the first one has been maintained, the second one will be temporally superimposed on it, and that gives rise to the simplest categorial combination. This corresponds to an attention that becomes focussed, as for instance when the word "watch!" is followed up with the word "there!". If this first, simplest combination is taken isolatedly it is designated by the word "something" or "thing" (corresponding to the Italian "cosa" in the question "che cosa?", or to the German "etwas", etc.). The further combinations are obtained by following up an isolated differentiatum of attention with a 'something', or a 'something' with an isolated differentiatum of attention, and so on.

By categorising we obtain things which are designated by words such as "or", "and", "not", "cause", "effect", "singular", "plural", "being", "can", "must", "want", "time", "space", "free", "necessary", "probable", "number", "point", "line", "surface", "substance", "accident", "subject", "object", "state", "process", etc. Mental categories, too, are very often designated together with results of other operations. As an example, it is sufficient to think of the singular and the plural, which occur in conjunction with the nouns of almost all languages.

Every differentiatum, figure, or category can be combined with other elements that may be of the same or of the other two kinds; and the resulting combination owes its individuality to the particular elements combined in it, to the particular order in which they have been combined, to the time taken to combine them, etc. Among the most usual combinations are perception and representation.

Perception consists in the following operations carried out in the following order: a) a succession of two differentiata, and b) categorisation of the second differentiatum as object; b') the object-differentiatum may be given a shape (by figuration which, in every case, is guided by the separation between the two differentiata). In representation we have: a) a categorisation of something as object, and a') possibly, a figuration of this (in this case the figuration is free), and b) differentiation of the object and the figure, that is, addition of a differentiatum to the object-figure.

This break-up into operations explains why perception is always felt to be constrained, or obligatory, in comparison with the sense of freedom that is characteristic of representation. In perception, in fact, the object, being the result of the succession of two differentiata, arises always coupled with something else, that is, together with its background, with another object, or with a determinate spatial or temporal relation; in representation, instead, the object arises without any link whatever. And this analysis into operations explains also why representation has always been felt to be poorer than perception, even if it is always possible to effect a comparison between a representational and a perceptional result.

Other very usual combinations are the physical things and the psychical things. The first are the result of a spatial categorisation, the second the result of a temporal categorisation, of the differentiata; hence a physical thing must always be in a certain place and distinct from at least one other thing in another place, whereas a psychical thing must always be at a certain moment and distinct from at least one other thing at another moment.
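Since perception and representation are defined here purely by the order of their component operations, they can be written down as two operation schedules. A minimal sketch follows; the operation labels and the sequence encoding are our own illustrative assumptions, not the authors' notation.

```python
# Illustrative sketch: perception and representation as ordered sequences
# of elementary operations, following the break-up given in the text.
# The operation labels are our own assumptions.

PERCEPTION = [
    "differentiatum_1",          # a) first of a succession of two differentiata
    "differentiatum_2",          # a) second differentiatum
    "categorise_as_object",      # b) the second differentiatum is made object
    "figuration (optional)",     # b') shape, guided by the separation of the two
]

REPRESENTATION = [
    "categorise_as_object",      # a) something is categorised as object
    "figuration (optional)",     # a') here the figuration is free
    "differentiation",           # b) a differentiatum is added to the object-figure
]

# The constrained character of perception shows up in the encoding: the
# object arises only after, and coupled with, a prior differentiatum,
# whereas in representation it is opened first, without any link.
for name, ops in (("perception", PERCEPTION), ("representation", REPRESENTATION)):
    print(name, "->", " ; ".join(ops))
```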
Thought and its verbalisation

If we ask ourselves which word designates the flow of the three kinds of operation we have so far discussed, there arises a terminological question: should the word "thought" be used to designate this flow already before any fragmentation into operational elements, or only after this fragmentation? And, if after fragmentation, already with a discourse matching it, or without any verbal accompaniment? We believe that our current dictionary reserves the word "thought" for a certain fragmentation, or even a certain verbalisation, of the operational flow (which, further on, we shall define with greater precision). Moreover, a presentation of the flow as single operations, sequences, and groupings already stems from a particular fragmentation effected with a view to articulation into linguistic units.

Some fragmentation, in any case, is necessary in order to achieve the conditions under which the flow can be accompanied by words. If one had to assign an individual word to each flow, or train of thought, comprised between two pauses or stops, one would have to fix an unlimited number of semantic relations and, consequently, an unlimited number of words. In fact, we should have to spend our lives preparing this linguistic material without ever getting round to using it. In order to serve their actual purpose the semantic conventions must be relatively small in number - small enough to be passed on and to be learnt during a short first period of our lives.

The criterion adopted for the articulation of the operational flow for the purpose of designation had to be one of economy: to isolate as units those single operations and combinations of operations that occur most frequently, and to leave the less frequent ones to be composed by combining the frequent ones. For instance, "violet" and "light" certainly recur much more frequently, in many combinations, than does the particular situation "light violet"; and thus they are designated individually, while the designation of the rarer situation is obtained by their combination. This is still more obvious, for instance, with the "singular" and the "plural"; sometimes they are found in conjunction with "horse", at other times with "tree", then again with "chair", etc. Thus they have been taken as units of designation, and "horse in the singular", or "horse-", and "horse in the plural", or "horse-s", are obtained by combining them.
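The economy criterion described here - name the frequent elements atomically and compose the rare situations from them - can be mimicked with a toy frequency threshold. The sketch below is purely illustrative; the corpus, the threshold, and the function names are our own assumptions.

```python
# Illustrative toy (our own construction): choose designation units by
# frequency, leaving rare situations to be expressed as combinations.

from collections import Counter

# Hypothetical observed situations, each a tuple of operational elements.
situations = [
    ("light",), ("light",), ("violet",), ("violet",), ("violet",),
    ("light", "violet"),            # the rare combination
    ("horse", "singular"), ("horse", "plural"), ("tree", "plural"),
]

element_freq = Counter(e for s in situations for e in s)
THRESHOLD = 2  # assumed cut-off for granting an element its own designation

def designate(situation):
    """Frequent elements keep their own unit of designation; the rarer
    whole situation is then expressed by combining those units."""
    units = [e for e in situation if element_freq[e] >= THRESHOLD]
    return "+".join(units) if units else "(needs a new designation)"

print(designate(("light", "violet")))   # -> light+violet
print(designate(("horse", "plural")))   # -> horse+plural
```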
Among the operational elements that are to be taken individually we have single as well as composite differentiata (composite above all if their localisations coincide, giving rise to the materials), observata and their changes, mental categories and their applications, etc. The most frequent of all are perhaps the operational elements that represent the modalities with which one passes from certain designated things on to others. If, for instance, one separates two things which initially occurred together, the designation is "with"; if one unites two things which initially occurred separately, the designation is "of"; if, in a succession of things, attention remains focussed on all of them, the designation is "and"; if the focus of attention is shifted during a succession of things, the designation is "or"; etc.

The criterion in the examination of how, in a particular language, the operations have been grouped into units of designation is the classical criterion of considering as a unit all that can be separated from one combination and used in another while preserving unchanged its phonic or graphic designatory material and its signification (or at least the latter).

At this point we have to consider the temporal relations subsisting between the operations that constitute an actual operational flow. Once one has applied the analytical criterion we indicated when we spoke of the three kinds of operation, we have, of course, the possibility of diverse rhythmical figures within the single designated units. We can at once individuate the four possible rhythmical figures: 1) the operations are carried out simultaneously, that is to say, they begin and end together; 2) the operations begin one after the other, but end at the same time; 3) the operations begin at the same time, but end one after the other; 4) one of the operations begins before and ends after the other. If two operations are carried out in such a way that the first ends before the second begins, this brings about a halt in the operational flow, an interval of non-operating such as occurs, for instance, when we "switch to another thought" or "stop one train of thought and immediately, or some time afterwards, embark upon another". The more complex rhythmical structures, however, result from combinations of the four possibilities given above.
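Treating each operation as a time interval, the four rhythmical figures reduce to simple comparisons of start and end points. The following classifier is a sketch under that reading; the interval encoding is our assumption.

```python
# Illustrative sketch: classifying the four rhythmical figures of the text,
# reading each operation as a (start, end) interval. Our own encoding.

def rhythmical_figure(a, b):
    """a, b are (start, end) pairs with start < end."""
    (s1, e1), (s2, e2) = a, b
    if s1 == s2 and e1 == e2:
        return "1) simultaneous: begin and end together"
    if s1 != s2 and e1 == e2:
        return "2) begin one after the other, end together"
    if s1 == s2 and e1 != e2:
        return "3) begin together, end one after the other"
    if (s1 < s2 and e2 < e1) or (s2 < s1 and e1 < e2):
        return "4) one begins before and ends after the other"
    if e1 < s2 or e2 < s1:
        return "halt: an interval of non-operating separates the two"
    return "complex: a combination of the four figures"

print(rhythmical_figure((0, 4), (0, 4)))  # figure 1
print(rhythmical_figure((0, 4), (2, 4)))  # figure 2
print(rhythmical_figure((0, 4), (1, 3)))  # figure 4
print(rhythmical_figure((0, 2), (3, 5)))  # a halt between the two
```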
Also the single designated units are found to be in these temporal relations. To examine them becomes necessary if we want to account for the way in which a language designates an actual flow of operations. In fact, it is not enough if, in a discourse, there figure only the particular units derived from the fragmentation of the flow. The discourse must also contain the designations of their temporal order, that is to say, designations serving to constitute rhythmical figures consisting of the four possible ones and their combinations.

Before examining the possible designatory solutions, I should like to return once more to the terminological question concerning thought. We believe that the word "thought" is applicable to the operational flow only where this flow is articulated into operational elements that are grouped in a unitary structure by means of diverse rhythmical figuration. In particular, the rhythmical composition corresponding to correlation is constitutive of thought; this rhythmical composition results from two operational elements succeeding one upon the other as correlata, or modalistata, while a third - the correlator, or modality of transition - persists; it is the structure characteristic of all relations. In fact, it is this structure which determines, more than any other, the unmistakable dynamism of thought, for every time we are presented with a correlatum and a modality of transition we necessarily have to wait for a second correlatum. Our thinking thus proceeds by a continual opening and closing of correlations. Expressions such as "fish and ...", "either red or ...", "a piece of ..." show the opening of a correlation and the consequent state of expectation and suspense that ceases as soon as the correlation can be closed ("fish and fowl", "either red or black", etc.).

However, there is some dynamism also in other types of modality, for instance in those of construction. "To want" and "to be able", for instance - the first indicating that two equal developments attributed to the same subject must temporally succeed one upon the other, the second indicating that two different developments attributed to the same subject must be temporally superimposed one upon the other - if applied to another development such as "to go", distribute the going and its subject in the particular temporal order indicated by them. But the modalities of construction and their modalistata are coincident, and they do not create a void that has to be filled, as do the modalities of transition. Stressing the parallel with music, one might say that the various other modalities correspond to the establishing of temporal relations between the single notes, whereas the modality of transition, the correlation, corresponds to the very bar itself.

A correlation can, of course, figure as a correlandum in a larger correlation; and, as a rule, our thoughts are constituted by a network of correlations, or correlational net. For instance, "the meat and the fish are in the refrigerator" already represents a correlational net.
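A correlation, then, is a triple of first correlatum, correlator, and second correlatum, and the correlata may themselves be correlations. A minimal sketch of such a net for the example sentence might look as follows; the encoding, and the rough segmentation of the sentence, are our illustrative assumptions rather than the notation of the word matrices.

```python
# Illustrative sketch (our own encoding): a correlational net as nested
# triples of first correlatum, correlator, and second correlatum.

from dataclasses import dataclass
from typing import Union

Correlandum = Union[str, "Correlation"]

@dataclass
class Correlation:
    first: Correlandum      # first correlatum
    correlator: str         # modality of transition
    second: Correlandum     # second correlatum

# "the meat and the fish are in the refrigerator":
# an "and" correlation figures as a correlandum in a larger correlation.
net = Correlation(
    first=Correlation("the meat", "and", "the fish"),
    correlator="are in",
    second="the refrigerator",
)

def show(c, depth=0):
    """Print the net with one level of indentation per correlation."""
    pad = "  " * depth
    for part in (c.first, c.correlator, c.second):
        if isinstance(part, Correlation):
            show(part, depth + 1)
        else:
            print(pad + str(part))

show(net)
```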
With regard to the discoursive accompaniment, or verbalisation, of the operational flow, we can say that it is not absolutely necessary in order to have "thought". Nevertheless discourse was presumably the prime motive for its articulation; by now, in any case, discourse, either precedent or subsequent, accompanies thought so universally that thought without discourse would be exceptional rather than the rule.

The designations

Different languages, in their presentation of the single designated units and of their temporal order in thought, display considerable similarities owing to similar exigencies; but they also display certain differences, especially when they are examined in their written rather than in their spoken forms. In some languages, for instance, there is a tendency to maintain in isolated words the single units of designation, and for this reason we call them isolating languages. In others, the inflecting or agglutinative ones, there is a tendency to group several units of designation together in one word; hence we speak of a root with suffixes, prefixes, etc., of compound words, and so on. In spoken language, of course, such differences more or less disappear.

Usually, in any language which is to be spoken, the various units of designation have to be presented in linear succession; and all languages use this sequence to impose an order on the flow of operating. But in this connection a complication arises: while the single units of designation can only succeed one another, each giving place to the next, we have seen that this is the only rhythmical figuration which cannot occur in thought. The figurations which, in thought, determine the units are always such that their elements present themselves simultaneously in one way or another; in other words, the units of designation appear in one single line and thus have monodic character; the designated units, on the other hand, appear on several lines and have polyphonic character. This means, in fact, that we carry out several operations concurrently, but designate them successively, and that we receive the indications one after the other, but carry out the corresponding things simultaneously. It may be of interest to remember that polyphonic music began when a system of written notation had been devised which indicates temporal superposition by vertical alignment. But since in spoken language this is not possible, it is necessary to have conventions which will permit us to make a connection between the two sequences. And thus the listener or the reader always has to wait until a certain number of units of designation has been presented before he can effect the temporal arrangement of these units.

Comprehension in man and comprehension in the machine

So far we have presented the operational flow, its fragmentation or articulation into units of designation, and the subsequent semantic conventions which give the units of designation their place in the flow, as something which could and must take place if an actual operational flow is to be accompanied exhaustively and univocally by its verbalisation. In fact, however, the historical development has not been like that. On the one hand there was certainly a lack of operational awareness and also of a general plan of action; on the other hand, presumably, the same criterion of economy which, as we have seen, was applied in the choice of those operational elements that are to be taken isolatedly has also exercised a certain influence. In our discourse we necessarily have to designate all those parts of a given operational stream which cannot be known in another way, or from another source; but why - if the economy of designation is to be considered - should we designate things that everyone knows already?

Thus it has come about that texts sometimes lack those designations which would assure a univocal transition to the designated operations. Hence there are words that are linked to two or even more different operational elements; and, above all, it happens that the designations supplied by the words' places in their succession are equivocal. If the text is being understood by a man, he eliminates these ambiguities, because in the light of his knowledge only one alternative makes sense (see the examples in PART II). Now, if, instead of a man, a machine is to understand the text - and in order to translate, to summarise, etc., it has to understand it univocally - it will be necessary to supply this machine not only with all the semantic relations but also with a fund of integrating knowledge equivalent to that of the man. There is no reason why this should not be feasible, but, at least for the time being, there are in practice two kinds of difficulties.

The first arises from the way in which men learn and remember things, which, at our present stage of technological development, is not reproducible. For man, to learn things is to carry out the operations that constitute these things; and to remember them is to set up an organic situation that will always function in the same way.
The memory of machines, on the other hand, has so far always been devised by means of registration; that is, it works on rigid, non-intersecting lines, and this makes it anything but suitable to function as a fund of encyclopedic knowledge.

But the second, greater difficulty arises from the fact that man, when he learns to think and to speak, actually carries out all the operations that he verbalises, whereas no machine could at present carry out all these operations, because neither engineering technology nor psychological analysis has reached the required stage of development. (Our research group is working on a project for a model capable of carrying out some of the human operations of observation, mental categorisation, thought and language, and will begin its construction before the end of this year on behalf of Euratom; but it is a very limited model and, even so, it has required extremely long preliminary analyses.) In any case, the machines employed at present for the immediate practical purposes of MT etc. are of the computer type, which replaces the actual carrying out of operations by code numbers; and these code numbers in the machine represent only the results of our analyses of the operations designated by a discourse.

This imposes certain limitations. What unit of discourse is to be taken for analysis and codification before input into the machine? Obviously not entire texts, for this could be done only with texts which had already been composed at the time of the machine's construction, and in this case the machine would serve to do no other work except the work already done by us. Moreover, no registration memory at present in existence could possibly contain as input units the variety of existing texts and the results of their analysis. Hence there is a general tendency to restrict analyses to the manageable number of units represented by single words. Given this restriction we can ask ourselves what operations are designated by each single word; and this analysis must show both what the word designates as an operational fragment apart from the operational flow, that is, apart from its place in the rhythmical figuration in which it occurs, and its function in the building up of the rhythmical figurations characteristic of the flow of operations.
1b) the implicit correlational structure, in which one of the correlata is not designated, either because it has been or will be designated in a context outside this correlation, or because it concerns the speaker and not what he says, and so forth. In these structures the particular modality of transition is the determining factor;

2) the binary summative structure, in which the resulting unit depends on the characteristics and on the order of the addenda. With regard to these summative structures, however, it has to be stated clearly to what extent one intends to keep apart, as separate addenda, the things designated by words. Here too, we have decided never to go beyond the single designated units and, further, to consider as units all the things that are given on a special list;

c) to consider the operational flow broken off whenever no modality of transition, or correlator, has been designated, even if the things before and after the break suggest a specific rhythmical figure that might link them in our thought but which, in order to understand the text, can be gathered only from the two correlata (as happens, for instance, in the case of two sentences separated by a full stop).

AFTER the summary remarks on the verbalisation of thought (see PART I, section 6), we now have to consider what kind of and how many semantic connections are necessary in order to establish a univocal relation between thought and language. We will only examine here those structures which are of the correlational type. As we have seen, every correlation is constituted by three elements which are characterised in two ways: a) as what they are isolatedly, b) according to the function they have in the formation of the correlation, that is to say, whether they are being used as first or second correlatum, or as correlator. From this it follows that a correlation requires at least five distinct indications: three concerning the particular things that are being correlated, and two to indicate the function of at least two of the three things (since the function of the third can be inferred from the other two). The indications must be at least five, since not even the one concerning the modality of combination or construction can be left out, for the kind of thing that usually functions as correlator can sometimes occur also without that particular function, although this is comparatively rare. It does happen, for instance, in the thought structure one refers to when one says: "And and or are modalities of construction".

The various languages have chosen different ways of supplying these necessary indications. For obvious reasons of economy, preference has been given to two basic methods of indication: a) the particular form given to the word; that is to say, part of the word is used to indicate the particular correlational function it is to have; b) the place given to a word relative to the others, i.e. its position in the propositional sequence of words.
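A minimal sketch of this five-indication scheme may make it concrete. The Python below is an illustration only (the names and the list representation are invented for this sketch; the project itself worked with matrices, not with such code): it builds a correlation from three designated things plus the declared function of two of them, inferring the function of the third.

```python
from dataclasses import dataclass

@dataclass
class Correlation:
    first: str        # first correlatum
    correlator: str   # modality of transition
    second: str       # second correlatum

def build_correlation(indications):
    """indications: (word, function) pairs, where function is 'first',
    'second', 'correlator', or None for the one element whose function
    is to be inferred from the other two."""
    declared = {fn for _, fn in indications if fn is not None}
    if len(declared) != 2:
        raise ValueError("the functions of at least two elements must be indicated")
    inferred = ({"first", "second", "correlator"} - declared).pop()
    roles = {(fn if fn is not None else inferred): word for word, fn in indications}
    return Correlation(roles["first"], roles["correlator"], roles["second"])

# Five distinct indications: three words, plus the function of two of them.
print(build_correlation([("apples", "first"), ("and", "correlator"), ("pears", None)]))
```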
The situation becomes more complicated whenever a designation points to more than one correlation. In such a case the alternatives are no longer those admitted by two or three words that designate the three things with which the three places of a single correlation have to be filled; instead the alternatives are those admitted by many words placed in a linear sequence, for they must designate the many things which are to fill the many places of the correlational net. With regard to this it is important to realise that the formal characteristics of words, if used to designate the particular things put in correlation as well as their function in constituting the single correlation, can no longer be employed for the groups of words designating the whole correlation, which, nevertheless, must be an element of another one. Two solutions have been evolved for this problem: a) stress and timing, in other words, the play of accentuation and pauses in speech, and a system of punctuation marks in writing; and b), at least in a great number of languages, formal grammatical agreement, that is to say, discrimination according to gender, number, person, and finally, according to case (wherever the case does not have a direct designatory function). Thus, where the position in the sequence of words gives no indication, it is owing to these other kinds of indication that we can decide of which correlation the designated things are a part.

Although the system of classification is still under revision, we will give here an outline of how analysis is at present carried out. The input vocabulary for our current programme includes about 50,000 inflected Russian forms, corresponding to about 2,500 headwords and punctuation marks. At the first level these were divided according to whether they can or cannot designate contents of thought. Four classes have been distinguished in this respect:

a) constructive words, i.e. words corresponding to correlanda. These in turn are divided, according to the number of correlanda which they represent, into: monoconstructive, if only one correlandum corresponds to them; polyconstructive, if more than one correlandum corresponds to them, in the same or in different correlations (see figs. 3 and 4).

b) directive words, i.e. words which only indicate operations to be performed on correlanda. Purely directive words include quotation marks, whose function is to indicate that what is between them is to be treated as a correlandum, e.g.: "A" is the first letter of the alphabet. Another function of punctuation marks is to indicate that a correlandum is exercising a correlational function which is not its characteristic one, e.g.: "a" and "the" are articles. An example of a purely directive word is, in Italian, the accent introduced to eliminate a dictionary polysemanticity. There are in this language cases in which an identical spelling has more than one meaning, e.g. àncora (anchor) and ancòra (still, yet). In the same way a change of typeface or letter size can be considered as directive. In our figurative representation purely directive words do not occupy any place in the rectangles.

c) There are other words which, according to the particular context, may have constructive or directive functions. These include the comma: constructive when it acts as a correlator, directive when it only marks a pause of grouping, e.g. "Hadrian, Emperor of the Romans,", where the first comma is the correlator of the apposition correlation, while the second simply indicates that the appositive group is finished.

d) There are also words which always exercise both functions. The full stop, for example, on one hand indicates that what has gone before is to be taken as a unit of thought, and on the other represents the correlator between one sentence and the next. If after the full stop the text does not go on, the mental category "end" will figure as second correlatum.
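As a sketch of this four-way classification, assuming an invented lexicon fragment (the real classifications were recorded on word matrices, not in code):

```python
from enum import Enum

class WordClass(Enum):
    CONSTRUCTIVE = 1   # corresponds to one or more correlanda
    DIRECTIVE = 2      # only indicates operations to be performed on correlanda
    EITHER = 3         # constructive or directive according to context (the comma)
    BOTH = 4           # always exercises both functions (the full stop)

# Hypothetical lexicon fragment mirroring the examples above.
LEXICON = {
    '"': WordClass.DIRECTIVE,      # quotation marks: treat the enclosed item as a correlandum
    ',': WordClass.EITHER,         # correlator, or a mere pause of grouping
    '.': WordClass.BOTH,           # closes a unit of thought and correlates sentences
    'вода': WordClass.CONSTRUCTIVE,
}

# Constructive words subdivide by the number of correlanda they represent.
CORRELANDA_COUNT = {'вода': 1}     # 1 = monoconstructive; >1 = polyconstructive

print(LEXICON[','], CORRELANDA_COUNT.get('вода'))
```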
This kind of classification is only made on the word matrices; it will not appear on the product matrices nor on the construction-control matrices, since it belongs to the preconstructive classification of input.

After the words had been divided into directive words and constructive words, analysis was next applied, of course, to constructive words, in respect of the operations which they represent. Since the Russian language is an inflected one, we first analysed the suffixes of nominal flexion, or declension, and of verbal flexion, or conjugation. Other suffixes, apart from their designation of correlational function, were analysed in respect of agreement, that is, when the words designate a certain correlation by the use of identity of case, person, number, or gender. For the systematisation of agreement it was necessary to classify not only the nominata but also the words. It is well known that in many languages the gender of the word corresponds only rarely with the sex of the nominatum. For example, some names of animals in Italian have only one form to designate both male and female, and this form is sometimes masculine and sometimes feminine ("il leopardo", "la pantera", etc.). The gender of the names of objects is always conventional; some names masculine in the singular become feminine in the plural ("il lenzuolo" and "le lenzuola"), and others vice versa. Classifications of this type appear on all matrices, in the columns under the items "Agreement" and "Individuation".
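A minimal sketch of such an "Agreement" classification, assuming a tiny invented table (grammatical gender must be stored per word and per number, precisely because it cannot be computed from the nominatum):

```python
# Hypothetical fragment of an Italian gender table, echoing the examples above.
GENDER = {
    ("leopardo", "sg"): "m", ("leopardo", "pl"): "m",  # il leopardo (male or female animal)
    ("pantera",  "sg"): "f", ("pantera",  "pl"): "f",  # la pantera (male or female animal)
    ("lenzuolo", "sg"): "m",                           # il lenzuolo
    ("lenzuolo", "pl"): "f",                           # le lenzuola: feminine in the plural
}

def agree(noun, number, modifier_gender):
    """Check whether a candidate modifier agrees in gender with a noun form."""
    return GENDER[(noun, number)] == modifier_gender

print(agree("lenzuolo", "pl", "f"))   # True: "le lenzuola bianche"
```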
There is, however, another solution, which arises by itself and which, at least within certain limits, makes it possible not only to do without any rule of the kind mentioned on page 16 but also to go against such rules as have been established. To understand how this is possible we have to remember what has been said about the unitary flow of thought and the way in which this is broken up into correlational structures. Certain things in it not only have arisen together, but usually arise together, at least as regards that particular relation that constitutes the modality of their construction, even if this is indicated separately. If they are then broken up, everyone still knows or notices their common origin, their reciprocal appurtenance. For instance, a quantity can be large or small; but that could not apply to material, which never appears in pieces. Thus, if one says in Italian "quantità di acqua grande", no one could help understanding that "grande" has arisen together with "quantità", and not together with "acqua", and that it refers to the first, even if its position in the sequence of words, according to the normal rules, would imply that it should refer to the second. And similarly, if one says "The trees and the fruit which hung from the branches...", no one could fail to understand that the relative refers only to "fruit", and not to the "trees"; but the situation would be different in a sentence such as this: "The trees and the fruit which the land produces...", where the relative can correctly refer to both the "trees" and the "fruit". This situation is very common, and always occurs when an expression containing "and" is referred to by a relative or followed by "from", "of", etc. See, for instance: "The boat and the fish which he has filled ...", but "The boat and the fish which he has bought ...". It.: "Una pozza di acqua circolare", but "Una pozza di acqua sporca". It.: "Un cassetto del tavolo aperto", but "Un cassetto del tavolo apparecchiato". Such expressions produce, at the formal level of analysis, a double correlational net, and, if the output language requires a particular agreement, they produce a double output.
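A sketch of how relations between nominata can resolve, or fail to resolve, such an attachment is given below. The relation table is invented and drastically simplified (the project's own inventory ran to about 100 relation types over 500 headwords); two results mean a double correlational net.

```python
# Which nouns can stand in the designated relation to the verb is looked up
# in a hypothetical table of relations between nominata.
CAN_BE_RELATED = {
    "hang":    {"fruit"},           # fruit hangs from branches; trees do not
    "produce": {"trees", "fruit"},  # the land produces both
}

def relative_attachments(coordinated_nouns, verb):
    """Return the possible attachment sets for a relative clause following
    an "and"-coordination. Two results mean a double correlational net."""
    compatible = [n for n in coordinated_nouns if n in CAN_BE_RELATED.get(verb, set())]
    if compatible == coordinated_nouns:
        # Ambiguous: the relative may span the coordination or only the last noun.
        return [coordinated_nouns, [coordinated_nouns[-1]]]
    return [compatible]

print(relative_attachments(["trees", "fruit"], "hang"))     # one net: [['fruit']]
print(relative_attachments(["trees", "fruit"], "produce"))  # two nets -> double output
```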
Our representation of things already places them in a certain way and suggests certain relations, while others are excluded; this is so at least wherever there is a choice of alternatives. One of the things which most helps a human translator in understanding a text is the whole representational world which words continually evoke. It is on this rich representational material that the human translator really begins to work, guided only secondarily by the formal suggestions of the words. Someone who reads, for instance, a newspaper headline sets up a whole mental and representational network; if at a certain point in the text a word is found that the dictionary defines as polyvocal, it will nevertheless be understood immediately in one single sense (the sense best adapted to the general setting). In fact the human reader, or translator, more often than not does not notice polyvocality in a text at all. The machine, by contrast, must have recourse to a "notional sphere" merely to eliminate polysemanticity.

In the case of a machine, if we want the input to be limited in number, the largest pieces acceptable will have to be words, or even smaller units; and of the things designated by these units one will have to indicate from the beginning all the correlational possibilities of the two relevant kinds (i.e. which things are put in correlation, and what correlational functions they carry out). Then, as the words are put in one by one, one will check what happens when the things indicated by them meet one another. The makers of our languages to some extent relied on the relations between nominata, which relations are known independently of their linguistic expression in a discourse. Thus they did not always provide all the linguistic indications needed to assure the correct connections between language and thought. In order to overcome this difficulty it will be necessary to carry out another kind of analysis, which has to concern just these relations between the nominata; and this can be done in two ways: a) by breaking things up into the smallest possible component operations, and assigning to the machine as its program the task of finding the possible relations between things by examining their compositions; b) by analysing the relations between certain things, and things in certain relations, as a notional sphere, that is to say, as a unit of knowledge.

In the last two years we have isolated about 100 salient types of relations, in a field of 500 headwords (see figure 9 and Appendix I). It was not always easy to find the salient relations, and their formulation was not always simple: sometimes a long chain of indirect relations was involved; at other times a more or less sophisticated cultural reference is called upon. An interesting example of this arose when, in the course of making a figurative representation of the correlational net of the Lord's Prayer (Latin input) for a Euratom report, we realised that the words "fiat voluntas tua sicut in caelo et in terra" give rise to as many as three different correlational nets:

I. fiat voluntas tua sic- (-ut in caelo et in terra), where the whole expression "ut in caelo et in terra" is taken as a term of comparison (the indicative "facta est" is to be interpolated in the expression).

II. fiat voluntas tua sic- in caelo -ut in terra, which gives exact parity to the two terms (sive ... sive), and interprets "et" as referring back to the earlier "ut".

III. fiat voluntas tua sic- (-ut in caelo) et in terra, where "ut in caelo" is taken as a term of comparison, and "et" as equal to "etiam" (the indicative "facta est" is to be interpolated in the expression "ut in caelo").

In such a case the human translator might be in doubt as to which of the three structures should have the preference, given that he becomes aware of all three possibilities; but he would certainly give a single final translation, choosing only one of them. At this level it still proves difficult to make a satisfactory analysis, apply classifications, and formulate rules. In any case, given the actual situation of the relations between language and thought, the usual discrimination into syntactic analysis and semantic analysis is not the most convenient, if for no other reason than that inherent in the conventional way in which these analyses have been understood, which contains the supposition that language consists of two parts: meaningful, or semantic, words and logical, or syntactical, words. This division is misleading, because if something enters to form part of language, and does not remain mere phonic or graphic material, it is always because one considers its designatory function.

A satisfactory program of analysis aimed at the mechanisation of understanding a text (even if this is limited to its substitution by the correlational net that corresponds to the text when a man understands it) would, in my opinion, have to comprise, like the following, an analysis in four directions; a schematic sketch follows the list.

A) Examine: 1) all that a word, taken singly, designates by means of its form, and all it designates, taken in conjunction with others, by means of its place in the sequence; 2) the completion of this examination by an examination of the relations arising between the nominata, as a result of their operational contents.

B) Discriminate neatly the designations: 3) into designations with the purpose of indicating which particular things are to be put in correlation; and 4) into designations with the purpose of indicating the correlational function these things have in constituting the correlation; that is, whether they function as correlator, as first correlatum, or as second correlatum.
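A skeleton of such a four-direction analysis, in Python. Every helper here is an empty placeholder of my own invention, since the real classifications live in the project's word matrices; only the control flow corresponds to points (1) through (4) above.

```python
def analyse(words):
    results = []
    for i, word in enumerate(words):
        designata = designata_from_form(word)            # (1) via the word's form
        designata |= designata_from_place(word, i)       # (1) via its place in the sequence
        designata = complete_with_nominata(designata)    # (2) relations between nominata
        things = which_things(designata)                 # (3) what is to be correlated
        functions = which_functions(designata)           # (4) correlator / first / second correlatum
        results.append((word, sorted(things), sorted(functions)))
    return results

# Placeholder implementations (pure assumptions, for illustration only):
def designata_from_form(word):        return {("form", word)}
def designata_from_place(word, i):    return {("place", i)}
def complete_with_nominata(d):        return d
def which_things(d):                  return {x for x in d if x[0] == "form"}
def which_functions(d):               return {x for x in d if x[0] == "place"}

print(analyse(["fiat", "voluntas", "tua"]))
```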
(a) Our analyses have thus been directed rather more towards thought than towards language (at least with regard to those aspects of language that usually are considered formal). The results of this research, therefore, also belong to a general linguistics of which MT is an application and a test. As you will soon notice, ours is a novel kind of linguistics compared to traditional linguistics. For while the traditional studies start more or less directly from language and its formal aspects, ours start from an analysis of thought. In fact, we ask what language is, how it works, and how it matches thought, after an analysis of thought and its contents. The analysis is also of a novel kind, because the contents of thought have hitherto been conceived as static units, while we conceive them in terms of operations.

(b) The individuation and description of the operations by means of which man translates also serve as a warning against certain dangers for those working in this field. The recent history of the first attempts to mechanise man's mental activities, such as perception, thought, translation, summarising, etc., clearly shows where these dangers lie. Since the philosophers and psychologists have refrained from supplying an analysis of these superior activities in terms that might be of use for mechanical construction, the engineers set themselves up as philosophers and began to improvise. They did not hesitate to consider identical procedures that have little or nothing in common. A fictitious example may serve as an illustration of what has happened. Within certain limits it is possible to produce the results of arithmetical operations either by actually carrying out these operations or by using results that have previously been obtained and memorised, that is, by simple substitutions; but would it be justified, in the second case, to say that the man or the machine "calculates"? Similarly, if machines or men do no more than substitute the words or groups of words of a prefabricated translation for the words or groups of words of a text, would it be justified to say that they "translate"?

Confusions of this kind are detrimental on various levels. On the theoretical level, for instance, they lead to the neglect of essential studies (in our case the most important branches of research concerning language and thought would no longer be furthered). On the psychological level they create an excessive initial optimism, because the problem would not only seem easier than in fact it is, but would appear already solved; this optimism will then give way to a no less excessive pessimism when it proves impossible to get beyond the limitations of the adopted solution, and one will then conclude that the mechanisation of this particular activity is impossible (in our case the conclusion would be that MT is impossible). On the practical level, finally, it might be that what seemed the quickest and most economical method turns out the slowest and most costly; because one might be led to constructing machines which yield only the one kind of result, instead of constructing machines which, since they reproduce the much richer human activity that produces these results among others, can be used, with small modifications, to obtain many other results as well (in our case, as we shall try to explain, the same analysis of an input text can be used for output in diverse languages, for summarising, for documentation, etc.). Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
761
0.00657
null
null
null
null
null
null
null
null
71162db89ed8003dc810bb644a13084a1affc02b
244077634
null
A fourth level of linguistic analysis
Note the article: it is 'a', not 'the'.
{ "name": [ "Zarechnak, Michael" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
0
0
null
THE GAT (Georgetown Automatic Translation) programs for Russian/English Machine Translation have, up to the present time, provided for three levels of linguistic analysis (morphological, syntagmatic, syntactic). The machine translation output produced by these programs has been subjected to further structural analysis in order to ascertain its strengths and weaknesses. The first result of this analysis was reported in Los Angeles at the National Symposium on Machine Translation, Session 6, on February 4th, 1960.

The purpose of this paper is to present structural data in order to show why it is necessary to introduce a fourth level into the analysis of the input language to significantly improve the output in the target language. The improvements would affect the following:

1. The Russian case endings would be transferred into English predominantly on the basis of the kernel structures within which they occur, rather than on the present basis of syntagmatically related words. Thus the span of the linear search to select a proper equivalent for the Russian case endings would be increased.

2. The rearrangement of the English output would be based on generalised structural patterns, reducing reliance upon specific lists. The result will be fewer exceptions to the rearrangement rules.

The routines which would be worked out according to these conditions would facilitate the introduction of the analysis of semantic components within a kernel structure on the operational level.

In our experimental approach to MT, we found that certain assumptions had to be modified in the light of experience. As an example, I refer to the structure of a genitive noun-noun government string. In translating the genitive case from Russian into English, the following rules served as a basis for the algorithm (sketched below). The substance of the genitive transfer routine is as follows:

1. If a word in the genitive case is the first one in the government structure, the translation of the genitive case is zeroed;

2. If not, and if the word is not listed as an exception, the genitive case is translated by the preposition "of".
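A minimal sketch of this baseline routine, assuming an invented (and here empty) exception list; the paper does not give the actual list:

```python
# EXCEPTIONS would hold genitive nouns translated by something other than
# "of"; it is left empty here as a placeholder.
EXCEPTIONS = {}

def transfer_genitive(noun, first_in_government_structure):
    """Return the English rendering of the Russian genitive morpheme."""
    if first_in_government_structure:
        return ""                          # rule 1: the translation is zeroed
    return EXCEPTIONS.get(noun, "of")      # rule 2: "of" unless listed as an exception

print(transfer_genitive("тезисов", False))   # 'of'
```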
An analysis of a translated corpus recently brought to our attention problems which make it necessary for us to initiate not only quantitative changes, such as increasing the list of complex prepositions, but also qualitative changes which will replace the given routine by a new one. The genitive transfer routine for the noun in the genitive case (N C2) was based on computer-generated codes. It was assumed that in a string of two or more nouns, the second (or third, etc.) noun in the genitive case belonged semantically to the first. This assumption proved inadequate in practice.

Structurally, it became apparent that two or more nouns in the genitive case do not automatically signal a semantic relationship. The conditions which prevented two nouns in the genitive case from being considered as a noun phrase were the following:

1. The second noun belongs to a nested structure. Example 1: Все скопившиеся за день тучки (All the small clouds which gathered during the day).

2. The second noun (or the third, etc.) is governed by the predicate of the sentence. Example 2: Дураки нанесли лесу ущерба не меньше хищников (Vandals have done as much harm to the forests as commercial exploiters).

The above examples indicate that the phrase structure exists within the sentence structure. Therefore, the problem of the hierarchy of government structures is introduced. It is our belief that the sentence type has to be determined before the subsentence units (phrases) are determined. This in turn raises the perennial problem of the relation of meaning and form. In order to determine the grammatical function of a given form, one has to know its ontological meaning. Similarly, to select its ontological meaning, one has to know its grammatical function. Theoretically this seems to be a vicious circle. However, experimentally, in any given sentence, if one knows the subject matter and the sentence nuclei in Russian, there is little or no problem in determining both the function of the form and the ontological meaning of the word.

The above-mentioned problem is illustrated by translation samples of the nouns in the genitive case. If a genitive Russian string is translated only slightly differently (for example, as to the order of words, or the suppression of the ending of a noun in the genitive case), the translator would be tempted to think of ad hoc solutions. Example 3: Destroying a part of the productive forces. On the other hand, if the given genitive Russian structure is transferred by a sentence, the difference is more apparent. Example 4: Перед наступлением кризиса (Before the crisis occurs).

It is suggested that: 1. the genitive string might have been formed from a sentence; and 2. the information conveyed in such a genitive string could be usefully analyzed to discern the semantic components of the genitive string as well as of the sentence. To summarize, transformation of the genitive string into the sentence kernel facilitates the analysis of structural genitive relations, and transformation of the sentence kernel into the genitive string aids in analyzing the semantic components of the sentence structure. Therefore, if the binary genitive structure (i.e., in terms of each successive pair of nouns) is reduced to the sentence kernels from which the genitive string was formed, any genitive structure could be operationally classified by sub-classes based on the type of kernel into which the genitive string is transformable. Each kernelized sub-class of the genitive string could again be operationally subdivided into sub-classes of governing and governed nouns.

Experimentally, kernels created from genitive strings were observed as follows. We start with a two-positional string in which only one of the nouns must be in the genitive case (usually the second). We call the first noun N1 and the second N2. (Vx = reflexive verb; V = verb; A = adjective; P = preposition; "is" = any auxiliary verb; the noun in parentheses after V, Vx, or A marks the noun from which that verb or adjective is formed.)

a) (N1 N2) → (N2 Vx(N1)). Example: обсуждение тезисов → тезисы обсуждаются

b) (N1 N2) → (N2 V(N1)). Example: постановление пленума → пленум постановил

c) (N1 N2) → (N2 is A(N1)). Example: возможность реализации → реализация возможна

d) (N1 N2) → (N1 P N2). Example: программа подъема → программа по подъему

e) (N1 N2) → (N1 N2). Example: в ряде районов → в ряде районов (left unchanged)

From the kernelization procedure it is obvious that the noun in the genitive case occupied the subject position and the other noun the predicate position. This correlation is operationally important for the selection of the translation for the genitive morpheme. Once the subject-predicate positions are established, the remaining positions would be distributed among the identity sub-classes such as adverbs, adjectives, particles, and conjunctions.
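A sketch of the five two-position kernel types follows. Which pattern applies to a given noun pair would in practice be decided by the derivational codes on the noun stems; in this illustration the pattern letter is simply passed in by hand.

```python
KERNEL_PATTERNS = {
    "a": "{n2} Vx({n1})",   # обсуждение тезисов -> тезисы обсуждаются
    "b": "{n2} V({n1})",    # постановление пленума -> пленум постановил
    "c": "{n2} is A({n1})", # возможность реализации -> реализация возможна
    "d": "{n1} P {n2}",     # программа подъема -> программа по подъему
    "e": "{n1} {n2}",       # в ряде районов (left unchanged)
}

def kernelize_pair(n1, n2, pattern):
    """Render the kernel formula for a two-position genitive string."""
    return KERNEL_PATTERNS[pattern].format(n1=n1, n2=n2)

print(kernelize_pair("обсуждение", "тезисов", "a"))
```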
If the genitive string exceeds two positions (a position is defined as that which is occupied by a noun-like word), a test is conducted to determine:

1. Whether more than one kernel formed the genitive string;
2. Whether one kernel had identity sub-classes;
3. Whether the multiple genitive string is not a kernelizable unit.

1. (N1 N2 N3) → (N1 P N2) + (N3 Vx(N2)); that is, (N1 N2) → (N1 P N2): система для организации; (N2 N3) → (N3 Vx(N2)): труд организуется. Example: система организации труда → система для организации + труд организуется.

2. (N1 N2 N3) → (N1 N2) + (N3 V(N2)); that is, (N1 N2) → (N1 N2): число самоубийств; (N2 N3) → (N3 V(N2)): люди кончают самоубийством. Example: число самоубийств людей → люди кончают самоубийством.

3. (N1 N2 N3) → (N1 P N2) + (N2 P N3); that is, (N1 N2) → (N1 P N2): волна от банкротств; (N2 N3) → (N2 P N3): банкротства на предприятиях. Example: прокатывается волна банкротств промышленных предприятий.

The patterns of combinatory kernelizations are listed in Appendix 2; a sketch of the pairwise reduction follows.
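The sketch below reduces a multi-position genitive string pairwise. The pair classifier is a stub of my own devising (a real one would consult the derivational and semantic codes on the noun stems); a None result means the multiple genitive string is not a kernelizable unit.

```python
def classify_pair(n1, n2):
    """Stub: return the kernel pattern letter for a noun pair, or None if
    the pair is not kernelizable."""
    table = {("система", "организации"): "d", ("организации", "труда"): "a"}
    return table.get((n1, n2))

def kernelize_string(nouns):
    """Reduce a multi-position genitive string into successive pair kernels."""
    kernels = []
    for n1, n2 in zip(nouns, nouns[1:]):
        pattern = classify_pair(n1, n2)
        if pattern is None:
            return None    # not kernelizable as a unit
        kernels.append((n1, n2, pattern))
    return kernels

print(kernelize_string(["система", "организации", "труда"]))
```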
It has been found that in certain instances the genitive case cannot be translated solely on the basis of a pair of nouns co-occurring in a genitive string. Example: Затрата многих десятков дней труда большого числа рабочих (The expenditure of many weeks of labour by a large number of workers). The English preposition "by" is not conditioned by the words "labour" and "number" but rather by the word "expenditure". Since the translation of a noun in the genitive case may depend on more than two co-occurring nouns, it can be concluded that the entire genitive string should be analyzed before the English translation is selected. Analysis of the genitive structure as a unit means that the sequence of the semantic components in a structure occupies two or more positions from which the genitive string is formed.

The preliminary analysis of approximately 10,000 genitive structures demonstrated that it is the sequence of the sub-classes of the nouns, rather than the class of the nouns itself, which determines the classification of the semantic components within the genitive structure. Furthermore, it was shown that the sequence of semantic components is rigorously structured. The sub-class of inanimate concrete nouns (discernible by the human senses; for example стол) shows certain patterns of predictable sequences. These are listed in Appendix 3 with accompanying examples.

The following five problems were considered as relevant for the transfer into English (the translation by "of" and by zero has been previously mentioned: see p. 4).

1. The translation of the genitive morpheme by the following set of English prepositions: "for", "in", "by", "on". Example (as translated by the human translator): Wars on a world-wide scale.

2. The noun in the genitive case is zeroed between two nouns. Previously the genitive case was zeroed only if it occurred with the first word in a prepositional structure. Example: путем сокращения (N1) времени (N2) обращения (N3) капитала (N4). If kernelized, the structure would break down as follows:

(1) (N1 N2) → (N2 Vx(N1)): время сокращается
(2) (N2 N3) → (N2 P N3): время для обращения; or (N2 is A(N3)): "обращающееся время", "обращаемое время"
(3) (N3 N4) → (N4 Vx(N3)): капитал обращается

It is clear that (1) and (3) are kernels. Note that the verb is used transitively in (1) and intransitively in (2). This suggests the rule:

(1) If the predicate equivalent in the genitive string is formed from a transitive verb, the genitive case of the following governing noun would be zeroed.
(2) If the predicate equivalent is used intransitively, the following governing noun would receive the preposition "of".

Thus the above genitive string would be translated: "by curtailing the circulation time of capital". The reverse was effected by transformation (2), which resulted in (N2 P N3) and (N2 N3) → (N2 is A(N3)).

3. The noun in the genitive case is transformed into an adjective, and its governing noun is rearranged into the second position. This constitutes a simple reverse. Example: потогонная система организации труда (sweatshop system of work organization). If an adjective precedes such a noun in the genitive case, this would be a multiple reverse. Example: двигатели внутреннего сгорания (internal combustion engines).

4. There may be a number of problems within a single genitive structure of more than two positions. In such cases, the order of testing solutions becomes important. This constitutes zeroing plus reverse. Example: при сохранении капиталистической системы хозяйства (while retaining the capitalistic economic system).

5. The genitive structure could be replaced by an English sentence. Example: до наступления кризиса → до того как кризис наступил (before the crisis occurs).

The additional coding of nouns will include markers indicating each stem's derivational capacity, i.e. whether or not the given noun-stem is transformable into V or A. This code will be utilized in kernelization formulas (algorithms). The semantic sub-classes of nouns will also be coded; this code is operationally produced, as is apparent from Appendix 4. It will be used for the generalization of preposition selection in translating the genitive case in such cases where a pair of nouns is not kernelizable or the kernelization is insufficient.

Step 1: If the item is C2 and it carries the code 5122 or 512x and it is first in the string, transfer by ZERO.
Step 2: If the C2 does not carry the code 5122, but carries the code 1122 and at i-1 there is ":" or "," or U-6, transfer by ZERO.
Step 3: If the item is C2 and it does not carry the code 5122 and it carries the code 1122 and there is no ":" or "," or U-6 at i-1, but it carries the code 3112 and there is ":" or "," or U-6 at i-n (before the first item carrying the code 3112), transfer by ZERO.
Step 4: If the item does not carry the code 5122 or 1122, but it does carry the code 2122 or 4122, transfer by ZERO, if the item is the first noun in the stretch.
Step 5: If the C2 and i-1 is два or три or четыре or a number smaller than 1,2, transfer by ZERO.
Step 6: If the C2 carries the code 3112 and the item before the 3112 stretch is целью, transfer by ZERO.
Step 7: If the C2 and the item at i-1 is вследствие, transfer by ZERO.
Step 8: If the C2 and i-1 is кривые, transfer by "FOR".
Step 9: If the C2 and i-1 or i-2 is отношении, transfer by "TO".
Step 10: If the item is C2 and it carries the code 1122 and it does not carry the code 5122 and it does not carry the code 3112 and there is no ":" or "," or U-6 at i-1, transfer it by "OF", and insert it immediately before the i-item.
Step 11: If the item is C2 and it carries the code 1122 and there is no ":" or "," or U-6 at i-1 and there is code 3112 and there is no ":" or "," or U-6 at i-n (before the first item carrying the code 3112), transfer by "OF" and insert it immediately before the first item carrying the code 3112.
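A simplified sketch of these steps for a single genitive item follows. Only a subset of the eleven steps is reproduced, the code values are copied from the text without further interpretation, and the word-insertion side effects of Steps 10 and 11 are omitted.

```python
def transfer(codes, words, i, first_in_string):
    """Return the English rendering for the genitive (C2) item at position i.
    `codes` are the item's semantic codes; `words` is the surrounding text."""
    PUNCT = {":", ",", "U-6"}
    prev = words[i - 1] if i > 0 else None
    if ("5122" in codes or "512x" in codes) and first_in_string:
        return "ZERO"                                      # Step 1
    if "5122" not in codes and "1122" in codes and prev in PUNCT:
        return "ZERO"                                      # Step 2
    if ("2122" in codes or "4122" in codes) and first_in_string:
        return "ZERO"                                      # Step 4
    if prev in {"два", "три", "четыре"}:
        return "ZERO"                                      # Step 5
    if prev == "вследствие":
        return "ZERO"                                      # Step 7
    if prev == "кривые":
        return "FOR"                                       # Step 8
    if "отношении" in words[max(0, i - 2):i]:
        return "TO"                                        # Step 9
    if "1122" in codes and "5122" not in codes and "3112" not in codes \
            and prev not in PUNCT:
        return "OF"                                        # Step 10 (insertion omitted)
    return "OF"                                            # fallback (an assumption)

print(transfer({"1122"}, ["сокращения", "времени"], 1, False))   # 'OF'
```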
[Appendix 2 (patterns of combinatory kernelizations): the table of kernel formulas is too garbled in the source to be reconstructed.]

APPENDIX 3

Legend to Appendix 3:
ф -- position of the concrete noun
QNT -- quantifier
PART -- portion of the whole
STR -- structured
UNSTR -- non-structured
QLT -- qualifier
PRI -- process intransitive (deverbal noun)
PRTR -- process transitive (deverbal noun)

THE SEMANTIC COMPONENT SEQUENCE

If an inanimate concrete noun is preceded by another noun (or nouns), the following sequence pattern of semantic sub-classes is observed. If the noun is singular, the sequence on the left side applies; if the noun is plural, or "Massive", the right sequence applies. The zero stands for the position occupied by the given noun. The rest of the numbers indicate the expected positional sequences. If some of the indicated positions are zeroed, the higher position "shifts" accordingly, i.e. relates directly to the lower position (if present) or to the noun itself if there are no lower positions. Arrows indicate this possibility. The minus sign indicates that the designated positions of semantic sub-classes precede the zero position. The plus sign indicates the opposite.
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
761
0
null
null
null
null
null
null
null
null
da3ae9324b6e630f02bcf2c0952c19eb8d2e0b8d
44955190
null
Structure at the lexical level and its implication for transfer grammar
1. The girl was dead. 2. He became president. 3. He worked all day. 4. The time elapsed quickly. 5. Solving the problem fatigued him. 6. He polishes the arrow. 7. He shot a hole in the wall. 8. Both of the brothers built a house. 9. He shook his finger. 10. He knows the answer. 11. He knows that you were there.
{ "name": [ "Klima, Edward S." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
4
9
null
IN the following discussion I shall present preliminary results from an investigation of structuring within the lexicon of a language. These results suggest that in certain areas of the lexicon lexical items must be characterized in terms of the presence or absence of specific recurring lexico-semantic components. Furthermore, there seems to be some promise that correspondence between lexical items of different languages may be reducible to mutual correspondence between their more discrete lexico-semantic components. Take, for example, the verbs in expressions of the following types: LEARN A WORD, KNOW A WORD, LOOK AT A PERSON, SEE A PERSON, LISTEN TO A SOUND, HEAR A SOUND, GET SOMETHING, HEAR SOMETHING, etc. Granted the pairing into LEARN : KNOW, LOOK AT : SEE, etc., I shall show that, far from representing discrete pairs unrelated further in lexical structure, the first members of the pairs differ uniformly from the second members; i.e., LOOK AT is to SEE as LISTEN TO is to HEAR. Preliminary investigation of certain other languages shows that a comparable relationship holds among pairs like the French REGARDER : VOIR, ECOUTER : ENTENDRE, etc. Recognition of such interlanguage correspondence provides the basis of a structural explanation for questions like the following: in what sense does "Je vois cela" correspond more closely to "I see that" than does "Je regarde cela"? (A sketch of such component-based correspondence is given below.)

In this discussion, the more detailed descriptive statements about English, as well as the general remarks about correspondence between English sentences and those of some other language, should be considered in the framework of what I shall call transfer grammar, a term which has already been used by Z. Harris 1, though with certain differences. A transfer grammar consists of the rules appropriate for carrying the sentences of one language, given their structure, into the corresponding sentences of another language, also given their structure. Such a grammar thus describes, i.e. analyzes, the relationship of "correspondence" holding between certain structures of one language and those of another. For the moment, we can consider as a corresponding sentence one which a bilingual speaker would offer as such. We shall not consider any complicated or border-line cases. Above and beyond the simple word-for-word rules (or even part-of-speech-for-part-of-speech), implying identical higher structure, the description of correspondence between different natural languages must meet demands made by differences in constituent structure and by the abstractness of certain construction types; i.e., by the absence at the word level of unambiguous markers of higher level differences. In the field of machine translation in particular, much of the recent refinement in describing interlanguage correspondence has been in that direction. In this paper, attention will be directed in another direction: toward possible refinements in correspondence analysis entailed by further structural characterization of lexical items, and in particular, of verbs in terms of their relationship to subject and object.
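A hypothetical sketch of correspondence reduced to lexico-semantic components: each verb is decomposed here into a sense-modality component plus an activity/state feature (component names are my own labels, not the paper's), and transfer matches on the shared components.

```python
EN = {"look at":   ("vision",  "activity"), "see":      ("vision",  "state"),
      "listen to": ("hearing", "activity"), "hear":     ("hearing", "state")}
FR = {"regarder":  ("vision",  "activity"), "voir":     ("vision",  "state"),
      "écouter":   ("hearing", "activity"), "entendre": ("hearing", "state")}

def corresponding(english_verb):
    """Return the French verbs sharing all lexico-semantic components."""
    components = EN[english_verb]
    return [v for v, c in FR.items() if c == components]

print(corresponding("see"))       # ['voir']: why "Je vois cela" ~ "I see that"
print(corresponding("look at"))   # ['regarder']
```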
Interest in the problem of verbal categories is not new; it dates back to classical Greek philosophy. Here only a few more or less recent, selected remarks from linguistics and linguistic philosophy will be mentioned. Consider the following sentences:

1. The girl was dead.
2. He became president.
3. He worked all day.
4. The time elapsed quickly.
5. Solving the problem fatigued him.
6. He polishes the arrow.
7. He shot a hole in the wall.
8. Both of the brothers built a house.
9. He shook his finger.
10. He knows the answer.
11. He knows that you were there.

The problem involves the following notions about such sentences: a) that "grammatically" all of the sentences are the same in having a subject and a predicate, and that (5) through (10) at least, but not (2) and (3), are grammatically the same in having a transitive verb and a direct object; b) that "notionally" or "semantically" the verbal categories are not the same: KNOW as in (10) and (11) refers to a state, and similarly the predicate in (1), WORKED as in (3) to an activity, BECOME as in (2) to a transition; and that the relationship between different verbs and their objects is not the same: in sentences like (6) the object can be described as one of effect (i.e. the arrow is affected by the polishing); in (7), on the other hand, the object is one of result (i.e. the shooting results in the hole), as is the case in (8); and in (9) the object is one of instrument (i.e. the finger was used in the action).

Opinion has differed considerably as to the structural status of such observations, and particularly of those in (b). Certain linguists have remarked that a purely notional characterization of these differing relationships could be made according to any number of criteria. As Jespersen writes: "...on account of the infinite variety of meanings inherent in verbs the notional (or logical) relations between verbs and their objects are so manifold that they defy any attempt at analysis or classification". 2 In the form presented here (which is essentially the same way that they are described in the grammars that mention them) there is some question whether these distinctions are a grammatical matter at all. The following remarks, though not made specifically about English, are also relevant here: "Dabei sind die Begriffe des Zieles, des Objekts, der Zeitdauer usw. in der Grammatik nicht weiter zu definieren, sondern sie sind als Realitäten anzusehen, welche in der Anschauung der Sprechenden vorhanden sind ..." [In this connection the concepts of goal, object, duration of time, etc. are not to be defined further in grammar; rather they are to be regarded as realities present in the intuition of the speakers ...] "... man kommt natürlich immer wieder zu der Erkenntnis, dass in der Sprache selbst nichts gegeben ist als der Verbalbegriff und der Nominalbegriff und dass eine Eintheilung des Stoffes zwar unvermeidlich, eine jede aber nicht frei von Willkür ist." 3 [... one naturally comes again and again to the realization that nothing is given in language itself except the verbal concept and the nominal concept, and that a division of the material, though unavoidable, is in every case not free from arbitrariness.] Hirt, in criticizing Behaghel's use of "Berührtes und erzeugtes Objekt" (object of affect and of result, respectively), goes so far as to claim that the opposition is of no significance whatsoever. 4

While the observation of such differences is certainly not counterintuitive, still the criticism that these distinctions are not part of linguistic structure is justified when their assumption has no further consequence, i.e., when nothing is gained but satisfaction of Sprachgefühl by ascribing a structural nature to such distinctions, as undoubtedly would be the case in a possible classification into "legal and illegal" depending on the activity associated with the word. (That the subject-verb and verb-object relationships seem to be more basic is no valid argument, since the impression that certain distinctions are more basic to the language is one of the things we hope to make more explicit by structural description.)

Whorf has his own characteristic interpretation of the subject-predicate relationship, an interpretation very much in line with his notion of language shaping thought. What he does is to reject the intuited notional differences and project one particular dominant notional characterization over the whole system.
In the article "Language, Mind, and Reality" Whorf compares the sentences "I strike it" and "I hold it" and says of the latter that though HOLD "in plain fact is no action, we ascribe action to what we call HOLD because the formula, "substantive + verb + actor = his action" is fundamental in our sentences." 5 Even if we grant the basic correctness of his observation about the similarity between HOLD and STRIKE, the nature of what he calls "action" in the relationship "actor + his action" is not at all clear, for what Whorf intends by the word "action" on the one hand is nowhere explicitly stated and on the other hand is certainly not what we regularly understand by the word. That is to say, as it stands now, HOLD according to Whorf is an action which "in plain fact is not an action". Without a characterization of this special sense of "action", the statement is self contradictory and viewed from outside the language where paradoxes like this are deprived of the flashes of intuition capable of resolving them, at best reflects cognizance that some significant similarity or other exists here.There is a contemporary school of philosophy, so-called linguistic philosophy, which aims at ridding philosophical discussion of just such misuses of ordinary language. Much attention is paid to distinctions among verbs suggesting processes, states, occurrences, etc., the objective being the description of the concepts which result in our particular use of such verbs. 6 Vendler presents an interpretation in terms of a system of time relations based on a classification of "verbs" into four types: activity terms like "pushing a cart", accomplishment terms like "drawing a circle", achievement terms like "reaching the top" and state terms like "knowing geography". 7 The classification is based on differences in usage. Many of the observations used to support his temporal interpretation are linguistic in nature. He points out that some "verbs" (e.g. in "He reached the top") are incompatible with certain lexically paraphrasable expressions implying duration of time (e.g. "for three hours") and that certain "verbs" (e.g. "He knows a good restaurant") do not occur with elements more properly syntactic and without any one consistent structurally equivalent paraphrase (e.g. the continuous tense). These two types of criteria, unfortunately, are treated as if they were equally well within our command. In fact, the observations on the whole are made in a framework without any defined linguistic structure. That he often uses "verb" in the sense of predicate or verb phrase is just a terminological matter, but from a linguistic point of view it frequently obscures the fact that rather minor variations in (98026) sentence structure entail radical differences according to his classification. The absence of a complement in "He pushed the cart (into the garage)" makes the difference between an accomplishment term and an activity term. Singular number versus plural number in the direct object as in "He drew a circle" versus "He drew circles" represents the same difference. One notes the great complexity of the interrelation between grammatical devices and notions of time. The part of Vendler's paper that touches the subject-verb relationship with which we are concerned centres around the "well known differences between verbs that possess continuous tenses and verbs that do not ... This difference suggests that running, writing, (as opposed to knowing, recognizing) are processes going on in time, i.e. 
roughly that they consist of successive phases following one another in time". Included among processes going on in time are the "pushing a cart" type and the "drawing a circle" type, but not the "reaching the top" type or the "knowing geography" type; yet Vendler's interpretation of the time notion which he supposes to be associated with the so-called continuous tense excludes the occurrence of that tense with achievement terms like "reaching the top", although in fact we do in normal speech say "He is reaching the top" and "He is winning the game". Furthermore, the notion "process" (or its further clarification as "phases following one another") is hardly very revealing when used to characterize a verb such as that in "The old man is leaning against the wall".* Let us return now to the set of sentences given earlier and consider the structural correlates to the notion expressed there of grammatical similarity: 4) The time elapsed quickly 5) Solving the problem fatigued him. In all of those sentences, there is fairly strong evidence of a grammatical nature for assuming that HE, THE GIRL, SOLVING THE PROBLEM and the other words and phrases that we conventionally call "subject" are, in fact, all representatives of a single grammatical category, and that all of these sentences have in common the structural break-down into subject + predicate, despite not only differences in the constituents themselves but also various environmental incompatibilities (e.g. the so-called subjects vary from single words like HE to whole phrases like SOLVING THE PROBLEM; among examples of the second type could be cited the fact that not all subjects are compatible with all predicates: although we have (4), we do not have "John elapsed quickly"; i.e., there is not even mutual interchangeability between elements which we suggest represent the same grammatical category). Among the evidence motivating a common analysis could be cited the occurrence of the "subject" forms of the pronoun (HE instead of HIM, etc.), agreement in number on the part of the verb, and various syntactic phenomena in which the basic relationship of subject and predicate is maintained regardless of the particular forms that the constituents assume (e.g. "They expected solving the problem to fatigue him", related to (5), as well as "They expected the time to elapse quickly", related to (4)). In very much the same way, from a grammatical point of view, a uniform structural analysis corresponding to what is commonly called "the verb and its object" is motivated by the general correspondence of passive sentences (thus unlike sentences with the copula, e.g. "He was the criminal"), by the occurrence of related sentences with WHAT or WHO(M) instead of the object, and so on. * Joos, as I recall, mentions special features in verbs like SIT, LIE, LEAN, shared by those like KNOW, HEAR, in that with these the simple present cannot occur with the future adverb TOMORROW: "We leave for Washington tomorrow" but not "I know the song tomorrow", only "I will know the song tomorrow". Martin Joos, "Process and Relation Verbs in English", oral presentation of a paper at the December 1959 meeting of the Linguistic Society of America.
Notwithstanding such similarities in grammar between the two groups as the break-down into subject + predicate, or the membership in both of predicates with the analysis verb + object, the sentences of (A) differ from those of (B) in their behaviour with respect to the following type of construction: "What he did was to strike the child" or, without the infinitival marker "to", "What he did was strike the child". For all sentences of class (A) there are related sentences with the DO-locution, while those of class (B), in ordinary usage, lack the same correspondents; e.g. "What he did was learn the answer" but not "What he did was know the answer"; "What he did was make the chair" but not "What he did was see the chair"; "What he did was buy a car" but not "What he did was have a car". Similarly, with a second set of examples there is a related differentiation in the occurrence of "What he is doing is learning the answer" but not "What he is doing is knowing the answer"; "What he is doing is buying a car" but not "What he is doing is having a car", etc. The second set of examples of differentiation is not so significant, in that the occurrence of the more complicated construction "What he is doing is Verb + ing" is dependent on the possibility of occurrence of the simpler construction "He is Verb + ing", and thus "What he is doing is having a car" could be considered, if taken alone, as excluded simply on the basis of that non-occurrence. The first set of examples, where the presence or absence of the progressive is not relevant, shows that there are independent reasons for considering sentences like "He had a car" different from those like "He bought a car". (This will be relevant in describing the interlanguage correspondences where the other language does not have a corresponding grammatical form.) In the differentiation between expressions which occur with "What he did was ..." and those which do not, we have a structural correlate, though as yet unanalyzed, to one of the favourite notional characterizations of difference within the subject-verb relationship: that in which the verb expresses a state and that in which it expresses a process. The difference within the pairs SEE: LOOK AT, HEAR: LISTEN TO, HAVE: GET is matched by the absence versus presence of this structural feature. The analysis of the structural difference observed here presents some interesting problems. The assignment of the difference to the verb, which will be the analysis proposed here, rather than to the subject or even to the object, is not so obvious when we consider the following observations. While it is true that the form CAR appears as grammatical object in both the "doing-something" set (A), (6) and (7), and in set (B), (7), and can also appear as grammatical subject in both types of constructions (e.g. "the car slid into a ditch" and "the car is very fast"), the same holds for the form HEAR (e.g. "the judge is hearing the case" and also "the judge hears a sound"), or HAVE in "the boy is having a big dinner" and "the boy has a lot of money", or FEEL in "They felt the inner surface with their hand" and "The inner surface felt rough". The use of the neutral word "form" to refer to these examples is intentional, for there is structural evidence that the occurrences of the nouns with the form CAR appearing in either construction are still instances of the same lexical item, while the particular verbs in question are to be considered different at the lexical level.
The evidence is the freedom in conjoining diverse constituents with the same lexical item, e.g. "the car that I saw and then bought ..." or "the car that I had and then sold ...", but the impossibility of so telescoping different lexical items which happen to have the same form; i.e. "The judge heard the case" and "The judge heard the crying" cannot be telescoped into "The judge heard the case and the crying" without disproportionate distortion of sense in one or the other. Thus assignment of the feature "doing something" or non-"doing something" to the verb is not arbitrary. And we can even accept difference with respect to this feature as a sufficient condition for considering instances of the same form as different lexical items. Assigning the presence or absence of the feature to the verb, we can describe a structural relation between such pairs as (a) "He is looking at the car" and (b) "What he is doing is looking at the car". Such a statement can be considered as the rules for embedding (a) in some such envelope as "What he did was that": "He looked at the car" + "What he did was that", yielding: "What he did was look at the car". One might well question the arbitrariness of raising to such a crucial position in the description of verbs their behaviour with respect to a construction involving the particular word DO. Why not, for example, rather grant this position to PERFORM or INDULGE IN, which amount to about the same thing? Why not begin with "What he is indulging in is buying clothes"? The reason is that the form of the locution with DO is much more "highly grammaticalized" than is the case with that of INDULGE IN. By "highly grammaticalized" I mean that the form of the construction is not derivable by the regular expansion of some constituent but is dependent to a high degree on special features in the grammatical structure of elements around the construction. With INDULGE IN, the noun phrase BUYING CLOTHES is just a regular object (e.g. "He indulged in buying clothes", or "What he is indulging in is fantasies" with a corresponding "He indulges in fantasies"). With DO, on the other hand, while WHAT and SOMETHING as well as IT and certain other substitute forms are its formal objects, BUYING CLOTHES is not a possible object: "What he is doing is buying clothes" but not "He is doing buying clothes". Furthermore, the agreement in aspect and tense between the DO construction and the verb phrase that follows is not characteristic of other constructions superficially similar to that with DO: "What he is doing is hitting me" or "What he did was hit me" but not *"What he is doing is hit me". It appears that the structure of HITTING ME in these constructions (unlike HITTING ME in "He ...

... "What he did to the book was buy it", "What he did to the sound was listen to it". For reasons similar to those mentioned above for the assignment of the feature "doing-something" to the verbs, we can assign the feature "doing-something-to" to the appropriate sub-class of the former. Motivated by the peculiarities of occurrence mentioned above, this feature provides a possible structural correlate to the notion "object of affect". It is true, however, that the area of hazy borderline cases becomes very large when we attempt to characterize some random example as a "doing-something-to" verb or not one. This great area of indeterminacy is perhaps even more exaggerated in other linguistic structures associated with verb-object differences within the large class of "doing-something" verbs.
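As a rough illustration of the analysis just proposed (and only an illustration: the feature assignments below are a toy lexicon assumed for this sketch, not the paper's), the "doing-something" feature can be carried on verb entries and the DO-locution generated, or blocked, by the embedding rule:

# A sketch of the proposed analysis: the "doing-something" feature is
# assigned to the verb (a lexical item, not a bare form), and the
# DO-locution results from embedding the sentence in the envelope
# "What SUBJECT did was ...". Feature assignments here are assumptions.

DOING_SOMETHING = {"learn", "make", "buy", "look at", "listen to", "get", "strike"}
NOT_DOING_SOMETHING = {"know", "see", "have", "hear"}

def do_locution(subject: str, verb: str, obj: str) -> str | None:
    """Embed 'subject verb obj' in the DO envelope; licensed only for
    verbs bearing the doing-something feature."""
    if verb not in DOING_SOMETHING:
        return None  # e.g. *"What he did was know the answer"
    return f"What {subject} did was {verb} {obj}"

print(do_locution("he", "buy", "a car"))   # What he did was buy a car
print(do_locution("he", "have", "a car"))  # None -- the locution is excluded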
Among the "doing-something-with" verbs are certainly included those in "What he did with it is put it in the drawer". "What he did with it is throw it away", "What he is doing with it is holding it", "What he did with them was hide them", "What he did with the paper was lose it", and (interesting enough) "What he did with the cake was eat it" and "What he did with the milk was drink it". Excluded from this class are probably those in sentences like *"What he did with it was discover it" *"What he did with him was visit him" *"What he did with her is forget her". Similar constructions involve DO SOMETHING ABOUT SOMETHING and DO SOMETHING FOR SOMEONE, but these become extremely general. The large area of indeterminacy, however, need only be indication that this particular type of differentiation does not embrace the whole verbal system. Within the area where the distinctions hold, their explanatory power is considerable, as is the case where they provide a general explanation in terms of some general recurring feature for the difference between "He removed the spot from the table" and "He removed the book from the table", "He shot the arrow" and "He shot the man".In the preceding discussion we have been concerned with differences in the use of verbs in English and in particular in discovering those differences which are of a more general systematic nature. Reference was made to differences in the occurrence of the continuous tense and to the use of present for future. Differences with respect to compatibility with the "doing-something" constructions were presented and discussed at greater length. The former two, however, differ from the latter in being simple grammatical reflexes of the verb categories in question, whereas the constructions with DOING SOMETHING can be thought of as pro-forms. These pro-forms are themselves equivalent to the verbs in question in the sense that they are substitutable for them. They are the grammatical paraphrase, in a sense, of the class of (98026) forms they replace. (The pro-form character of the "doing-something" construction is seen even more clearly in its related form: "He pushed her today and did the same thing to me before"). Similar constructions occur in German and French: "Die Form ist also auch nicht so aufzufassen, wie das dieser Forscher tut." and "Piquez-le comme vous venez de la faire à l'autre". Neither of these languages possess a syntactic correspondent to the English periphrastic ING-form, but on the basis of rough correspondences between the English "doing-something" form and the French and German constructions, general similarities in lexical structure show promise of being described.
null
null
null
null
null
null
null
null
{ "paperhash": [ "harris|transfer_grammar" ], "title": [ "Transfer Grammar" ], "abstract": [ "0. Introduction 1. Defining difference between languages 2. Structural transfer 2.1. Corresponding morpheme classes 2.2. Corresponding morphological structures 3. Phonetic and phonemic similarity 3.1. Phonetic correspondences 3.2. Corresponding phonemic statuses 4. Morphemes and morphophonemes 5. Morphological translatability 5.1. Pairing by translation 5.2. Translation correspondences 5.3. Common grammatical base 5.4. One-way translation correspondences" ], "authors": [ { "name": [ "Z. Harris" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "224808289" ], "intents": [ [] ], "isInfluential": [ false ] }
null
761
0.011827
null
null
null
null
null
null
null
null
38453aafce51385b4cebfad13a4cf5e8db95faff
244077665
null
Mechanised semantic classification
Replacement" is used rather than "substitution" to emphasise the fact that although the element is changed, the ploy is preserved. (98026) 421 * or groups of similar references. # I am excluding the case here of genuinely fortuitous homonyms between wordsigns in the language.
{ "name": [ "Sparck-Jones, Karen" ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
0
0
null
IT is now widely admitted (see, for instance, de Grolier (1)) that a semantic classification will be required for machine translation and information retrieval; and that as mechanised procedures will be carried out on it, it must be detailed, precise, and explicit. This paper is primarily concerned with the construction of such a dictionary, rather than its use, i.e. with applied language analysis as a preliminary for machine translation. Apart from the problem of finding a suitable form of classification, the labour of compiling a dictionary of this kind is very great, and mechanisation of some, if not all, of the drudgery involved is desirable. The need to tackle the whole question has become more urgent, for it has become clear that reasonably high quality machine translation requires a higher standard of dictionary making, and in particular a more detailed, i.e. more realistic, representation of the full range of uses of a word than has hitherto been considered necessary. This is brought out, for example, by the inadequacies of the IBM output which is obtained on a word-for-word basis (2). As a solution to the problem of providing a refined but manipulable classification, the Cambridge Language Research Unit has advocated the use of a thesaurus (3, 4, 5), i.e. a system of conceptual groupings. To construct such a classification, therefore, we must i) give a workable procedure for carrying out the extremely refined linguistic analysis required for a complete treatment of the word-uses of a natural language (this is emphasised by the defects of existing thesauri such as Roget (6));# ii) give criteria for obtaining conceptual groupings from this material. It is clearly desirable that the methods adopted should be as objective as possible. While I do not pretend that the procedure given for carrying out the initial analysis is mechanisable, the subjective element is minimised, and the results are thoroughly suited to machine handling. Once this initial analysis has been made, however, the conceptual groupings are obtained by wholly mechanical means. In the system described below the initial analysis gives classes or "rows" of synonymous word-uses, i.e. word-uses which are mutually replaceable in at least one linguistic context. (For the purposes of the classification the specification of word-uses in terms of their synonymity relations is regarded as adequate.) By using the hypothesis that word-uses with the same sign are in general more alike than those with different signs, second-order classes can be obtained representing concentrations of common signs over sets of rows, i.e. representing semantic closeness in sets of rows, i.e. conceptual groupings. Computer experiments on English are then described. The first object of this investigation is to find a way of defining* a word-use which is both semantically adequate and a suitable basis for further classification; i.e. we are looking for an appropriate form of mechanisable dictionary entry. The simplest approach, i.e. that of going direct to the extra-linguistic reference (at present being studied by M. Masterman), has the disadvantage that difficulties about "the mechanism of reference" immediately arise.δ If, however, we look at the way in which a word is used in a sentence, the referential problems need no longer concern us: for although they ultimately arise when the relation of the whole sentence to its reference is considered, we can, if we assume that the sentence is understood, disregard them. This approach is essentially that of linguistic philosophers such as Austin (9) who show how a word is used by giving examples of the kinds of linguistic contexts in which it can occur. The method as it stands is merely illustrative, and therefore unsatisfactory, because the resulting samples of text cannot themselves be mechanically handled.* I shall show, however, a) that we can make use of this sort of information without having to give it in full, and b) that the relevant facts about the way in which a word is used can be "encoded" in a suitably compact and tractable form. 1. A sentence is a finite sequence of elements (words), bounded by terminal characters, having a property called a ploy (the way in which it is employed). 2. A sentence may have more than one ploy. 3. The same ploy may be common to two or more sentences.# The length of a sentence is the number of elements which it contains. Consider the class S_i of sentences specified as having the ploy P_j. We will assume that this class has more than one member. Consider the sub-class Σ_i of S_i containing all the sentences in S_i having a particular length L_m. We again assume that this class has more than one member. Let σ_i be the sub-class of Σ_i, again of more than one member, such that: 1) the element at a particular position k in each sentence in σ_i differs from that occurring at the corresponding position k in every other sentence in σ_i; 2) the element at every other position in each sentence in σ_i is the same as that occurring at the corresponding position in every other sentence in σ_i. The elements a, b, c, ... occurring at k in the sentences in σ_i will be said to be parallel with respect to k in σ_i. 4. A class of elements which are parallel with respect to some position n in some class σ_n will be called a row. We can thus, for every position n and every class σ_n, obtain a row; for a particular class σ_i we can obtain a row for every position; and for any sub-class of a class σ_i we can obtain a row for each position which will be different from that obtained for the same position by σ_i itself or by any of its other sub-classes. If we make a pairwise comparison between the members of a class σ_i, we can say for each pair that at the position k where the elements differ, the element in one of them has been replaced by the element in the other; the two are otherwise, both formally (i.e. in length etc.) and in ploy, the same. We can plausibly and to practical advantage therefore say that we are dealing with one sentence and a class of elements which can replace one another at a particular position in it without changing its ploy. * Except in the formal system, "definition" is used in the sense of "specification". # Text-scanning has been suggested as a solution to this problem. If treated merely as a device for obtaining examples of word-uses, however, it has to be carried out on a very large scale if adequate coverage is to be obtained; and the resulting material, such as that collected for prepositions by Yngve (7), has still to be classified. The suggestion has also been made that the classification itself may be carried out on the basis of the co-occurrence of words in sentences obtained in this way. But the information required can only be obtained in an even more dilute form than the preceding, and I know of no suggestions for turning this vague idea into a practicable procedure. δ See, for example, Quine (8).
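The row-extraction procedure of Definitions 1-4 is mechanisable in an obvious way. The following Python sketch is mine, not the paper's: the toy sentences are assumed to share a ploy, and the function returns the row defined by a class σ of same-length sentences differing at exactly one position.

# A sketch of row extraction (Defns. 1-4): given a class of same-ploy
# sentences of one length L_m that agree at every position but one, the
# pairwise-distinct elements at the differing position k form a row.

def find_row(sentences: list[list[str]]) -> tuple[int, set[str]] | None:
    """Return (position k, row) if the sentences differ at exactly one
    position and pairwise differ there; otherwise None."""
    lengths = {len(s) for s in sentences}
    if len(lengths) != 1:
        return None                        # rows are defined within one length
    diff = [k for k in range(lengths.pop())
            if len({s[k] for s in sentences}) > 1]
    if len(diff) != 1:
        return None                        # sentences must agree elsewhere
    k = diff[0]
    row = {s[k] for s in sentences}
    if len(row) != len(sentences):         # elements at k must be pairwise distinct
        return None
    return k, row

sigma = [["he", "finished", "the", "task"],
         ["he", "finished", "the", "job"],
         ["he", "finished", "the", "stint"]]
print(find_row(sigma))   # (3, {'task', 'job', 'stint'})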
As the members of a row are thus mutually replaceable, the class of elements constituting a row will, as before, be finite and unordered.* A revised definition of row can now be given: 5. A finite class of elements will be called a row if its members are mutually replaceable with respect to a position n in a sentence s_n. We have so far used the expressions "element" and "word". The aim of the system, however, is to deal with word-uses, not with words, and it is also clear that in starting from sentences we are in fact concerned with word-uses and not words; in the classification, moreover, individual word-uses are treated as separate units. If, therefore, we are to give meaning to "use of a word", which in the introduction we loosely equated with "word-use", we must define "word" in terms of word-uses. A sentence was defined as a ployed sequence of words. We should more strictly have said "sequence of word-signs representing word-uses"; i.e., a word-sign represents a word-use because it occurs in a ployed sentence. Our basic assumption that words are best defined in terms of their uses means that the most appropriate definition of a word will be as the class of its uses, i.e. as the class of uses with the same sign. We now formally define "word-use" and "word" as follows: 6. A word-use is the occurrence of a word-sign in a (ployed) sentence. The data for the experiments was obtained from the Oxford English Dictionary. From the point of view of obtaining reliable results from the experiments, the problem was that of giving a set of rows which would be both a fair sample linguistically and small enough for reasonably efficient computing. It was decided that the best solution to the linguistic difficulty was as follows: a small number (approximately 20) of words, some with a wide range of uses, some with a narrow one, but each having uses in common with some of the others, was selected; a set of rows for the whole range of uses of each of these was then worked out, as in the example given below. The total set obtained therefore included a number of heavily overlapping rows, others having only one word in common, and some with no common elements. (No completely independent rows were included.) The rows could not always be "lifted" straight from the O.E.D., as will be seen from the example below; some knowledgeable interpretation on the part of the dictionary maker was required, and the result can therefore be criticised on this ground. But it can be seen that the rows obtained are unlikely to be wrong, though they may be inadequate; and more rows can be inserted if required. The important point is that if the Dictionary is accepted as a "concentrate" of English texts, the results obtained can reasonably be regarded as having a proper empirical basis. The following is a sample "transformation": OED 3. In more general sense: Any piece of work that has to be done; something that one has to do (usually involving labour or difficulty); a matter of difficulty, a "piece of work". Froude's History of England: "He had taken upon himself a task beyond the ordinary strength of man". (They do not present any real theoretical problems.) The very misleading descriptions "arch." and "obs." were disregarded. In this example the OED entries were not very row-like; the best example is "impost, tax" under 1.
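On this construal a dictionary entry can be encoded directly. What follows is a minimal sketch of my own, with rows invented loosely after the TASK material above and the CIVIL example below: a row is an unordered set of mutually replaceable word-uses, a word is the class of rows containing its sign, and each use is specified by the remaining members of its row.

# A sketch of rows as dictionary entries (Defns. 5-6). The rows here
# are illustrative assumptions, not the experimental data set.

ROWS = [
    frozenset({"task", "impost", "tax"}),
    frozenset({"task", "job", "stint", "chore"}),
    frozenset({"task", "labour", "toil"}),
    frozenset({"civil", "humane", "gentle", "kind"}),
]

def word(sign: str) -> list[frozenset]:
    """A word, defined as the class of rows (uses) containing its sign."""
    return [r for r in ROWS if sign in r]

def specify(sign: str, row: frozenset) -> set[str]:
    """A word-use is specified by the remaining members of its row."""
    return set(row) - {sign}

for r in word("task"):
    print(sorted(specify("task", r)))
# Prints the three distinct uses of TASK, each given by its synonyms.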
In those cases where some interpretation has been required, it must be remembered that information about other words can legitimately, and indeed should, be used: for a row defines all its members equally, and although the ones given have been listed with TASK first for convenience, they could be given with their members in any order. These rows could be regarded as satisfactory in that the degree of refinement was uniform, that the uses of some words were exhaustively classified, and that the interconnection between the rows over the whole set could be taken as representative. For the experiments a subset of 180 having the same properties as the initial set was selected. (It is intended, as a control, to use more than one subset for the experiments; these will differ in, for example, degree of "inbreeding", average length of rows, etc.) Thus a sentence (cf. Defn. 1) is both a sequence of word-signs and a sequence of word-uses, and a row is both a class of word-uses and a class of word-signs (cf. Defn. 5). We say that we can define a word-use by listing synonymous uses*; this gives us, as required, a definition which is obtained intra-linguistically and which is unstructured, concise, and complete. For although the system is developed in terms of classes of uses, it is clear that, as the members of a row are equally synonymous, each member is specified by the class of remaining members. As definitions in proper form of individual uses can thus always be given, there is no harm in taking the classes as our units when further classification is required, particularly as the practical advantages of doing this are obvious. 2) The fact that we are dealing with word-uses and not words means that we can construct a classification based on synonymity which is nevertheless far more flexible and far more realistic than the usual logicians' total synonymity will allow#. For it must be emphasised that although the replacement criterion is extremely strict, it need only hold in one case, and its range is therefore extremely limited. We can moreover obtain empirical support for the assertion that our approach is a satisfactory one by reference to standard dictionaries: the entries in the (large) Oxford English Dictionary, for example, often consist of sets of synonyms or near-synonyms which are very like our rows; a typical instance is "CIVIL: humane, gentle, kind". (Other entries can without significant loss of information be reduced to this form: thus, for example, "CLOD: a coherent mass or lump of any solid matter, e.g. of earth, loam, etc." would become "CLOD: mass, lump".*) In actually constructing a classification for English as described in SECTION III, therefore, we can plausibly make use of the Dictionary, either taking the entries as they stand, or using them as the basis of a still more elaborate classification. The fact that we can thus utilise, in the most straightforward way, the very detailed and highly documented information contained in the O.E.D. is important; for although it has frequently been observed that the Dictionary is a valuable source of linguistic information, no suggestions have hitherto been made as to how to encode this material in mechanisable form. By applying the procedure described above we can, in principle, obtain a row for every position in every sentence. Although we may not go to this extreme, and although we do not distinguish identical rows derived from different sources#, it is clear that carrying through an analysis of this kind on a large scale will result in the creation of a very great number of rows. (There will clearly be far more rows than words.) However, as our object at this stage is adequate definition and distinction, the degree of refinement represented by the procedure is an advantage; for the multiplicity of rows directly reflects the multiplicity of distinctions made in the language, and if high-quality machine translation is to be achieved, we cannot afford to ignore such a basic feature of language. Nevertheless, if the classification so far constructed is to be really useful, we must derive from these first-order classes a much smaller number of second-order classes; and the latter must, if the system is to be thesauric in character, in some sense represent conceptual groupings. (By conceptual groupings we mean, to put it crudely, groups of rows which refer to similar extra-linguistic situations.δψ) These classes must, moreover, be obtainable intra-linguistically by objective and mechanisable means, or the first-stage restrictions on subjectivity and intuition will be wasted. * In assuming that it will be clear whether two or more uses are synonymous, i.e. that on replacement the ploy of the sentence remains unchanged, we can only rely on the linguistic judgement of the dictionary maker. This may seem inadequate, but we can argue that a subjective element must enter all lexicography at some point, and that here the point at which it enters is carefully defined, and the scope which it is allowed is extremely limited. # The logicians' interpretation of synonymity as "a can always be substituted for b" (10) is connected with discussions of logical truth, analyticity, etc., and has therefore a specialised purpose. It must be pointed out, however, that these discussions make use of examples from ordinary language, where synonymity in this sense is rare, and are to this extent dangerous. The real nature of synonymity in natural languages is recognised, on the other hand, by A. Naess (11); he allows synonymity between two word-uses each of which occurs only once. He is mainly concerned, however, with setting up procedures for testing synonymity in particular cases, and makes no attempt to base a general classification on the "synonymity-facts" which he finds. * Some O.E.D. definitions are "irreducible" descriptions of the Aristotelian type; in our system this means that the words are not replaceable, i.e. are undefined. Such words, or "technical terms", do not, however, represent a breakdown in the system: for they are intended to be, to an unusual extent, precise in reference and unambiguous in use, and synonyms are excluded to avoid the possibility of confusion, or because they would be redundant. Technical terms cannot in fact be adequately handled in any purely intra-linguistic classification, and must be given special treatment. It should be noted, on the other hand, that, in contrast to "ordinary" words, they rarely present problems in translation. # Rows which are sub-rows of other rows are kept separate. δ They need not be, and almost certainly will not be, mutually exclusive. ψ For formalisation of the notion of extra-linguistic situations see Masterman (12).
the recurrence of common signs: and from this we cannot, on the face of it, deduce anything about the semantic relations of the rows or even, indeed, of the semantic relations between the uses of a word. However, if we look at a collection of rows, groups which overlap in containing common signs nevertheless strike one as representing conceptual groupings. We shall suggest that this is a consequence of the fundamental fact that, in a language, there is a finite number of signs for a much larger, and constantly expanding, set of situations, and that if this were not so, effective communication in ordinary circumstances would be impossible.

In a given sentence-position the members of a row are, by definition, mutually replaceable: i.e. there is a choice among the different members of the row. We shall say that this choice is one between different signs for a particular "Word-use". The point of this interpretation is that the Word-use is determined by the relevant extra-linguistic situation, although the choice of signs is not. The Word-use, therefore, in contrast to the signs, is genuinely interlingual, and, when we communicate, is what we want to get across. Thus the second-order classes we require will be genuinely interlingual classes of Word-uses.

The reasons why we can derive conceptual groupings from overlapping signs are best understood if we first consider what happens in other kinds of language. In a language in which a Word-use is represented by a single arbitrary sign, such as a technical language, or a code like the International Code of Signals, there is no intra-linguistic information, not even recurrent signs, on which to base conceptual groupings. The latter can only be obtained by considering the situations to which the signs refer, i.e. by subjective and extra-linguistic means. A conceptual grouping, moreover, can only be specified by listing its members: there is no intra-linguistic aid to remembering the relations between them. A language like this is indeed worthwhile only where unambiguity is more important than convenience, and only usable if it is comparatively small and used in well-defined circumstances.

Given such a language with a much larger number of situation-references, it is clear that conceptual groupings can only be handled if an economy in the number of signs is somehow effected. To achieve this economy we might:
i) use the same sign for very distinct Word-uses. As the latter are not semantically related, however, we can only interpret the sign by listing the uses. The economy is not, therefore, a very helpful one. Moreover, conceptual groupings can only be obtained, as before, by going outside the language.
ii) use the same sign for similar Word-uses (any ambiguity is thus almost harmless); i.e. we can treat a sign as a "shorthand" for a set of similar references. We could also use this information to pick up conceptual groupings, for we know that the members of a set of Word-uses with one sign are semantically related. (The groupings themselves will be easier to handle, for the number of signs will be smaller than the number of references.) The extent to which we can build up conceptual groupings in this intra-linguistic way is, however, limited: for we can only group sets of Word-uses by considering the relations between the corresponding sets of extra-linguistic situations.
Moreover, Word-uses with the same sign can only be distinguished by external reference.

Bearing in mind these points about languages in which there is only one sign for a Word-use, we can now consider a language of the kind dealt with by our primary classification, in which Word-uses correspond to classes of word-uses, i.e. in which there are both several signs for each situation-reference and also a very large number of situation-references. We can clearly argue that, since such a system represents a natural language, and such a language must, if it is not to be unusable, economise on signs, each word is a shorthand for word-uses with similar references*: i.e. we make the Fundamental Assumption that it is in general true that word-uses represented by the same signs are semantically close. The fact that a particular sign is used for certain word-uses is thus not arbitrary, and we can give a semantic interpretation to the definition of "word".#

* or groups of similar references.
# I am excluding here the case of genuinely fortuitous homonyms between word-signs in the language.

This situation is clearly like the one described above, in which we used one sign for several similar Word-uses. That was, however, unsatisfactory: firstly, because Word-uses with the same sign could not be distinguished intra-linguistically; and secondly, because there were no intra-linguistic connections between the sets of Word-uses, although the sets themselves could be intra-linguistically obtained. In contrast, the system represented by our primary classification does not suffer from these disadvantages. For if, as we have assumed is both possible and normal in natural languages, we specify word-uses by others, the uses of a particular word are distinguished by the differences in membership of the rows in which they occur; i.e. the distinctive character of a word is represented by the particular class of Word-uses (rows) into which its uses fall, and each of these Word-uses is specified by the particular class of word-uses which make up the row. Moreover, the fact that we are dealing with combinations of word-uses makes it possible to specify likeness between Word-uses, and therefore to obtain conceptual groupings, by wholly intra-linguistic means: for as the members of a row are by definition synonymous, i.e. semantically the same, and as each word-use in a row is connected through its sign to other uses which are by our Assumption semantically similar, we can pick up semantic connections between a row and others which do not all contain the same sign. We are thus not limited to the class of Word-uses with a particular sign, but can link a Word-use with different signs to the different classes of Word-uses associated with each sign in the original; i.e. from a Word-use with sign a we can only go to others with a, but if we start with an a and b, we can go to others with a and others with b.

It is clear, however, that semantic connections depending on one sign alone will not be strong enough to give us very satisfactory conceptual groupings: for although we have assumed that the need for economy forces us to use the same sign for similar word-uses (I will call this the Economy Device), we cannot deduce from this anything very definite about the degree of similarity between the uses. We know, at most, that in general these uses will be more alike than those represented by different signs.
In classifying rows on this basis, therefore, we can only infer that rows linked by the same sign are more likely to refer to similar situations than those without any common signs; and if the connection (provided it exists at all) between pairs of rows in a potential group is of this weak kind, the group as a whole will not be a very "coherent" one.

But although the Economy Device is in any particular case a somewhat weak semantic tool, if it is generalised we can use it to better advantage: for we can draw the conclusion that the greater the proportion of common signs, the more alike two Word-uses will be; i.e. that if a, b, c and d, the members of row A, are synonymous, and a, b, c and e, the members of row B, are synonymous, and a qua member of A is probably like a qua member of B, b qua member of A probably like b qua member of B, and c qua member of A probably like c qua member of B, this strongly suggests that although d and e are different, the Word-uses of A and B are very similar. We are thus saying that although it may be an accident that one sign occurs in each of two rows, it can hardly be an accident that several do. (This argument is reinforced by common sense.)

By using these multiple overlaps, therefore, it is clear that we can obtain genuine conceptual groupings, and, moreover, by wholly intra-linguistic and mechanisable means, i.e. by operations on the signs alone. For the general conclusion about the similarity of pairs of rows can be used as the starting point from which definitions of similarity over sets of rows can be developed. In order to carry out concrete experiments on these lines we thus require:
i) a precise measure of the similarity of a pair of rows;
ii) a precise criterion of the degree of similarity which must hold over a set of rows if it is to be regarded as a conceptual grouping.

A large number of alternative measures and criteria can be constructed. The measures and criteria actually used in the experiments described below were chosen because programmes based on them already existed. They are taken from work on classification called the theory of clumps by A.F. Parker-Rhodes and R.M. Needham, and will only be described in sufficient detail to make the experiments clear. For further information see the Cambridge Language Research Unit progress reports by Parker-Rhodes and Needham.

The similarity function for a pair of rows was:

S = (number of word-signs in common) / (total number of different word-signs)

This definition is due to T. T. Tanimoto (13). The experiments are still in progress and only tentative conclusions can be drawn. The grouping or "clump" criteria were:
i) B-Clump: A set C [the text breaks off here]
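As an illustration of the linkage through shared signs described in this section, here is a minimal Python sketch; it is not the CLRU programme, and the rows and their membership are invented. An inverted index over word-signs lets us walk from a row to every row sharing at least one sign with it.

```python
# Sketch of the row/sign duality: rows are sets of word-signs, and
# an inverted index maps each sign to the rows containing it.
from collections import defaultdict

rows = {                                   # invented illustrative rows
    1: {"act", "doing"},
    2: {"act", "performance", "working", "operation"},
    3: {"task", "labour", "toil"},
    4: {"operation", "task", "undertaking"},
}

sign_index = defaultdict(set)              # word-sign -> rows containing it
for row_id, signs in rows.items():
    for sign in signs:
        sign_index[sign].add(row_id)

def linked_rows(row_id):
    """Rows reachable from row_id through at least one common sign."""
    return {other for sign in rows[row_id]
                  for other in sign_index[sign] if other != row_id}

print(linked_rows(2))   # {1, 4}
```

From row 2 we reach row 1 through 'act' and row 4 through 'operation', but not row 3, which has no sign in common with it: a row with several signs opens connections in several directions, as the text argues.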
null
null
null
The experiments were carried out on EDSAC II, the Cambridge University Mathematical Laboratory Computer, as part of the research into the theory of clumps. It is expected that with present techniques experiments can be carried out on up to 1000 rows; work is in progress on more powerful methods for handling larger quantities of data. A similarity matrix using the function given was computed. For example, with row 1 = ACT, DOING and row 2 = ACT, PERFORMANCE, WORKING, OPERATION, the entry S12 would be 1/5.

The order of the criteria corresponds to the difficulty of finding groups which satisfy them. B-Clumps are mutually exclusive, and though Kuhns' Clumps are not exclusive, there are so many of them that they may not effect any reduction in the data (that is, there may be more of them than there are rows). GR-Clumps do not appear to suffer from these defects; but they cannot at the moment be found in a large set without a lead on where to look. B-Clumps and Kuhns' Clumps, which can be so used, are also, in such a new field of classification, interesting in themselves.

A search was made for B-Clumps with thresholds .062 (.062) .496.* At the last point many of the individual rows were isolated. There were also 8 small groups, 1 large one, and approximately 70 single rows. This was not very satisfactory; since all similarities, except that of a row to itself, are less than 1, it is obvious that the total set must break up as the threshold is increased, eventually into single elements. Because there is no a priori way of determining a suitable threshold, B-Clumps can only be regarded as significant if they all appear together at a particular increase in the threshold. This condition did not hold for the clumps found. Thus, although the clumps found looked fairly sensible, there was no indication of whether they were the only ones which could have been found. One would only expect to find B-Clumps with material of this kind if, to take an extreme example, it consisted of sets of rows dealing with subjects as disparate as nuclear physics and Egyptology.

* i.e. roughly 1/16 (1/16) 1/2; the step was slightly diminished for computing reasons.

A search was made for Kuhns' Clumps with various thresholds: .2, .25, .3 and .34. The last threshold was very high, as it excluded any clump containing more than one two-member row, and in fact gave very few and rather small clumps. The clumps obtained for .25 appeared sensible: a number of these were what could be described as various versions of essentially the same clump. There was a very large number of clumps.

The reasons for proceeding from these two kinds of clump to GR-Clumps will now be apparent:
i) both B- and Kuhns' Clumps depend heavily on a threshold which cannot be other than arbitrary;
ii) there were too many Kuhns' Clumps.

As mentioned above, however, a lead is required for finding GR-Clumps, and this could be provided by using the larger Kuhns' Clumps as "seeds". In this experiment 7 "seeds" were used, giving rise to 4 quite different satisfactorily large clumps (4 of the seeds led to exactly the same clump). All but one of them proved to consist of rows all containing the same one common word, though not necessarily all the rows containing this word: this is a natural consequence of using a small sample, and of using a sample which was obtained in the way described (i.e. starting with a small number of words and finding all their rows): for a large number of rows will not be "pulled" in any other direction, because their remaining members do not occur elsewhere.
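The similarity computation above can be sketched in Python as follows. The similarity function is the one given in the text; since the B-Clump definition is truncated in this copy, the grouping step is only a labelled stand-in which treats a clump, purely for illustration, as a connected component of the graph joining pairs of rows whose similarity exceeds the threshold. The third row is invented.

```python
# Sketch of the similarity matrix and a threshold-based grouping.
import itertools

rows = [
    {"act", "doing"},                                  # row 1 of the text
    {"act", "performance", "working", "operation"},    # row 2 of the text
    {"performance", "execution"},                      # invented row 3
]

def similarity(a, b):
    """Common word-signs over total distinct word-signs (Tanimoto)."""
    return len(a & b) / len(a | b)

print(similarity(rows[0], rows[1]))   # 0.2, i.e. the S12 = 1/5 of the text

def components(rows, threshold):
    """Group rows linked, directly or transitively, by S > threshold."""
    parent = list(range(len(rows)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, j in itertools.combinations(range(len(rows)), 2):
        if similarity(rows[i], rows[j]) > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(rows)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

print(components(rows, 0.062))        # [[0, 1, 2]] at a low threshold
```

Raising the threshold makes the single group break up, eventually into single elements, which is the behaviour the B-Clump search reported.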
Granted that the only tests at present available for whether mechanically generated clumps are "correct" are intuitive ones, the results of the experiments were satisfactory. We have thus shown that by using mechanical aids it may be possible to obtain, in a precise and self-consistent way, the kinds of semantic classification required for machine translation and information retrieval. It is thought that present techniques will be suitable for finding clumps in systems of up to 1000 items, and much larger experiments will accordingly be carried out as soon as possible. (As noted above, a variety of different samples will be used in these experiments.)

As a practical matter, it is much easier to find not clumps of rows based on common words, but clumps of words based on common rows. There is a clear duality between the two procedures: i.e. they will extract the same information. If this alternative approach were adopted, a different definition of similarity would perhaps be more natural than the present one. Suppose we take the ratio:

(number of rows containing a pair of elements a, b) / (number of rows containing a)

This is clearly the conditional probability that, given that a word a is appropriate in a particular sentence-position, we could replace it by b. It is unsuitable as it stands because it is asymmetrical, but we may conveniently substitute, as the similarity of a and b, the geometric mean of the two conditional probabilities:

(number of rows containing both a and b) / √((number of rows containing a) × (number of rows containing b))

Although experience suggests that the results obtained are not heavily influenced by the choice (within reason) of similarity function, a function such as the one just given, which has an obvious interpretation in the system to which it is to be applied, will clearly be more suitable. Further investigation of this question will be one line of future work.
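A minimal sketch of this alternative word-similarity, using an invented set of rows: n counts the rows containing all of its arguments, and the similarity is the geometric mean of the two conditional replacement probabilities just described.

```python
# Sketch of the word-to-word similarity over shared rows.
from math import sqrt

rows = [                                   # invented illustrative rows
    {"act", "doing"},
    {"act", "performance", "working", "operation"},
    {"act", "performance", "execution"},
]

def n(*words):
    """Number of rows containing all the given words."""
    return sum(1 for row in rows if set(words) <= row)

def word_similarity(a, b):
    """Geometric mean of n(a,b)/n(a) and n(a,b)/n(b)."""
    return n(a, b) / sqrt(n(a) * n(b))

print(word_similarity("act", "performance"))   # 2 / sqrt(3 * 2), about 0.816
```

Unlike the asymmetrical conditional probability, this function gives the same value whichever of the two words is taken first.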
Main paper: primary classification: The first object of this investigation is to find a way of defining* a word-use which is both semantically adequate and a suitable basis for further classification; i.e. we are looking for an appropriate form of mechanisable dictionary entry.# The simplest approach, i.e. that of going direct to the extra-linguistic reference (at present being studied by M. Masterman), has the disadvantage that difficulties about "the mechanism of reference" immediately arise.δ If, however, we look at the way in which a word is used in a sentence, the referential problems need no longer concern us: for although they ultimately arise when the relation of the whole sentence to its reference is considered, we can, if we assume that the sentence is understood, disregard them.

* Except in the formal system, "definition" is used in the sense of "specification".
# Text-scanning has been suggested as a solution to this problem. If treated merely as a device for obtaining examples of word-uses, however, it has to be carried out on a very large scale if adequate coverage is to be obtained; and the resulting material, such as that collected for prepositions by Yngve (7), has still to be classified. The suggestion has also been made that the classification itself may be carried out on the basis of the co-occurrence of words in sentences obtained in this way. But the information required can only be obtained in an even more dilute form than the preceding, and I know of no suggestions for turning this vague idea into a practicable procedure.
δ See, for example, Quine (8).

This approach is essentially that of linguistic philosophers such as Austin (9), who show how a word is used by giving examples of the kinds of linguistic contexts in which it can occur. The method as it stands is merely illustrative, and therefore unsatisfactory, because the resulting samples of text cannot themselves be mechanically handled.* I shall show, however, a) that we can make use of this sort of information without having to give it in full, and b) that the relevant facts about the way in which a word is used can be "encoded" in a suitably compact and tractable form.

* This is also true of Aristotelian definitions in which the extra-linguistic reference is described.

1. A sentence is a finite sequence of elements (words), bounded by terminal characters, having a property called a ploy (the way in which it is employed).
2. A sentence may have more than one ploy.
3. The same ploy may be common to two or more sentences.#

# For practical reasons we shall consider written texts only.

The length of a sentence is the number of elements which it contains. Consider the class Si of sentences specified as having the ploy Pj. We will assume that this class has more than one member. Consider the sub-class Σi of Si containing all the sentences in Si having a particular length Lm. We again assume that this class has more than one member. Let σi be the sub-class of Σi, again of more than one member, such that:
1) the element at a particular position k in each sentence in σi differs from that occurring at the corresponding position k in every other sentence in σi;
2) the element at every other position in each sentence in σi is the same as that occurring at the corresponding position in every other sentence in σi.

The elements a, b, c, ... occurring at k in the sentences in σi will be said to be parallel with respect to k in σi.
4. A class of elements which are parallel with respect to some position n in some class σn will be called a row.

We can thus, for every position n and every class σn, obtain a row; for a particular class σi we can obtain a row for every position; and for any sub-class of a class σi we can obtain a row for each position which will be different from that obtained for the same position by σi itself or by any of its other sub-classes. If we make a pairwise comparison between the members of a class σi, we can say for each pair that at the position k where the elements differ, the element in one of them has been replaced by the element in the other; the two are otherwise, both formally (i.e. in length etc.) and in ploy, the same. We can plausibly, and to practical advantage, therefore say that we are dealing with one sentence and a class of elements which can replace one another at a particular position in it without changing its ploy. As the members of a row are thus mutually replaceable, the class of elements constituting a row will, as before, be finite and unordered.*

* "Replacement" is used rather than "substitution" to emphasise the fact that although the element is changed, the ploy is preserved.

A revised definition of row can now be given:
5. A finite class of elements will be called a row if its members are mutually replaceable with respect to a position n in a sentence sn.

We have so far used the expressions "element" and "word". The aim of the system, however, is to deal with word-uses, not with words, and it is also clear that in starting from sentences we are in fact concerned with word-uses and not words; in the classification, moreover, individual word-uses are treated as separate units. If, therefore, we are to give meaning to "use of a word", which in the introduction we loosely equated with "word-use", we must define "word" in terms of word-uses. A sentence was defined as a ployed sequence of words. We should more strictly have said "sequence of word-signs representing word-uses"; i.e., a word-sign represents a word-use because it occurs in a ployed sentence. Our basic assumption that words are best defined in terms of their uses means that the most appropriate definition of a word will be as the class of its uses, i.e. as the class of uses with the same sign. We now formally define "word-use" and "word" as follows:
6. A word-use is the occurrence of a word-sign in a (ployed) sentence.
7. A word is the class of occurrences of one word-sign.

The data for the experiments was obtained from the Oxford English Dictionary. From the point of view of obtaining reliable results from the experiments, the problem was that of giving a set of rows which would be both a fair sample linguistically and small enough for reasonably efficient computing. It was decided that the best solution to the linguistic difficulty was as follows: a small number (approximately 20) of words, some with a wide range of uses, some with a narrow one, but each having uses in common with some of the others, was selected; a set of rows for the whole range of uses of each of these was then worked out as in the example given below. The total set obtained therefore included a number of heavily overlapping rows, others having only one word in common, and some with no common elements. (No completely independent rows were included.) The rows could not always be "lifted" straight from the O.E.D., as will be seen from the example below; some knowledgeable interpretation on the part of the dictionary maker was required, and the result can therefore be criticised on this ground. But it can be seen that the rows obtained are unlikely to be wrong, though they may be inadequate: and more rows can be inserted if required.
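The extraction of rows licensed by Definitions 3-5 can be sketched as follows; the sentences are invented stand-ins for a class σi of same-ploy, same-length sentences, and the sketch merely collects the elements that are parallel with respect to each position.

```python
# Sketch of row extraction: given sentences assumed to share a ploy,
# yield a row for each position at which they all differ while
# agreeing everywhere else (Defns. 3-5).
def rows_from_sentences(sentences):
    length = len(sentences[0])
    if any(len(s) != length for s in sentences):
        return                          # Defn. 3 requires equal length
    for k in range(length):
        same_elsewhere = all(
            s[i] == sentences[0][i]
            for s in sentences for i in range(length) if i != k)
        parallel = {s[k] for s in sentences}
        # pairwise-distinct elements at k: as many values as sentences
        if same_elsewhere and len(parallel) == len(sentences):
            yield k, parallel           # a row

sentences = [                           # invented same-ploy sentences
    ("he", "took", "on", "a", "hard", "task"),
    ("he", "took", "on", "a", "hard", "undertaking"),
    ("he", "took", "on", "a", "hard", "job"),
]
print(dict(rows_from_sentences(sentences)))
# {5: {'task', 'undertaking', 'job'}}
```

The mutual replaceability of the members at position 5 is exactly what Definition 5 requires of a row.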
The important point is that if the Dictionary is accepted as a "concentrate" of English texts, the results obtained can reasonably be regarded as having a proper empirical basis. The following is a sample "transformation":

OED 3. In more general sense: Any piece of work that has to be done; something that one has to do (usually involving labour or difficulty); a matter of difficulty, a "piece of work". Froude's History of England: "He had taken upon himself a task beyond the ordinary strength of man".

[The rows derived from the entries are missing in the source.] (They do not present any real theoretical problems.) The very misleading descriptions "arch. and obs." were disregarded. In this example the OED entries were not very row-like: the best example is "impost, tax" under 1.
introduction: It is now widely admitted (see, for instance, de Grolier (1)) that a semantic classification will be required for machine translation and information retrieval, and that, as mechanised procedures will be carried out on it, it must be detailed, precise, and explicit. This paper is primarily concerned with the construction of such a dictionary, rather than its use, i.e. with applied language analysis as a preliminary for machine translation.

Apart from the problem of finding a suitable form of classification, the labour of compiling a dictionary of this kind is very great, and mechanisation of some, if not all, of the drudgery involved is desirable. The need to tackle the whole question has become more urgent, for it has become clear that reasonably high-quality machine translation requires a higher standard of dictionary making, and in particular a more detailed, i.e.
more realistic, representation of the full range of uses of a word than has hitherto been considered necessary. This is brought out, for example, by the inadequacies of the IBM output which is obtained on a word-for-word basis (2).

As a solution to the problem of providing a refined but manipulable classification the Cambridge Language Research Unit has advocated the use of a thesaurus (3, 4, 5), i.e. a system of conceptual groupings. To construct such a classification, therefore, we must:
i) give a workable procedure for carrying out the extremely refined linguistic analysis required for a complete treatment of the word-uses of a natural language (this is emphasised by the defects of existing thesauri such as Roget (6));
ii) give criteria for obtaining conceptual groupings from this material.

It is clearly desirable that the methods adopted should be as objective as possible. While I do not pretend that the procedure given for carrying out the initial analysis is mechanisable, the subjective element is minimised, and the results are thoroughly suited to machine handling. Once this initial analysis has been made, however, the conceptual groupings are obtained by wholly mechanical means.

In the system described below the initial analysis gives classes or "rows" of synonymous word-uses, i.e. word-uses which are mutually replaceable in at least one linguistic context. (For the purposes of the classification the specification of word-uses in terms of their synonymity relations is regarded as adequate.) By using the hypothesis that word-uses with the same sign are in general more alike than those with different signs, second-order classes can be obtained representing concentrations of common signs over sets of rows, i.e. representing semantic closeness in sets of rows, i.e. conceptual groupings. Computer experiments on English are then described.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
761
0
null
null
null
null
null
null
null
null
988e42e833cc53b78b1839a7a310c36140b97a0d
244077656
null
A new model of syntactic description
PREFACE. This paper expounds the lattice theory of syntax developed by the Cambridge Language Research Unit during the last five years. The idea that the conceptual apparatus of lattice theory could be put to use in the description of linguistic phenomena originated with M. Masterman in 1956, and was first put before the public in a paper by A.F. Parker-Rhodes, read at a Machine Translation colloquium organized by M.I.T. in Dedham, Mass. in that year. At this stage, the device described here as the "meet algorithm" was the only part of the theory that had been clearly formulated; but the later developments were already in some measure foreseen. Since that occasion, this is the first formal presentation of the theory to be published. A fuller account of the theory, together with a detailed account of the program based on it, is to be brought out shortly, in the following four parts:
I. The Lattice Properties of Syntactic Relations in an Open Language (A.F. Parker-Rhodes)
II. Derivation of Syntactic Relations from a Lattice Model (A.F. Parker-Rhodes, M. Masterman)
III. Relation between the Theory and its Application to Syntax Analysis Programming (K.S. Jones)
IV. The CLRU Syntax Analysis Program
{ "name": [ "Parker-Rhodes, A. F." ], "affiliation": [ null ] }
null
null
Proceedings of the International Conference on Machine Translation and Applied Language Analysis
1961-09-01
0
5
null
This paper describes briefly a new model of grammatical description, devised originally with the purpose of providing a better tool for the machine processing of language material. Particular attention has been given to the advantages likely to accrue, for this purpose, from exploiting to the full whatever features could be found in common between all languages. The need to devise a new model became apparent when it was found how little attention had been given in the past to this point.

It seems that previous models of grammatical description fall into four main classes. The oldest of these, which has been called by Hockett (6) the "Word-and-Paradigm" or WP model, originated in antiquity, and is well adapted to the description of inflected languages like Sanskrit, Greek and Latin. It is, however, despite Robins' (3) recent reconsideration, far too limited in scope for our purposes. The next, the "Item-and-Process" or IP model in Hockett's terminology, works with the notion of items (words or short phrases) being modified by various processes (suffixation, vowel-change, root-replacement, etc.) to produce all the various forms of the language. This model was first clearly systematized by Sapir (9); it is more adaptable than the WP model, but still not sufficiently general. The "Item-and-Arrangement" or IA model was evolved by descriptive linguists; it aims to describe the whole grammar of a language in terms of lists of items and of the ways in which they can be arranged (i.e. constructions). This model lends itself better than the previous models to expressing the basic hierarchical structure of sentences, first recognised clearly by Husserl (7), and is somewhat easier to formulate mathematically; but it runs into numerous difficulties which have led to the formulation of yet another type of model. This is the one originated by Harris (4) and greatly strengthened by Chomsky (2); we may call it the Kernel-and-Transformation or KT model. It takes as its starting point a number of simple standard sentence forms, called "kernels", and seeks to derive every possible correct sentence in the language by developing these kernels through a mechanism of substitution of their components by other kernels. This model has a number of advantages, notably in the description of what I here call interrupted substituents, but it is very refractory to mathematical formulation. It has received a more extensive application to problems of handling language material and mechanization of language processes than the others. This work is especially associated with the University of Pennsylvania, where it has been ingeniously used by Hiz (5) and by Kaufman (8). Unfortunately the great complexity produced by these efforts, even though they have been confined to the description of a single language (English), casts some doubt on the effectiveness of the KT model for our purposes.

The new model which I propose here, for the purpose of meeting the needs of machine translation better than previous models have done, will be set out so far as possible in an axiomatic manner, in order to emphasize its internal structure. The task of demonstrating in detail its application to the description of actual languages lies outside the scope of this paper.
Evidence that it is so applicable comes from two sources: first, the operation of machine programs embodying ideas drawn from the model for the syntactic analysis of texts; and second, descriptions of various particular languages capable of being compared with each other and with more conventional descriptions. Evidence of both sorts is planned for publication in due course; here, I shall confine myself to exposition alone.

First, I shall define an operation called "replacement" by which parts of utterances may be substituted by other parts: this does no more than re-state familiar ideas. Second, I shall use this operation to derive a rigorous definition of grammatical function (in a partly mathematical context this term, unfortunately, is too liable to be misunderstood, and must be replaced: I use the term "paradigm", in an analogically extended sense, for this purpose). Third, I show that the set of all possible paradigms (functions) constitutes a well-defined mathematical system, namely, a lattice; this makes possible major simplifications in the description of syntactic phenomena. Fourth, I shall use the conceptual apparatus to hand to circumscribe the possible diversity of syntactic forms observable in any language, and thereby show how a uniform system of categories can be applied to all languages. Lastly, I shall discuss how the ideas developed can be applied to the mechanical programming of syntactic analysis.

We consider a closed language as being a closed corpus consisting of a set of utterances; each utterance is a sequence of signs having a beginning and an end. The signs in any such sequence are understood to have a unique simple ordering. Each sign may be a written letter or ideograph, or a sound; there are thus various possibilities for the realization of the signs, and in some realizations it may be necessary to resort to special conventions in order that they may be unambiguously assigned a simple ordering; this, however, is a matter which at the present level of discourse need not be pursued in detail.

Any subset of the signs constituting an utterance, presented in the same order in which they occur in this utterance, is called a segment. If S is a segment of an utterance U, and if between the first and the last sign included in S every sign in U is also a sign in S, then S is said to be an uninterrupted segment; otherwise, S is interrupted. We shall have occasion to use the notion of a zero segment, that is, one consisting of no signs; just as the empty set, in set theory, is understood to be a subset of every set, so we shall admit the presence of an empty sub-segment in every other segment. In all the statements which we shall make about segments, the possibility that a zero segment may be referred to should be borne in mind.

If an interrupted segment consists of n subsegments, each of which is itself uninterrupted, the latter will be called fragments, to distinguish them from general subsegments, which may be themselves interrupted. A fragment, being itself a segment, may also on occasion be a zero segment. We shall use, as a general form for denoting a segment, ...F1...F2..., where F1 and F2 are fragments of an interrupted segment. Whenever such a form is used, it must be understood that though two fragments are shown, more than two fragments may in fact be present.
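The segment notions just defined can be made concrete in a short sketch. The representation (a segment as the set of positions of its signs) and the use of whole words as signs are simplifications for illustration.

```python
# Sketch of segments and fragments over a toy utterance; for
# simplicity the "signs" here are whole words rather than letters.
utterance = tuple("you and not me".split())

def segment(utterance, positions):
    """The signs at the given positions, in utterance order."""
    return tuple(utterance[i] for i in sorted(positions))

def uninterrupted(positions):
    """True when the positions are consecutive (or empty: zero segment)."""
    ps = sorted(positions)
    return ps == list(range(ps[0], ps[-1] + 1)) if ps else True

def fragments(positions):
    """Split an interrupted segment into maximal uninterrupted runs."""
    runs, run = [], []
    for p in sorted(positions):
        if run and p != run[-1] + 1:
            runs.append(run)
            run = []
        run.append(p)
    if run:
        runs.append(run)
    return runs

print(segment(utterance, {1, 2}), uninterrupted({1, 2}))  # ('and', 'not') True
print(segment(utterance, {0, 3}), uninterrupted({0, 3}))  # ('you', 'me') False
print(fragments({0, 3}))  # [[0], [3]]: two one-sign fragments
```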
A segment ...F1...F2... is said to be replaceable by another segment ...F'1...F'2... if the following two postulates are fulfilled:
(a) for any X, Y, Z such that XF1YF2Z is an utterance in the language, XF'1YF'2Z is also an utterance in the language;
(b) for any ...G1...G2... in the language, of which ...F1...F2... is a subsegment, there is at least one utterance of the form XF1YF2Z in the language which does not contain ...G1...G2...

The second condition is required to avoid saying that one segment is replaceable by another if they are only so when they are parts of larger ones.

A closed language, as defined above, is a rather unsatisfactory model of actual speech. At the very least it needs to contain an enormous amount of material if it is to provide examples of all possible constructions. Furthermore, in a strict sense, the set of "possible constructions" in any actual language is an open one, in that any speaker may coin a new construction without thereby ceasing to speak the given language. We therefore need to pass over from consideration of closed languages to take account of open ones.

An open language is, like a closed language, considered as a set of utterances. But whereas in a closed language these utterances form an ostensibly given corpus, which can be examined to determine whether a given sequence is or is not an utterance, in an open language the criterion is whether or not a given sequence is accepted by a competent speaker as a correct utterance in the given language. The definition of replaceability given above needs modification in three particulars, in order to adapt it for use in an open language. We have to re-define the term "segment"; we have to consider carefully what is implied by a sequence being an utterance; and we have to re-phrase the definition of replaceability.

In effect, we are trying to substitute, for the closed corpus of a closed language, the behavioural response of a competent speaker, to define the compass of an open language. This being so, we cannot simply regard a segment as a sequence of signs, unless we admit as "signs" not only written marks and spoken sounds, but any sensory clue available to the competent speaker during the act of communication. We therefore regard all such clues as imaginary diacritics which could be added to the manifest signs composing a given utterance or segment. In other words, we allow our competent speaker to annotate any text before we subject it to further analysis.

The scope of such annotations may be illustrated by the example of the English phrases 'you and not me' and 'shorthand notes'. Both, as they stand, are sequences of written letters, both can be parts of utterances in English, and both contain the uninterrupted sequence 'and not'. By the definition above, this sequence is certainly a segment, of which both phrases contain exponents. We rely on the annotations or diacritics which a competent speaker might add to recognise that the two letter-sequences are effectively different. This might, for example, be done by underlining the first and last letters of every word, in which case the two occurrences of 'and not' would carry different underlinings: in 'you and not me' the a of 'and' and the t of 'not' are word-boundary letters and would be underlined, while in 'shorthand notes' they are not.

The particular device adopted does not matter, provided (a) it can be non-contentiously performed, and (b) it leaves the annotated text capable of complete analysis on the assumption that, if a segment S is replaceable by a segment T, S and T are sufficiently identifiable by the sequences of signs (including the diacritics) which they contain.
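As a toy version of such a device (the marking scheme is invented: word-boundary letters are capitalised rather than underlined), consider:

```python
# Sketch of an annotation device: capitalise the first and last
# letter of every word, as a stand-in for underlining them.
def annotate(phrase):
    def mark(word):
        if len(word) == 1:
            return word.upper()
        return word[0].upper() + word[1:-1] + word[-1].upper()
    return " ".join(mark(w) for w in phrase.split())

print(annotate("you and not me"))     # YoU AnD NoT ME
print(annotate("shorthand notes"))    # ShorthanD NoteS
```

The letter-sequence 'and not' now reads 'AnD NoT' in the first phrase but 'anD Not' in the second, so the two segments are formally distinguishable, as required.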
The particular device adopted does not matter, provided (a) it can be non-contentiously performed, and (b) it leaves the annotated text capable of complete analysis on the assumption that, if a segment S is replaceable by a segment T, S and T are sufficiently identifiable by the sequences of signs (including the diacritics) which they contain.

If this principle is applied to actual texts in actual languages, it is possible to find cases where it breaks down. These are cases of irreducible ambiguity. An example is the sentence 'Iceland fish catch drops': it is more than a competent speaker can do to annotate this text so as to distinguish non-contentiously all the meaningful segments in it. For it can bear two distinct meanings, which only a fuller context could disengage: it concerns either animal behaviour or the fishing industry, according as 'catch' or 'drops' is taken as the verb. It is therefore necessary to prescind such cases of irreducible ambiguity in the rigorous analysis of open languages.

Whereas in a closed language every sequence of signs either is or is not an utterance, there are four cases which may have to be considered in regard to open languages. These are exemplified by the following phrases:

1. 'It's a nice morning'; an utterance in English.
2. ...; not an utterance: the correct form is 'I'm hungry'.
3. 'Lake three stand'; not an utterance: no correction occurs.
4. 'verns hollip'; undecidable.

There is no novelty about either (1) or (3). The new cases not paralleled in a closed language are (2) and (4). The last is in fact peculiarly tiresome, in that there are in real life speech situations in which this phrase could be accepted as an utterance, and meaning could be attached to the words 'vern' and 'hollip'. But in the context of any mechanical language processing we have to regard it as not an utterance, because it must remain unrecognisable until the words it contains get into the dictionary. The case (2) can be more constructively treated. We shall formulate the following definition:

Def. 1. A sequence S in an open language L which differs from some utterance S' in L, if at all, in such a way that in the given context a competent speaker of L will unambiguously identify S with S', is said to be corrigible to S', which is called its correction. Two different sequences both corrigible to the same utterance are said to be not distinct.

This definition has been so formulated that it applies to the cases (1) and (2) of the above list, but not to (3) and (4). Its effect is that in open languages the class of corrigible sequences will take the place occupied by utterances in closed languages.

The definition given for replaceability in a closed language was based on two postulates. The first of these, when its terms are interpreted in the light of what has been said above about segments and utterances, can stand. The second, aimed to exclude recognition of replacement between segments which are "really" parts of larger segments, between which the replaceability relation is more usefully posited, requires amendment. For in an open language it is no longer sufficient, in order to exclude this situation, to find one instance to the contrary, or even a closed set of instances. Thus, in English, we could say that 'ga' is replaceable by 'ra', adducing instances in which 'gain' is replaceable by 'rain'; this is not any the less silly because we can add a few other instances of the same replacement, such as 'gate' being replaceable by 'rate'. Only if there is an open set of such cases can we count the replaceability as genuine.

We are therefore led to the following revised definition:

Def. 2. A segment ...F1...F2... in an open language L is replaceable by another segment ...G'1...G'2... if and only if:

(a) for any X, Y, Z in L such that XF1YF2Z is an utterance in L, XG'1YG'2Z is a corrigible sequence in L;
(b) for any two distinct utterances XF1YF2Z, the corresponding XG'1YG'2Z are also distinct; and
(c) for any segment ...G1...G2... containing ...F1...F2... as a proper subsegment, there is an open set of utterances XF1YF2Z not containing ...G1...G2...
3. TOTAL PARADIGMS

3.1 Equipollence

As defined above, replaceability is an asymmetrical relation: it can happen that a segment S is replaceable by another segment S' while S' is not replaceable by S. For instance, we can readily show that in English 'them' is replaceable by 'gypsies', but we cannot replace 'gypsies' by 'them'. For if we make this replacement in the utterance 'the gypsies came', we get 'the them came'. If this is accepted as corrigible, its correction can only be 'they came'. But 'gypsies came' is also an utterance, distinct from 'the gypsies came'; if we make the proposed replacement here, we get 'them came', which is corrigible, but again corrects to 'they came'. It is not therefore distinct from 'the them came' according to Def. 1, and the replacement fails to satisfy postulate (b) of Def. 2.

Nevertheless, it is easy to define a symmetrical relation, based on the replacement idea, as follows:

Def. 3. Two segments S, T in L are said to be equipollent if S is replaceable by T, and T by S, in L.

This relationship of equipollence is analogous, at the syntactic level, to that of "replacement" as defined by Jones (10) in regard to semantics. Like the latter, equipollence is a similarity relation; for it is reflexive (every segment is equipollent with itself), symmetrical (by definition), and transitive (for if S is replaceable by T, and T by U, then S is replaceable by U; and conversely). It therefore divides the class of segments in a given language into classes whose members share common syntactical properties, just as Jones' "replacement" divides the class of lexemes into classes whose members share a common "meaning".
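How equipollence partitions segments can be sketched in a few lines, assuming a directed replaceability relation given in advance; the pairs below are invented, echoing the 'them'/'gypsies' example, and transitivity lets a single witness per class suffice.

    REPLACEABLE = {                 # (s, t): s is replaceable by t
        ("them", "gypsies"), ("them", "dogs"),
        ("gypsies", "dogs"), ("dogs", "gypsies"),
        ("came", "slept"), ("slept", "came"),
    }

    def equipollent(s, t):
        return (s, t) in REPLACEABLE and (t, s) in REPLACEABLE

    items = {x for pair in REPLACEABLE for x in pair}
    classes = []
    for x in sorted(items):
        for cls in classes:
            if equipollent(x, cls[0]):      # transitivity: one witness suffices
                cls.append(x)
                break
        else:
            classes.append([x])
    print(classes)   # [['came', 'slept'], ['dogs', 'gypsies'], ['them']]

Here 'them' falls into a class by itself, precisely because its replaceability by 'gypsies' runs one way only.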
However, not all sequences in a given language are either utterances or segments of utterances; likewise, not all segments are recognisable, either by a "competent speaker" or by a trained linguist, as meaningful units of speech. In order to be able to isolate those segments which can be profitably used as units in the syntactic analysis of a text, we need to define a certain subclass of the domain of equipollence which shall contain only those segments which are useful for this purpose.

Def. 3. A segment S, interrupted or not, is said to be a substituent in a language L if there is at least one segment T in L, distinct from S, such that:

(a) T is equipollent with S; and
(b) there is no sequence U of segments U1, U2, ... such that (b1) for every Ui there is at least one segment Vi in L distinct from and equipollent with Ui, and (b2) the sequence U is corrigible to T.

The effect of this definition is to recognise as a substituent only segments which are equipollent with simple substituents, i.e. those which cannot be further divided into substituents. Roughly speaking, this allows any meaningful unit, up to a sentence, to be a substituent, since sentences are in general equipollent with single units like 'yes' or 'no', and in all languages there exist sentences of so formal and stereotyped a character as to be admissible as simple lexemes. For instance, we do not get a true picture of the meaning of 'How do you do?' if we analyse it into its component parts; such a sentence, while certainly equipollent with genuine sentences like 'How is your stomach?', is a perfectly good candidate for inclusion as a whole in a dictionary. It is convenient for some purposes, also, to recognise any sequence of two or more sentences as equipollent with a single sentence; if this is done, the restriction (b) in Def. 3 is hardly needed. However, we aim eventually to consider the syntactic relations between the sentences in a paragraph or conversation, and for this purpose we must make a fairly clear distinction between "sentences" and higher units, which Def. 3 succeeds in doing.

The reason for introducing corrigibility into the postulate (b2) is to allow for words like the French 'au', which, while apparently simple substituents (in that they cannot be analysed as they stand into smaller substituents), are inexpedient to admit as such, because in reality they are compounded of units having separate and definable functions in the sentence. But of course there exists the sequence 'à le' which, though not a segment in French, is certainly corrigible to 'au', and which is a sequence of segments each equipollent with at least one other ('à' with 'dans'; 'le' with 'un'). The reason why we do not want to have to treat 'au' as a single substituent is that in an expression such as 'au fond' we would like to recognise as substituents not 'au' and 'fond' but the more logical pair 'à' and 'le fond'. In bracket notation, we would wish to analyse 'au' into 'à (le ...)'.

The following supplementary definition therefore suggests itself for use in connection with substituents:

Def. 4. A substituent S in L is said to be compound if it is the correction* of a sequence U of segments U1, U2, ... such that (a) each Ui is a substituent in L, and (b) the sequence left on replacing any one Ui by the zero segment is also a substituent in L.

* Note here that, by Def. 1, every segment is its own correction.

In such a case, the segments U1, U2, ... are the components of S.
We have already mentioned that equipollence is a similarity relation dividing any subclass of its domain, and in particular the class of substituents, into equivalence classes. Members of any one of these classes would be said, by linguists, to have the same syntactic function. However, the following definition proves more amenable to our purposes:

Def. 5. The total paradigm of a substituent S in a language L is the set of all substituents in L which contain either S, or another substituent equipollent with S, as subsegments.

It is part of the method of this work to replace the unsatisfactory unit of the "word", already abandoned by most linguistic schools, by the carefully defined concept of "substituent". It is this replacement which justifies the use of the term "paradigm" in this sense. It will shortly appear that those members of the total paradigm of a "stem" (which, in an inflected language, is in general a simple substituent) which are "words" in the conventional sense form a set almost identical with the "paradigm" in the traditional linguists' sense.

It is evident that if two substituents S, T are equipollent, then according to Def. 5 they must belong to the same total paradigm. Moreover, if T is not equipollent with S, then either (a) T contains S as a proper sub-segment, in which case S, which is contained in the paradigm of S, is not in the paradigm of T; or (b) S contains T, with the complementary effect; or (c) neither S nor T contains the other, in which case each paradigm contains substituents not in the other. Therefore, if S, T are not equipollent, they belong to different total paradigms. Thus the total paradigms defined in Def. 5 are indeed equivalence classes under equipollence.

The relation between total paradigms and the syntactic functions of the linguist is now clear. If any two substituents belong to the same paradigm, then they share a common function. If they belong to different paradigms, they have no common function, unless their paradigms have a nontrivial union, in which case the latter provides them with a common function. We therefore postulate a one-to-one correspondence between syntactic functions and total paradigms; the first are the properties which characterize the second as classes. However, in formal statements I shall prefer the term paradigm to function, on the grounds that the latter word has too many other uses not entirely excluded by the context. I shall normally drop the epithet "total" before "paradigm" where no confusion is likely to follow.
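Def. 5 translates directly into a program. The following is a minimal sketch, assuming uninterrupted word-sequence substituents, a subsequence test for containment, and an invented toy equipollence relation.

    def contains(u, s):
        """True if segment s occurs uninterrupted inside substituent u."""
        n = len(s)
        return any(u[i:i + n] == s for i in range(len(u) - n + 1))

    def total_paradigm(s, substituents, equipollent):
        """Def. 5: all substituents containing s, or something equipollent with s."""
        alts = {t for t in substituents if equipollent(s, t)} | {s}
        return {u for u in substituents if any(contains(u, a) for a in alts)}

    SUBSTITUENTS = {("cat",), ("dog",), ("the", "cat"), ("the", "dog", "sleeps")}
    equip = lambda s, t: {s, t} <= {("cat",), ("dog",)}     # toy relation
    print(sorted(total_paradigm(("cat",), SUBSTITUENTS, equip)))
    # [('cat',), ('dog',), ('the', 'cat'), ('the', 'dog', 'sleeps')]

As the argument above requires, two equipollent substituents receive the same set under this construction.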
Thus, while we have this simple relationship between our total paradigms and the relation of equipollence, their structure under the relation of replaceability is somewhat more complex. It may be reduced to the following five lemmas:

1. If a substituent A replaces both B and C, where B, C are not equipollent with each other, then the paradigm of A is the set union of those of B and C.

2. If a substituent A consists of two or more segments B, C, ..., each a substituent, the paradigm of A is included in each of those of B, C, ...

3. If there is in a language L a substituent Z such that any other substituent Z' containing Z is equipollent with Z, then the paradigm of Z is contained in that of every substituent.

4. If there is in L a segment which can replace every other substituent in L, then this segment is a substituent in L, and has a paradigm including those of all other substituents in L.

5. The paradigm of any substituent is unique (provided we take due account of the procedures mentioned in Sec. 2.1).

The substituent Z mentioned in Lemma 3 is exemplified by a complete sentence not forming part of any other sentence and associated only by concatenation with other segments in an utterance. Formally we may state the following:

Def. 6. A substituent in a language L is a free sentence of L if it is a component of an utterance equipollent with the whole utterance.

The segment mentioned in Lemma 4 is exemplified by a sign of omission such as '...', or a word such as 'thingummy' used to replace any word which a speaker will not trouble accurately to recall.

The above five lemmas are sufficient to prove that, if we assume the existence of the substituents postulated in Lemmas 3 and 4, the system of all the total paradigms is a lattice under the set-inclusion relation. For if S, T are any two non-equipollent substituents, their respective paradigms have, potentially, a join defined by Lemma 1 and a meet defined by Lemma 2, while the bounds of the lattice are provided by Lemmas 3 and 4; these points satisfy the definition of a lattice (see Birkhoff (1)).
4. CONSTRUCTION OF THE GENERAL SYNTAX LATTICE

Having established that the system of paradigms of substituents in any language must form a lattice, we have now to show what lattice is in fact formed. This could be done empirically, by applying the definitions given above to a sufficient body of texts in a given language; even with the best mechanical aids, this would be a virtually impossible task, even for one language. Or it could be done intuitively; any intelligent person can learn how to do this for a language he knows well enough, but the results carry conviction only to one who has himself gone through the procedure. I shall therefore construct the syntax lattice step by step, starting from the free clause and introducing progressively finer syntactic contrasts, till it is sufficiently developed to serve as a model of actual language; I shall then show how it can be used in the design of a syntax analysis program.

We have already seen that, in a lattice representing the total paradigms of substituents, if a point A 'includes' (or 'precedes') a point B, a substituent whose paradigm is A can form part of a substituent whose paradigm is B. We can assume that the simplest imaginable 'language' has the capacity for some kind of syntactic contrast, and has sentences made up of smaller units. The syntax lattice for such a rudimentary language would therefore be as shown in figure 1. Even in this simple schema, the following lattice properties are illustrated:

(i) the side-to-side symmetry of the lattice, i.e. the complementarity of O and S. This distinction we interpret as the subject-predicate dichotomy.

(ii) the top-to-bottom asymmetry of the lattice, i.e. the partial-ordering relation (inclusion of paradigms defined as sets). At this stage this has only a trivial interpretation, but later we shall correlate it with the governor-dependent distinction.

(iii) the two binary operations definable in every lattice, namely the join and meet of any two points, denoted by ∨ and ∧ respectively. As we have already seen, we are committed to interpret a∨b as the syntactic function of a substituent capable of being used either like those of function a or like those of function b; and a∧b as the function of a substituent having components of functions a and b.

The lattice just considered, with its two principal points O, S, gives us our basic schema. But it is clear that it is far too primitive as it stands for its use to be extended from our imaginary language to any real one. To obtain a more adequate system, the basic schema is therefore enlarged by the addition of points representing new paradigms, which include, in their capacity as sets, the points S and O respectively. This gives us the lattice shown in figure 2.
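The lattice of figure 2 is small enough to compute with directly. In the Python sketch below, the cover relations are an assumption reconstructed from the text (Z at the bottom; I the join of S and O; IA the join of SA and OA); meet and join are read off the inclusion relation, anticipating the worked examples that follow.

    COVERS = {                   # point -> points immediately above it
        "Z":  {"S", "O"},
        "S":  {"I", "SA"},
        "O":  {"I", "OA"},
        "I":  {"IA"},
        "SA": {"IA"},
        "OA": {"IA"},
        "IA": set(),
    }

    def above(a):
        """All points b with a <= b (reflexive-transitive closure of COVERS)."""
        seen, stack = {a}, [a]
        while stack:
            for b in COVERS[stack.pop()]:
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    def join(a, b):              # least upper bound
        ubs = above(a) & above(b)
        return next(c for c in ubs if all(d in above(c) for d in ubs))

    def meet(a, b):              # greatest lower bound
        lbs = {c for c in COVERS if a in above(c) and b in above(c)}
        return next(c for c in lbs if all(c in above(d) for d in lbs))

    print(join("S", "O"), join("SA", "OA"))   # I IA
    print(meet("SA", "S"))                    # S: an endocentric group
    print(meet("SA", "OA"))                   # Z: the cross-lattice case discussed below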
It is obvious that if we are to retain the substantive-operative distinction represented by the two sides of the lattice, we cannot extend the system in any other way, for the more refined classification we require must represent a sophistication of this basic division. The two new side points SA and OA represent substantive adjuncts and operative adjuncts respectively. For if 'book' has the paradigm S and 'new book' is equipollent with 'book', the paradigm of 'new' includes both 'book' and 'new book', whereas that of 'book' does not include 'new'; such examples, in the light of Def. 5, show that SA and OA stand, very roughly, for adjectives and adverbs. Their join gives us the new indeterminate adjunct IA, and this also includes the principal indeterminate I. The lattice now has seven points.

This extended lattice is still, however, inadequate, and we need to add, above SA, OA, IA, a further series of paradigms SB, OB, IB to represent subadjuncts, that is, words whose use is restricted to qualifying other adjuncts. This gives us a ten-point lattice; but this still fails to account for certain syntactically important types of words, such as prepositions, conjunctions, etc. Prepositions could, without too much arbitrariness, be classified as postverbs and assigned to the point OB, but conjunctions (connectives, as the logician understands them) are still unaccounted for. Since the join operation is the lattice equivalent of the logical and/or connective, we can expect to represent conjunctions by an additional point at the very top of the lattice, IC. Indeed, there are conjunctions in some languages which can be used to connect words of any syntactic function, and whose paradigm therefore, by Def. 5, includes all other paradigms. But this does not go for all "conjunctions". There are those which specifically connect clauses rather than separate words; these have a paradigm ZA immediately including Z. The lattice now has twelve points, as shown in figure 3.

Figure 3. Complete Primary Lattice

Preliminary empirical investigation has shown that, to a surprising extent, this system contains the hard core of the general syntactic classification we need. It is therefore called the primary syntax lattice; that point of the lattice representing the paradigm (i.e. syntactic function) of a substituent is called the lattice position indicator or LPI of the substituent.

Having set up the classification represented by the primary lattice, we can now start to use it to find the function of a compound substituent. I here introduce the first algorithm derived from the theory, which I call the "meet algorithm". This is simply Lemma 2, in the form: the paradigm of a compound substituent is the meet of those of its components.

Let us see how this algorithm works out in the lattice of figure 3. The meet of the points SA and S, for instance, is S. That is to say, a group consisting of a substantive and a substantive adjunct has the function of a substantive ('a man with long legs': equipollent with 'a man'); similarly, the meet of O and OA is O ('you have finished it?' 'I have!'). For either side of the lattice, therefore, the algorithm works in a satisfactory way and gives linguistically acceptable results; clearly, if we put together units with a common character, we should expect the group to have the same character.

The meet algorithm fails, however, when we consider the meet of any two points on opposite sides of the lattice. For such a meet can only be the point Z, whereas in fact a great variety of syntactic functions can be discharged by substituents having components between which the substantive-operative contrast is in evidence. Given this complementarity, we can only describe Z, very weakly, as the property of having a function different from those of its components. This is serious, since we began by interpreting Z much more strongly, as the paradigm of a free clause.

In order to deal with this situation, we make use of the distinction between an endocentric substituent, which is equipollent with one or more of its components, and an exocentric substituent, which is equipollent with none of them. What we have found is that the meet algorithm, applied to the simple primary lattice of figure 3, works for endocentric but fails for exocentric substituents. We must now develop the lattice schema to take account of exocentric substituents; in fact, the strength of the theory largely rests in its capacity to give an adequate and precise account of the nature of exocentric substituents.

The key to this development is to make use of the lattice relation of duality. Just as all the other points in the primary lattice represent possible parts of what we at first interpreted as a clause, represented by the lower bound Z, so, now that we have to weaken the interpretation of Z to that of an exocentric substituent, we need lattice points to represent all those substituents of which an exocentric group Z is a possible part. Of this new set of points, by Lemma 2, Z is thus the upper bound. And since we may expect a substituent of any function to have components replaceable by exocentric groups, the new set of points must contain all those, besides Z, which we have already allowed for in the primary lattice. What we have to do, therefore, is to add to the primary lattice its own dual (consisting of the same points with the converse inclusion-relation between them), the point Z being in common between them. This system, shown in figure 4, is still a lattice; it is divided into two mutually dual sublattices: the primary lattice which, as we have seen, is concerned with endocentric substituents, and the secondary lattice, as we may now call it, which is concerned with exocentric substituents.

This lattice, however, is still not the one we want; the meet algorithm applied to points in the primary sublattice still gives Z as the result for all exocentric groups. To avoid this, we have to accept the existence of further distinctions. There is not, for instance, just one kind of operative, O: there must be different kinds, each determining a different function for the exocentric groups which it can enter into. Thus, to the different kinds of exocentric groups represented by the dual secondary lattice hanging from Z, there must be added a parallel set of distinctions hanging from every other point of the primary lattice, representing the different kinds of operatives, of operative adjuncts, and so on.

The resulting system, when fully developed, as will be clear to those acquainted with lattice theory, will be the direct product of the primary lattice of figure 3 with its dual. This lattice is too large for convenient setting out here. But we can take advantage of the fact that in actual practice we can do with a less complete classification of functions for compound substituents than is required for their irreducible components.
Thus, so long as we are only interested in compound substituents, we can replace the 12-point primary lattice by the 5-point lattice shown in figure 5.

Figure 5. Simplified Self-dual-product Lattice

This, by the way, is the smallest possible self-dual-product lattice.

We can now see how the meet algorithm works out in this fuller version of the syntax lattice. For endocentric groups there is no problem; for by definition one of the points concerned must include the other, which, as before, is their meet and defines the function of the whole group. But for exocentric groups the situation is more complicated.

Some exocentric substituents will have a meet in the lower ideal of Z.Z, that is, in the lowest exponent of the secondary lattice. Now we have already indicated that to these points we attach the same syntactic functions as are attached to the corresponding points in the primary lattice. However, if we make no modification in the meet algorithm, we shall be faced with the difficulty that if any such substituent is in turn included as part of a yet larger one, the function of the latter can be represented only by a point yet lower in the lattice; in fact, if an exocentric group forms part of another exocentric group, the latter will be assigned the function Z.IC, which is that of a conjunction and is unlikely to be correct. To cope with this situation we make use once again of the top-to-bottom contrast in the lattice. What we need is to get from a point in the lower ideal of Z.Z to a point in its upper ideal. Because of the duality relation between these two sublattices, their respective points are already in correspondence; in the notation used in figure 6, those in the upper ideal have letter-pairs ending in Z, and those in the lower ideal have letter-pairs beginning with Z. Therefore we make the rule that, whenever the meet algorithm leads us to a point in the lower ideal of Z.Z, we replace this, as the result of the algorithm, by the corresponding point in the upper ideal of Z.Z. This rule I call the polar algorithm.

On this basis, exocentric groups can be divided into three classes, all of which share the property that their functions differ from those of any of their components. In the first class, the function of the whole group is Z.Z; in the second, it is a point in the lower ideal of Z.Z; in the third, it is any other point in the lattice. Linguistically, these three classes represent different degrees of "completeness": the first are complete clauses, the second may be called subordinate clauses, while the third class contains groups too incomplete to be called clauses at all.
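Continuing the Python sketch above (and reusing its COVERS, meet and join), the meet algorithm on the product lattice and the polar rule can be set down as follows; a point is a pair (primary, secondary), and the component functions in the demonstration are invented for illustration.

    def product_meet(a, b):
        """Componentwise meet; in the dual factor, meet is join of the original."""
        (p1, q1), (p2, q2) = a, b
        return (meet(p1, p2), join(q1, q2))

    def polar(pt):
        """Lower ideal of Z.Z (pairs beginning with Z) maps to the upper ideal."""
        p, q = pt
        return (q, p) if p == "Z" else pt

    def group_function(a, b):
        return polar(product_meet(a, b))

    print(group_function(("SA", "Z"), ("S", "Z")))    # ('S', 'Z'): endocentric
    print(group_function(("S", "Z"), ("O", "Z")))     # ('Z', 'Z'): a free clause
    print(group_function(("S", "Z"), ("O", "SA")))    # ('SA', 'Z'): a subordinate clause

The last case shows the polar rule converting the meet ('Z', 'SA') into ('SA', 'Z'), a clause serving as a substantive adjunct; the stop point ('Z', 'Z') is its own polar image.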
In the system, represented in a simplified form in figure 6, we can interpret certain of the lattice relations in more detail than has been shown above. One additional descriptive contrast which the theory thus gives us is that between the governor and the dependent of any compound substituent. One finds, in any such group, that there is one component which 'colours' or 'gives tone to' the whole group, the others having a more passive role. Thus, in any substituent with three or more components, one will stand out from the others; this we call the governor, and the other components are the dependents of the substituent. (Note: dependents are "of" substituents, not "of" the associated governors.) If all available examples of a given substituent type have only two components, we must identify the governor from its lattice properties. Now in terms of the lattice, this works out differently in the three cases of exocentric groups, endocentric groups, and conjunct groups.

In a free clause, it is clearly the verb-group which gives its colour to the whole; thus we make O governor over S. The same rule will serve for all exocentric groups in which the primary functions of the components show this difference; where they do not (as for instance in a group whose components are SZ and AA), we may go by their secondary functions (in the case mentioned, as these are respectively Z and A, this means treating it as if it were an endocentric group). In the complete 144-point lattice, difficulty may also arise from components with I functions: in any particular context, these may behave in their S or in their O capacity, and this must be ascertained first.

In endocentric groups, it is clearly the component which has the same function as the whole which colours the group; thus, the component represented by the lowest point on the lattice is the governor. In this case, then, the dependent-governor relation is straightforwardly the inclusion relation in the lattice. Clearly, too, while we can have such a group with several upper points, the meet algorithm forbids the existence of an endocentric substituent with more than one lower bound, namely, the governor.

In conjunct groups, the dominant component is the conjunction itself; though, standing as it does at the top of the lattice, it does not affect the group's function. Thus, we adopt the convention that, when any point which is a join in the lattice is interpreted as the join-relation itself, this shall mark the governor of the group.

A yet more important place in traditional linguistic description belongs to the subject-predicate contrast than to the governor-dependent contrast, and this too can be derived very simply from the present theory. We have seen that, by means of the polar algorithm, a scale of increasing completeness of exocentric groups can be defined. Incomplete clauses have their meets outside the lower ideal of the point Z.Z; complete but subordinate clauses have meets within this ideal, which the polar algorithm transposes into the upper ideal; a free clause has its meet actually at Z.Z. Substituents of this last type, and these alone, are subject-predicate groups. This theory of syntax therefore provides a representation for the subject-predicate pattern in language: it is that of an exocentric substituent whose meet is at the point Z.Z, one of the two non-bounding vertices, or "central" points, of the lattice.
It can also be thought of as the result of applying the polar algorithm as a stop rule: when in the build-up of a sentence structure we come to this point we can stop, but not before.

This interpretation does not at first sight seem to have much to do with the logicians' notion of subject and predicate. These have been thought of either grammatically (as "sentence with a main verb") or, following Russell, formally (as a formula of the type xP, subject to extension either by the addition of further terms y, z, ... to the x, or by adding quantifying restrictions to the x). As to the grammatical interpretation, our theory explains rather the notion of mainness than of verbness; it shows us how to build up the distinction between "main" and "subordinate" verbs (all verbs being initially merely operatives). As to the logical interpretation, the theory does not define the relation of predication (though it can cope with the distinction between monadic, dyadic, etc., relations), but it does explain why, however predicative logic is developed, the symbol P remains unique and unchanged; for this point, our Z.Z, is one of the four vertices of the lattice which can be transformed, by inversion of factors in the lattice, into the upper or the lower bound, just as in logic P is the point from which, no matter how far the x-sequence is extended, the whole system of relations always hangs. It is in this sense that it is possible to say that, starting from acknowledged linguistic notions, which we define more exactly than heretofore, we can arrive at an account of this important logical form, the subject-predicate sentence.
5. CODING FOR SYNTACTIC ANALYSIS

We must now show how the coding used in the empirical procedures currently being tested on the Cambridge computer is derived from the theory. The actual program is fully discussed elsewhere (11), but it will be illustrated here by a simplified example.

The full product lattice has 144 elements, and in principle substituents with any pair of these functions may form a group, or compound substituent. As the model does not take the order of the components of a group into account, we thus have ½(144²), i.e. 10368, possible different groups. The function of a group is, however, determined by the meet algorithm, and the group formed by any one of the 10368 possible pairs of substituents must therefore be defined by a point in the lattice. There are thus only 144 different meet-points. 12 of these points, moreover, are in the lower ideal of Z.Z, and are not accepted as they stand but converted by the polar algorithm. This leaves us, therefore, with 132 result-points representing different kinds of group; these will be called substituent types.
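The counts just quoted are easy to verify mechanically. A small check, assuming the twelve primary point names recovered from the text:

    PRIMARY = ["Z", "ZA", "S", "O", "I", "SA", "OA", "IA", "SB", "OB", "IB", "IC"]
    points = [(p, q) for p in PRIMARY for q in PRIMARY]
    print(len(points))                                  # 144 lattice points
    print(len(points) ** 2 // 2)                        # 10368 unordered pairs
    lower_ideal = [pt for pt in points if pt[0] == "Z"]
    print(len(lower_ideal))                             # 12: converted by the polar rule
    # (Z, Z), the stop point, is counted with the lower ideal here, as in the text
    print(len(points) - len(lower_ideal))               # 132 result-points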
By using this set of substituent types we can extend our classification system. In setting up the syntax lattice we treated it as a schema for classifying substituents according to their functions. The function of a substituent is naturally related to its behaviour in groups of substituents, but we did not initially attempt to classify substituents according to the kinds of group in which they can figure. It will now be clear that we can give more information about a substituent if we take what we may describe as its grouping possibilities into account. These are derived from the lattice in a straightforward way: for a substituent with a particular function we list the set of result-points which can be reached when operating on a pair of substituents, one of which is the substituent in question. We thus give, in terms of their functions, the kinds of group in which the substituent can participate. This information is represented by a positive mark in the appropriate positions in a 132-place entry. We will call the entry as a whole the substituent's participation class. The record is further refined by entering, for each kind of group in which the substituent can figure, whether it functions as governor, or dependent, or either.

For practical purposes, however, the size of participation class entries as just described is not very satisfactory: it is too large for convenient machine handling, or for teaching to dictionary makers. It can, however, be reduced as follows. In terms of the theory, the set of 132 result-points can be naturally divided into those which lie in the principal exponent of the primary lattice, that is, those with secondary function Z, and those which fall elsewhere. (Except for Z.Z, points with primary function Z are excluded by the polar algorithm.) It will be clear from the lattice that there are 12 points of the first kind and 120 of the second. This distinction represents the extent to which further grouping is required before the stop-point defined by Z.Z can be reached. Compound substituents with secondary function Z can be 'direct' components of full clauses; those with neither function Z require at least one intermediate grouping, with the application of the polar algorithm, before they can be grouped to give a full clause. It can be argued that the information about a substituent represented by the fact that it can be a member of a group of the second kind is less useful than that representing its membership of a group of the first kind, given that from a group of the second kind one of the first kind will eventually be reached. If we accept this argument, we can then replace the 132-place participation class by one with 12 places. For a particular substituent this replacement will give the result-points in the principal exponent of the primary lattice which will be reached by operations on pairs of substituents of which the substituent in question is a member.

Examination of natural languages shows that the 12-place participation class is an oversimplification. Thus in English there are two kinds of compound substituent which would both be given the function OA.Z, namely adverbial groups (like "almost exactly") and adverbial clauses (like "considering the circumstances"). These clearly represent different constructions and, if they were identified in classifying the behaviour of a word, would lead to incorrect grouping. We can, however, deal with this difficulty by taking into account distinctions which the theory already contains. For instance, we can at least make use of the distinction between substituents the meet of whose component functions falls in the lower ideal of Z.Z, and those whose meet falls elsewhere in the lattice; that is, between complete exocentric groups (clauses) and the rest. This division would take us from 12 functions to 24 substituent types, straightforwardly derived from the theory, and therefore of interlingual validity (though we must not expect that all of them will be represented in any particular language).
In particular, we can construct model and restricted languages requiring many fewer distinctions than this. Most natural languages appear to need between 10 and 20 substituent types for their adequate analysis.

We have so far discussed grouping, or bracketing, in terms of lattice points and lattice algorithms. We must now show how this works out for actual texts. Given that each substituent type defines a kind of group, it is clear that the fact that a set of substituents can be bracketed will be represented in their respective participation classes by a positive entry for the same substituent type. This in itself, however, is not enough; the items to be bracketed must also be contiguous, and must satisfy the governor-dependent relation. The latter means that we can only bracket a group of substituents if one of them can be the governor, and the rest dependents, in the kind of group concerned. The governor-dependent relation thus acts as a restriction on bracketing. Two points should be noted: (1) as many items as possible can be combined at the same time to form a group; (2) the substituent types are arranged in a priority order from left to right: that is, we look for groups of kind 1 first. The order loosely corresponds to the lattice structure, in that 'weak' groups are found first and full clauses last, but it is essentially a practical device for reducing the amount of effort spent in trying to find brackets: as bracketing is carried out on ever larger units, there is clearly some point in looking for the smallest groups of most closely associated substituents first.

The way in which the information contained in participation classes is used for bracketing can be illustrated as follows:

        A   B
    x   +   -
    y   +   +

This means that x can belong to a group of kind A, but not of kind B, and that y can belong to groups of kind A and of kind B. If x and y occur in contiguous positions in a text and can therefore be bracketed, the resulting group must have the function A.

We will now consider a more elaborate case, with governor-dependent information given:

        A   B   C
    x   G   -   D
    y   D   D   -
    z   -   -   D

When the governor-dependent rule is satisfied, only x and y can be bracketed, to give a group with function A. x and z are both dependents in groups of type C and cannot therefore be combined.
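This use of participation classes can itself be sketched as a program. The following implements the contiguity and governor-dependent checks on the x, y, z entries above; the priorities, like the entries, follow the toy table, and the fuller machinery (habitat, concord, and Rules 2 and 3 below) is left out.

    TYPES = ["A", "B", "C"]                  # priority order, left to right

    def bracket(items):
        """items: contiguous (name, {type: 'G' or 'D'}) pairs, read backwards.
        Return (type, names) for the first column whose maximal backward run
        has exactly one governor and otherwise dependents."""
        for t in TYPES:
            run = []
            for name, entry in reversed(items):
                mark = entry.get(t)
                if mark is None:
                    break
                run.append((name, mark))
            if len(run) >= 2 and [m for _, m in run].count("G") == 1:
                return t, [name for name, _ in reversed(run)]
        return None

    ITEMS = [
        ("x", {"A": "G", "C": "D"}),
        ("y", {"A": "D", "B": "D"}),
        ("z", {"C": "D"}),
    ]
    print(bracket(ITEMS[:2]))    # ('A', ['x', 'y'])
    print(bracket(ITEMS))        # None: x and z are dependents only in C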
In order to keep the example simple, the following modifications of the actual procedure have been made: (i) habitat and concord information is omitted; (ii) only 6 substituent types are used; (iii) the bracketing rules are formulated rather crudely.

A rather lazy cat chases falling leaves and butterflies; of course these can easily get away.

We will assume that the participation class entry for each word has been obtained by dictionary look-up. Bracketing is carried out according to the following rules:

Rule 1. Starting at the last item before the punctuation stop, whether simple or a compound obtained by previous bracketing (see below), read backwards in each column in turn, looking for the longest continuous sequence immediately preceding the stop in which one item is governor and the rest dependents. As the priority rating of the columns is from left to right, the first such sequence is taken (even if there is a longer one in a later column). For example, any continuous run of entries containing exactly one G and otherwise only D constitutes a bracket group; but no brackets starting from c can be obtained in the following:

        A   B   C
    a   -   D   G
    b   D   G   -
    c   D   -   D

Rule 2. When Rule 1 suggests a bracket in column 1, if the item marked as governor (i.e. the conjunct substituent) is immediately flanked by two items marked as dependent, treat the three as a group; a conjunct governor flanked by a dependent on both sides will thus bracket, while one flanked on one side only will not.

Rule 3. If, under Rule 1, in proceeding backwards from a group already made no brackets can be found, take from the beginning of the existing group the smallest number of items compatible with its remaining a group, and try backwards from the last of these before trying again with the reduced following group.

The column in which a bracket is made represents the type of the resulting compound substituent. Reference to Table I gives the appropriate participation class entry, and the group with this new entry is treated as a single item in further bracketing. We now have the whole sentence bracketed as follows:

((a (rather lazy) cat) (chases (falling (leaves and butterflies;)))) (of course these (can (easily get away.)))

When we come to apply the methods described to a particular language, the following procedures have to be gone through. So far, these have been effected only for English, though plans are now made to apply the same methods to several other languages. First, we have to fix upon a suitable set of substituent types to describe the constructions met with in the given language; in making this choice, considerations of informational efficiency will play a large part, for in most languages it is possible to find a few examples of substituent types which it is not practically expedient to recognise because of their rarity or difficulty of recognition. Next, given our substituent types, we have to prepare participation classes based on them, to act as the syntactic parts of the dictionary entries for each word in our dictionaries.
Main paper: introduction: This paper describes briefly a new model of grammatical description, devised originally with the purpose of providing a better tool for the machine processing of language material. Particular attention has been given to the advantages likely to accrue, for this purpose, from exploiting to the full whatever features could be found in common between all languages. The need to devise a new model became apparent when it was found how little attention had been given in the past to this point.It seems that previous models of grammatical description fall into four main classes. The oldest of these, which has been called by Hockett (6) the "Word-and-Paradigm" or WP model, originated in antiquity, and is well adapted to the description of inflected languages like Sanskrit, Greek and Latin. It is however, despite Robins' (3) recent reconsideration, far too limited in scope for our purposes. The next, the "Item-and-Process" or IP model in Hockett's terminology, works with the notion of items (word or short phrases) being modified by various processes (suffixation, vowelchange, root-replacement, etc.) to produce all the various forms of the language. This model was first clearly systematized by Sapir (9) ; it is more adaptable than the WP model, but still not sufficiently general. The "Item-and-Arrangement" or LA model was evolved by descriptive linguists; it aims to describe the whole grammar of a language in terms of lists of items and of the ways in which they can be arranged (i.e. constructions). This model lends itself better to expressing the basic hierarchical structure of sentences, first recognised clearly by Husserl (7) , than the previous models, and is somewhat easier to formulate mathematically; but it runs into numerous difficulties which have led to the formulation of yet another type of model. This is the one originated by Harris (4) and greatly strengthened by Chomsky (2); we may call it the Kernel-and-Transformation or KT model. It takes as its starting point, a number of simple standard sentence forms, called "kernels", and seeks to derive every possible correct sentence in the language by developing these kernels through a mechanism of substitution of their components by other kernels. This model has a number of advantages, notably in the description of what I here call interrupted substituents, but it is very refractory to mathematical formulation. This model has received a more extensive application to problems of handling language material and mechanization of language processes than the others. This work is especially associated with the University of Pennsylvania, where it has been ingeniously used by Hiz (5) and by Kaufman (8) . Unfortunately the great complexity produced by these efforts, even though they have been confined (98026) to the description of a single language (English) casts some doubt on the effectiveness of the KT model for our purposes.The new model which I propose here, for the purpose of meeting the needs of machine translation better than those previously have done, will be set out so far as possible in an axiomatic manner, in order to emphasize its internal structure. The task of demonstrating in detail its application to the description of actual languages lies outside the scope of this paper. 
Evidence that it is so applicable comes from two sources: first, the operation of machine programs embodying ideas drawn from the model for the syntactic analysis of texts; and second, descriptions of various particular languages capable of being compared with each other and with more conventional descriptions. Evidence of both sorts is planned for publication in due course; here, I shall confine myself to exposition alone.First, I shall define an operation called "replacement" by which parts of utterances may be may be substituted by other parts: this does no more than re-state familiar Ideas. Second, I shall use this operation to derive a rigorous definition of grammatical function (in a partly mathematical context this term, unfortunately, is too liable to be misunderstood, and must be replaced: I use the term "paradigm", in an analogically extended sense, for this purpose). Third, I show that the set of all possible paradigms (functions) constitutes a well-defined mathematical system, namely, a lattice; this makes possible major simplifications in the description of syntactic phenomena. Fourth, I shall use the conceptual apparatus to hand to circumscribe the possible diversity of syntactic forms observable in any language, and thereby show how a uniform system of categories can be applied to all languages. Lastly, I shall discuss how the ideas developed can be applied to the mechanical programming of syntactic analysis. the concept of replacement 2.0. replacement in a closed language: We consider a closed language as being a closed corpus consisting of a set of utterances; each utterance is a sequence of signs having a beginning and an end. The signs in any such sequence are understood to have a unique simple ordering. Each sign may be a written letter or ideograph, or a sound; there are thus various possibilities for the realization of the signs, and in some realizations it may be necessary to resort to special conventions in order that they may be unambiguously assigned a simple ordering; this however, is a matter which at the present level of (98026) discourse need not be pursued in detail.Any subset of the signs constituting an utterance, presented in the same order in which they occur in this utterance, is called a segment, if S is a segment of an utterance U, and if between the first and the last sign included in S, every sign in U is also a sign in S, then S is said to be an uninterrupted segment; otherwise, S would be interrupted. We shall have occasion to use the notion of a zero segment, that is, one consisting of no signs; just as the empty set, in set theory, is understood to be a subset of every set, so we shall admit the presence of an empty sub-segment in every other segment. In all the statements which we shall make about segments, the possibility that a zero segment may be referred to should be borne in mind.If an interrupted segment consists of n subsegments, each of which is itself uninterrupted, the latter will be called fragments to distinguish them from general subsegments, which may be themselves interrupted. A fragment, being itself a segment, may also on occasion be a zero segment. We shall use, as a general form for denoting a segment, ...F 1 ...F 2 ..., where F 1 and F 2 are fragments of an interrupted segment. Whenever such a form is used, it must be understood that though two fragments are shown, more than two fragments may in fact be present.A segment ...F 1 ...F 2 ... is said to be replaceable by another segment ...F 1 ...F 2 ... 
if the following two postulates are fulfilled: (a) for any X, Y, Z such that XF 1 YF 2 Z is an utterance in the language, XF' 1 YF' 2 Z is also an utterance in the language;(b) for any ...G 1 ...G 2 ... in the language, of which ... F 1 ... F 2 ... is a subsegment, there is at least one utterance of the form XF 1 YF 2 Z in the language, which does not contain ... G 1 ... G 2 ...The second condition is required to avoid saying that one segment is replaceable by another if they are only so when they are parts of larger ones.A closed language, as defined above, is a rather unsatisfactory model of actual speech. At the very least it needs to contain an enormous amount of material if it is to provide examples of all possible constructions. Furthermore, in a strict sense, the set of "possible constructions" in any actual language is an open one in that any speaker may coin a new construction without thereby ceasing to speak the given language. We therefore need to pass over from consideration of closed languages, to take account An open language, is, like a closed language, considered as a set of utterances. But whereas in a closed language these utterances form an ostensibly given corpus, which can be examined to determine whether a given sequence is or is not an utterance, in an open language the criterion is, whether or not a given sequence is accepted by a competent speaker as a correct utterance in the given language. The definition of replaceability given above, needs modification in three particulars, in order to adapt it for use in an open language. We have to re-define the term "segment"; we have to consider carefully what is implied by a sequence being an utterance; and we have to re-phrase the definition of replaceability.In effect, we are trying to substitute, for the closed corpus of a closed language, the behavioural response of a competent speaker, to define the compass of an open language. This being so, we cannot simply regard a segment as a sequence of signs, unless we admit as "signs" not only written marks and spoken sounds, but any sensory clue available to the competent speaker during the act of communication. We therefore regard all such clues as imaginary diacritics which could be added to the manifest signs composing a given utterance or segment. In other words, we allow our competent speaker to annotate any text before we subject it to further analysis.The scope of such annotations may be illustrated by the example of the English phrases 'you and not me' and 'shorthand notes'. Both, as they stand, are sequences of written letters, both can be parts of utterances in English, and both contain the uninterrupted sequence 'and not'. By the definition above, this sequence is certainly a segment, of which both phrases contain exponents. We rely on the annotations or diacritics which a competent speaker might add, to recognise that the two letter-sequences are effectively different. This might, for example, be done by underlining the first and last letters of every word, in which case the two sequences would be 'and not' and 'and not'. 
The particular device adopted does not matter, provided (a) it can be non-contentiously performed, and (b) it leaves the annotated text capable of complete analysis on the assumption that, if a segment S is replaceable by a segment T, S and T are sufficiently identifiable by the sequences of signs (including the diacritics) which they contain.If this principle is applied to actual texts in actual languages, itis possible to find cases where it breaks down. These are cases of irreducible ambiguity. An example is the sentence 'Iceland fish catch drops': it is more than a competent speaker can do to annotate this text so as to distinguish non-contentiously all the meaningful segments in it. For it can bear two distinct meanings, which only a fuller context could disengage: either it concerns animal behaviour, or the fishing idustry, according as 'catch' or 'drops' is taken as the verb. It is therefore necessary to prescind such cases of irreducible ambiguity in the rigorous analysis of open languages.Whereas in a closed language, every sequence of signs either is or is not an utterance, there are four cases which may have to be considered in regard to open languages. These are exemplified by the following phrases:1. 'It's a nice morning'; This is an utterance in English.; Not an utterance; the correct form is 'I'm hungry'. 3. 'Lake three stand'; Not an utterance, no comments occur.The definition given for replaceability in a closed language was based on two postulates. The first of these, when its terms are interpreted in the light of what has been said above about segments and utterances, can stand. The second, aimed to exclude recognition of replacement between segments which are "really" parts of larger segments, between which the replaceability relation is more usefully posited, requires amendment. For in an open language it is no longer sufficient, in order to exclude this situation, to find one instance to the contrary, or even a closed set of instances. Thus, in English, we could say that 'ga' is replaceable by 'ra', adducing instances in which 'gain' is replaceable by 'rain'; this is not any the less silly because we can add a few other instances of the same replacement, such as 'gate' being replaceable by 'rate'. Only if there is an open set of such cases can we count the replaceability as genuine.We are therefore led to the following revised definition:Def.2. a segment ...F 1 ... ...F 2 ... in an open language L is replaceable by another segment ...G' 1 ...G' 2 ... if and only if:(a) for any X, Y, Z in L such that XF 1 YF 2 Z is an utterance in L, XG' 1 YG' 2 Z is a corrigible sequence in L. (b)for any two distinct utterances XF 1 F 2 Z the corresponding XG' 1 YG' 2 Z are also distinct, and(c) for any segment ...G 1 ... G 2 ... containing ... F 1 ... F 2 ... as a proper subsegment, there is an open set of utterances XF 1 YF 2 Z not containing ... G 1 ... G 2 ... total paradigms 3.1 equipollence: As defined above, replaceability is an asymmetrical relation. It can happen that segment S' can replace another segment S while S cannot replace S'. For instance, we can readily show that in English 'them' is replaceable by 'gypsies'. But we cannot replace 'gypsies' by 'them'. For if we make this replacement in the utterance 'the gypsies came', we get 'the them came'. If this is accepted as corrigible, its correction can only be 'they came'. But, 'gypsies came' is also an utterance, dis- 98026tinct from "the gypsies came". 
If we make the proposed replacement we get 'them came' which is corrigible, but again corrects to 'they came'. It is not therefore distinct from 'the them came' according to Def.l. The replacement therefore fails to satisfy postulate (b) of Def.2.Nevertheless, it is easy to define a symmetrical relation, based on the replacement idea, as follows:Def.3. two segments S, T in L are said to be equipollent if S is replaceable by T, and T by S, in L.This relationship of equipollence is analogous, at the syntactic level, to that of "replacement" as defined by Jones (10) in regard to semantics. Like the latter, equipollence is a similarity relation; for it is reflexive (every segment is equipollent with itself), symmetrical (by definition), and transitive (for if S is replaceable by T, and T by U, then S is replaceable by U; and conversely). It therefore divides the class of segments in given language into classes, whose members share common syntactical properties, just as Jones' "replacement" divides the class of lexemes into classes whose members share a common "meaning".However, not all sequences in a given language are either utterances or segments of utterances; likewise, not all segments are recognisable, either by a "competent speaker" or by a trained linguist, as meaningful units of speech. In order to be able to isolate those segments which can be profitably used as units in the syntactic analysis of a text, we need to define a certain subclass of the domain of equipollence which shall contain only those segments which are useful for this purpose.Def.3. a segment S, interrupted or not, is said to be a substituent in a language L if there is at least one segment T in L, distinct from S, such that: (a) T is equipollent with S (b) there is no sequence U of segments U 1 U 2 ,... such that (b1) for every U 1 there is at least one segment V j in L distinct from and equipollent with U l , and (b2) the sequence U is corrigible to T.The effect of this definition is to recognise as a substituent only segments which are equipollent with simple substituents, i.e. those which are unable to be further divided into substituents. Roughly speaking, 98026this allows any meaningful unit, up to a sentence, to be a substituent, since sentences are in general equipollent with single units like "yes" or "no", and in all languages there exist sentences of so formal and stereotyped a character as to be admissible as simple lexemes. For instance, we do not get a true picture of the meaning of 'How do you do?' if we analyze it into its component parts; such a sentence, while certainly equipollent with genuine sentences like "How is your stomach?" is a perfectly good candidate for inclusion as a whole in a dictionary. It is convenient for some purposes, also, to recognise any sequence of two or more sentences as equipollent with a single sentence; if this is done, the restriction (b) in Def. 3 is hardly needed. However, we aim eventually to consider the syntactic relations between the sentences in a paragraph or conversation, and for this purpose we must make a fairly clear distinction between "sentences" and higher units which Def. 3 succeeds in doing.The reason for introducing corrigibility into the postulate (b2) is to allow for words like the French 'au' which while apparently simple substituents (in that they cannot be analysed as they stand into smaller substituents) are inexpedient to admit as such, because in reality they are compounded of units having separate and definable functions in the sentence. 
The reason for introducing corrigibility into the postulate (b2) is to allow for words like the French 'au' which, while apparently simple substituents (in that they cannot be analysed as they stand into smaller substituents), are inexpedient to admit as such, because in reality they are compounded of units having separate and definable functions in the sentence. But, of course, there exists the sequence 'à le' which, though not a segment in French, is certainly corrigible to 'au', and which is a sequence of segments each equipollent with at least one other ('à' with 'dans'; 'le' with 'un'). The reason why we do not want to have to treat 'au' as a single substituent is that in an expression such as 'au fond' we would like to recognise as substituents not 'au' and 'fond' but the more logical pair 'à' and 'le fond'. In bracket notation we would wish to analyse 'au' into 'à (le...)'.

The following supplementary definition therefore suggests itself for use in connection with substituents:

Def.5. a substituent S in L is said to be compound if it is the correction* of a sequence U of segments U1, U2, ... such that (a) each Ui is a substituent in L, and (b) the sequence left on replacing any one Ui by the zero segment is also a substituent in L.

* Note here that, by Def.1, every segment is its own correction.

In such a case, the segments U1, U2, ... are the components of S.

We have already mentioned that equipollence is a similarity relation dividing any subclass of its domain, and in particular the class of substituents, into equivalence classes. Members of any of these classes would be said, by linguists, to have the same syntactic function. However, the following definition proves to be more amenable to our purposes:

Def.6. the total paradigm of a substituent S in a language L is the set of all substituents in L which contain either S, or another substituent equipollent with S, as subsegments.

It is part of the method of this work to replace the unsatisfactory unit of the "word", already abandoned by most linguistic schools, by the carefully defined concept of "substituent". It is this replacement which justifies the use of the term "paradigm" in this sense. It will shortly appear that those members of the total paradigm of a "stem" (which, in an inflected language, is in general a simple substituent) which are "words" in the conventional sense form a set almost identical with the "paradigm" in the traditional linguists' sense.

It is evident that if two substituents S, T are equipollent, then according to Def.6 they must belong to the same total paradigm. Moreover, if T is not equipollent with S, then either (a) T contains S as a proper sub-segment, in which case S, which is contained in the paradigm of S, is not in the paradigm of T; or (b) S contains T, with the complementary effect; or (c) neither S nor T contains the other, in which case both paradigms contain substituents not in the other. Therefore, if S, T are not equipollent, they belong to different total paradigms. Thus, the total paradigms defined in Def.6 are indeed equivalence classes under equipollence.

The relation between total paradigms and the syntactic functions of the linguist is now clear. If any two substituents belong to the same paradigm, then they share a common function. If they belong to different paradigms, they have no common function, unless their paradigms have a nontrivial union, in which case the latter provides them with a common function. We therefore postulate a one-to-one correspondence between syntactic functions and total paradigms; the first are the properties which characterize the second as classes. However, in formal statements I shall prefer the term paradigm to function, on the grounds that the latter word has too many other uses not entirely excluded by the context.
I shall normally drop the epithet "total" before "paradigm", where no confusion is likely to follow.

Thus, while we have this simple relationship between our total paradigms and the relation of equipollence, their structure under the relation of replaceability is somewhat more complex. It may be reduced to the following five Lemmas:

1. If a substituent A replaces both B and C, where B, C are not equipollent with each other, then the paradigm of A is the set union of those of B, C.
2. If a substituent A consists of two or more segments, each a substituent, B, C, ..., the paradigm of A is included in each of those of B, C, ...
3. If there is in a language L a substituent Z such that any other substituent Z' containing Z is equipollent with Z, then the paradigm of Z is contained in that of every substituent.
4. If there is in L a segment which can replace every other substituent in L, then this segment is a substituent in L, and has a paradigm including those of all other substituents in L.
5. The paradigm of any substituent is unique (provided we take due account of the procedures mentioned in Sec. 2.1).

The substituent Z mentioned in Lemma 3 is exemplified by a complete sentence not forming part of any other sentence and associated only by concatenation with other segments in an utterance. Formally we may state the following:

Def.7. A substituent in a language L is a free sentence of L if it is a component of an utterance equipollent with the whole utterance.

The segment mentioned in Lemma 4 is exemplified by a sign of omission such as '...', or a word such as 'thingummy' used to replace any word which a speaker will not trouble accurately to recall.

The above five lemmas are sufficient to prove that, if we assume the existence of the substituents postulated in (3) and (4), the system of all the total paradigms is a lattice under the set-inclusion relation. For if S, T are any two non-equipollent substituents, their respective paradigms have, potentially, a join defined by (1) and a meet defined by (2), while the bounds of the lattice are provided by (3) and (4); these points satisfy the definition of a lattice (see Birkhoff (1)).
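As a toy illustration of the lemmas, total paradigms can be modelled as plain sets of substituents ordered by inclusion, with the join of Lemma 1 as set union and the meet of Lemma 2 as a lower bound (here, the intersection). A minimal Python sketch, with invented data:

    # Toy illustration only: the two paradigms below are invented.
    P_noun = frozenset({"cat", "dog", "the cat"})
    P_adjunct = frozenset({"cat", "dog", "the cat", "lazy", "lazy cat"})

    def join(a, b):
        # Lemma 1: the paradigm of a substituent replacing both is the union.
        return a | b

    def meet(a, b):
        # Lemma 2 makes a compound's paradigm a lower bound of both;
        # for sets under inclusion the greatest such bound is the intersection.
        return a & b

    assert meet(P_noun, P_adjunct) == P_noun      # P_noun is included in P_adjunct
    assert join(P_noun, P_adjunct) == P_adjunct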
4. CONSTRUCTION OF THE GENERAL SYNTAX LATTICE

Having established that the system of paradigms of substituents in any language must form a lattice, we have now to show what lattice is in fact formed. This could be done empirically, by applying the definitions given above to a sufficient body of texts in a given language. Even with the best mechanical aids, this would be a virtually impossible task, even for one language. Or, it could be done intuitively; any intelligent person can learn how to do this for a language he knows well enough, but the results carry conviction only to one who has himself gone through the procedure. I shall therefore construct the syntax lattice, step by step, starting from the free clause and introducing progressively finer syntactic contrasts, till it is sufficiently developed to serve as a model of actual language; I shall then show how it can be used in the design of a syntax analysis program.

We have already seen that, in a lattice representing the total paradigms of substituents, if a point A 'includes' (or 'precedes') a point B, a substituent whose paradigm is A can form part of a substituent whose paradigm is B. We can assume that the simplest imaginable 'language' has the capacity for some kind of syntactic contrast, and has sentences made up of smaller units. The syntax lattice for such a rudimentary language would therefore be as shown in figure 1. Even in this simple schema, the following lattice properties are illustrated:

(i) the side-to-side symmetry of the lattice, i.e. the complementarity of O and S. This distinction we interpret as the subject-predicate dichotomy.

(ii) the top-to-bottom asymmetry of the lattice, i.e. the partial-ordering relation (inclusion of paradigms defined as sets). At this stage, this has only a trivial interpretation, but later we shall correlate this with the governor-dependent distinction.

(iii) the two binary operations definable in every lattice, namely the join and meet of any two points, denoted by ∨ and ∧ respectively. As we have already seen, we are committed to interpret a ∨ b as the syntactic function of a substituent capable of being used either like those of function a or like those of function b; and a ∧ b as the function of a substituent having components of functions a and b.

The lattice just considered, with its two principal points O, S, gives us our basic schema. But it is clear that it is far too primitive as it stands for its use to be extended from our imaginary language to any real one. To obtain a more adequate system, the basic schema is therefore enlarged by the addition of points representing new paradigms, which include, in their capacity as sets, the points S and O respectively. This gives us the lattice shown in figure 2. It is obvious that if we are to retain the substantive-operative distinction represented by the two sides of the lattice, we cannot extend the system in any other way, for the more refined classification we require must represent a sophistication of this basic division. The two new side points SA and OA represent substantive adjuncts and operative adjuncts respectively. For if 'book' has the paradigm S and 'new book' is equipollent with 'book', the paradigm of 'new' includes both 'book' and 'new book', whereas that of 'book' does not include 'new'; such examples, in the light of Def.6, show that SA and OA stand, very roughly, for adjectives and adverbs. Their join gives us the new indeterminate adjunct IA, and this also includes the principal indeterminate I. The lattice now has seven points.
This extended lattice is still, however, inadequate, and we need to add, above SA, OA, IA, a further series of paradigms SB, OB, IB to represent subadjuncts, that is, for words whose use is restricted to qualifying other adjuncts. This gives us a ten-point lattice; but this still fails to account for certain syntactically important types of words, such as prepositions, conjunctions, etc. Prepositions could, without too much arbitrariness, be classified as postverbs, and assigned to the point OB, but conjunctions (connectives, as the logician understands them) are still unaccounted for. Since the join operation is the lattice equivalent of the logical and/or connective, we can expect to represent conjunctions by an additional point at the very top of the lattice, IC. Indeed, there are conjunctions in some languages which can be used to connect words of any syntactic function, and whose paradigm therefore by Def.6 includes all other paradigms. But this does not go for all "conjunctions". There are those which specifically connect clauses rather than separate words; these have a paradigm ZA immediately including Z. The lattice now has twelve points.

Figure 3. Complete Primary Lattice

Preliminary empirical investigation has shown that, to a surprising extent, this system contains the hard core of the general syntactic classification we need. It is therefore called the primary syntax lattice; that point of the lattice representing the paradigm (i.e. syntactic function) of a substituent is called the lattice position indicator or LPI of the substituent.

Having set up the classification represented by the primary lattice, we can now start to use it to find the function of a compound substituent. I here introduce the first algorithm derived from the theory, which I call the "meet algorithm". This is simply Lemma 2, in the form: the paradigm of a compound substituent is the meet of those of its components.

Let us see how this algorithm works out in the lattice of fig. 3. The meet of the points SA and S, for instance, is S. That is to say, a group consisting of a substantive and a substantive adjunct has the function of a substantive (a man with long legs: equipollent with a man); similarly, the meet of O and OA is O (you have finished it? I have!). For either side of the lattice, therefore, the algorithm works in a satisfactory way and gives linguistically acceptable results; clearly, if we put together units with a common character, we should expect the group to have the same character.

The meet algorithm fails, however, when we consider the meet of any two points on opposite sides of the lattice. For such a meet can only be the point Z, whereas in fact a great variety of syntactic functions can be discharged by substituents having components between which the substantive-operative contrast is in evidence. Given this complementarity, we can only describe Z, very weakly, as the property of having a function different from those of its components. This is serious, since we began by interpreting Z much more strongly, as the paradigm of a free clause.

In order to deal with this situation, we make use of the distinction between an endocentric substituent, which is equipollent with one or more of its components, and an exocentric substituent, which is not equipollent with any. What we have found is that the meet algorithm, applied to the simple primary lattice of fig. 3, works for endocentric but fails for exocentric substituents.
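The meet algorithm is easily mechanised. The sketch below, in Python, encodes one reading of the inclusion order of the twelve primary points; since the figures are not reproduced here, the exact cover relations are an assumption made for illustration.

    # Point -> points immediately above it (paradigm inclusion); assumed wiring.
    COVERS = {
        "Z": {"S", "O", "ZA"},
        "S": {"SA", "I"}, "O": {"OA", "I"},
        "SA": {"SB", "IA"}, "OA": {"OB", "IA"}, "I": {"IA"},
        "SB": {"IB"}, "OB": {"IB"}, "IA": {"IB"},
        "IB": {"IC"}, "ZA": {"IC"}, "IC": set(),
    }

    def up_set(x):
        # All points y with x <= y.
        seen, stack = set(), [x]
        while stack:
            p = stack.pop()
            if p not in seen:
                seen.add(p)
                stack.extend(COVERS[p])
        return seen

    def leq(x, y):
        return y in up_set(x)

    def meet(x, y):
        # The meet algorithm: greatest lower bound of the two LPIs.
        lower = [p for p in COVERS if leq(p, x) and leq(p, y)]
        return next(p for p in lower if all(leq(q, p) for q in lower))

    assert meet("SA", "S") == "S"   # a man with long legs ~ a man
    assert meet("O", "OA") == "O"   # you have finished it? I have!
    assert meet("S", "O") == "Z"    # opposite sides collapse to Z

With this wiring the endocentric cases behave as in the text, while any pair drawn from opposite sides of the lattice collapses to Z, which is exactly the failure for exocentric groups just described.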
We must now develop the lattice schema to take account of exocentric substituents; in fact, the strength of the theory largely rests in its capacity to give an adequate and precise account of the nature of exocentric substituents.

The key to this development is to make use of the lattice relation of duality. Just as all the other points in the primary lattice represent possible parts of what we at first interpreted as a clause, represented by the lower bound Z, so, now that we have to weaken the interpretation of Z to that of an exocentric substituent, we need lattice points to represent all those substituents of which an exocentric group Z is a possible part. Of this new set of points, by Lemma 2, Z is thus the upper bound. And since we may expect a substituent of any function to have components replaceable by exocentric groups, the new set of points must contain all those, besides Z, which we have already allowed for in the primary lattice. What we have to do, therefore, is to add to the primary lattice its own dual (consisting of the same points with the converse inclusion relation between them), the point Z being in common between them. This system, shown in figure 4, is still a lattice; it is divided up into two mutually dual sublattices: the primary lattice which, as we have seen, is concerned with endocentric substituents, and the secondary lattice, as we may now call it, which is concerned with exocentric substituents.

This lattice, however, is still not the one we want; the meet algorithm applied to points in the primary sublattice still gives Z as the result for all exocentric groups. To avoid this, we have to accept the existence of further distinctions. There is not, for instance, just one kind of operative, O: there must be different kinds, each determining a different function for the exocentric groups which it can enter into. Thus, to the different kinds of exocentric groups represented by the dual secondary lattice hanging from Z, there must be added a parallel set of distinctions hanging from every other point of the primary lattice, representing the different kinds of operatives, of operative adjuncts, and so on.

The resulting system, when fully developed, as will be clear to those acquainted with lattice theory, will be the direct product of the primary lattice of fig. 3 with its dual. This lattice is too large for convenient setting out here. But we can take advantage of the fact that in actual practice we can do with a less complete classification of functions for compound substituents than is required for their irreducible components. Thus, so long as we are only interested in compound substituents, we can replace the 12-point primary lattice by the 5-point lattice shown in the figure.

(Figure: Simplified Self-dual-product lattice.)

This, by the way, is the smallest possible self-dual-product lattice.

We can now see how the meet algorithm works out in this fuller version of the syntax lattice. For endocentric groups there is no problem; for by definition one of the points concerned must include the other, which, as before, is their meet and defines the function of the whole group. But for exocentric groups the situation is more complicated.

Some exocentric substituents will have a meet in the lower ideal of ZZ, that is, in the lowest exponent of the secondary lattice. Now we have already indicated that to these points we attach the same syntactic functions as are attached to the corresponding points in the primary lattice.
However, if we make no modification in the meet algorithm, we shall be faced with the difficulty that if any such substituent is in turn included as part of a yet larger one, the function of the latter can be represented only by a point yet lower in the lattice; in fact, if an exocentric group forms part of another exocentric group, the latter will be assigned the function Z.IC, which is that of a conjunction and is unlikely to be correct. To cope with this situation we make use once again of the top-to-bottom contrast in the lattice. What we need is to get from a point in the lower ideal of ZZ to a point in its upper ideal. Because of the duality relation between these two sublattices, their respective points are already in correspondence; in the notation used in figure 6, those in the upper ideal have letter-pairs ending in Z, and those in the lower ideal have letter-pairs beginning with Z. Therefore, we make the rule that whenever the meet algorithm leads us to a point in the lower ideal of ZZ, we replace this, as the result of the algorithm, by the corresponding point in the upper ideal of ZZ. This rule I call the polar algorithm.

On this basis, exocentric groups can be divided into three classes, all of which share the property that their functions differ from those of any of their components. In the first class, the function of the whole group is ZZ; in the second, it is a point in the lower ideal of ZZ; in the third, it is any other point in the lattice. Linguistically, these three classes represent different degrees of "completeness"; the first are complete clauses, the second may be called subordinate clauses, while the third class contains groups too incomplete to be called clauses at all.
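A minimal sketch of the polar algorithm, continuing the Python example. It assumes a 5-point primary lattice whose point names Z, S, O, A, T and ordering are invented for illustration, since the figure is not reproduced here. Product points are pairs, the second coordinate ordered dually; pairs beginning with Z form the lower ideal of ZZ and are swapped into the upper ideal.

    import itertools

    UP = {"Z": {"S", "O"}, "S": {"A"}, "O": {"A"}, "A": {"T"}, "T": set()}

    def leq(x, y):
        # x <= y in the assumed 5-point lattice.
        return x == y or any(leq(z, y) for z in UP[x])

    def prod_leq(a, b):
        # Componentwise order, with the second factor dualized.
        return leq(a[0], b[0]) and leq(b[1], a[1])

    POINTS = list(itertools.product(UP, repeat=2))   # 25 product points

    def meet(a, b):
        lower = [p for p in POINTS if prod_leq(p, a) and prod_leq(p, b)]
        return next(p for p in lower if all(prod_leq(q, p) for q in lower))

    def polar(p):
        # Swap a lower-ideal result ZX into the corresponding upper-ideal XZ.
        return (p[1], "Z") if p[0] == "Z" else p

    print(polar(meet(("S", "Z"), ("O", "Z"))))   # ('Z', 'Z'): a free clause
    print(polar(meet(("S", "Z"), ("O", "S"))))   # ('Z', 'S') -> ('S', 'Z')

The second example shows the point of the rule: a complete but subordinate clause, whose raw meet falls in the lower ideal, comes out with a usable function (here, that of a substantive) instead of sinking further down the lattice.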
the governor-dependent relation:

In the system, represented in a simplified form in figure 6, we can interpret certain of the lattice relations in more detail than has been shown above. One additional descriptive contrast which the theory thus gives us is that between the governor and the dependent of any compound substituent. One finds, in any such group, that there is one component which 'colours' or 'gives tone to' the whole group, the others having a more passive role. Thus, in any substituent with three or more components, one will stand out from the others; this we call the governor, and the other components are the dependents of the substituent. (Note: dependents are "of" substituents, not "of" the associated governors.) If all available examples of a given substituent type have only two components, we must identify the governor from its lattice properties. Now in terms of the lattice, this works out differently in the three cases of exocentric groups, endocentric groups, and conjunct groups.

In a free clause, it is clearly the verb-group which gives its colour to the whole; thus we make O governor over S. The same rule will serve for all exocentric groups in which the primary functions of the components show this difference; where they do not (as for instance in a group whose components are SZ and AA), we may go by their secondary functions (in the case mentioned, as these are respectively Z and A, this means treating it as if it were an endocentric group). In the complete 144-point lattice, difficulty may also arise from components with I functions: in any particular context, these may behave in their S or in their O capacity, and this must be ascertained first.

In endocentric groups, it is clearly the component which has the same function as the whole which colours the group; thus, the component represented by the lowest point on the lattice is the governor. In this case, then, the dependent-governor relation is straightforwardly the inclusion relation in the lattice. Clearly, too, while we can have such a group with several upper points, the meet algorithm forbids the existence of an endocentric substituent with more than one lower bound, namely, the governor.

In conjunct groups, the dominant component is the conjunction itself; though, standing as it does at the top of the lattice, it does not affect the group's function. Thus, we adopt the convention that, when any point which is a join in the lattice is interpreted as the join-relation itself, this shall mark the governor of the group.

A yet more important place in traditional linguistic description belongs to the subject-predicate contrast than to the governor-dependent contrast, and this too can be derived very simply from the present theory. We have seen that, by means of the polar algorithm, a scale of increasing completeness of exocentric groups can be defined. Incomplete clauses have their meets outside the lower ideal of the point Z.Z; complete but subordinate clauses have meets within this ideal, which the polar algorithm transposes into the upper ideal; a free clause has its meet actually at Z.Z itself. Substituents of this last type, and these alone, are subject-predicate groups.

This theory of syntax therefore provides a representation for the subject-predicate pattern in language: it is that of an exocentric substituent whose meet is at the point Z.Z, one of the two non-bounding vertices, or "central" points, of the lattice. It can also be thought of as the result of applying the polar algorithm as a stop rule: when in the build-up of a sentence structure we come to this point we can stop, but not before.

This interpretation does not at first sight seem to have much to do with the logicians' notion of subject and predicate. These have been thought of either grammatically (as "sentence with a main verb") or, following Russell, formally (as a formula of the type xP subject to extension either by the addition of further terms y, z, ... to the x, or by adding quantifying restrictions to the x). As to the grammatical interpretation, our theory explains rather the notion of mainness than of verbness; it shows us how to build up the distinction between "main" and "subordinate" verbs (all verbs being initially merely operatives). As to the logical interpretation, the theory does not define the relation of predication (though it can cope with the distinction between monadic, dyadic, etc. relations), but it does explain why, however predicative logic is developed, the symbol P remains unique and unchanged; for this point, our Z.Z, is one of the four vertices of the lattice which can be transformed, by inversion of factors in the lattice, into the upper or the lower bound, just as in logic P is the point from which, no matter how far the x-sequence is extended, the whole system of relations always hangs.
It is in this sense that it is possible to say that, starting from acknowledged linguistic notions, which we define more exactly than heretofore, we can arrive at an account of this important logical form, the subject-predicate sentence.

substituent types and participation classes:

We must now show how the coding used in the empirical procedures currently being tested on the Cambridge computer is derived from the theory. The actual program is fully discussed elsewhere (11), but it will be illustrated here by a simplified example.

The full product lattice has 144 elements, and in principle substituents with any pair of these functions may form a group, or compound substituent. As the model does not take the order of the components of a group into account, we thus have ½(144²) possible different groups. The function of a group is, however, determined by the meet algorithm, and the group formed by any one of the 10368 possible pairs of substituents must therefore be defined by a point in the lattice. There are thus only 144 different meet-points. 12 of these points, moreover, are in the lower ideal of ZZ, and are not accepted as they stand but converted by the polar algorithm. This leaves us, therefore, with 132 result-points representing different kinds of group; these will be called substituent types.

By using this set of substituent types we can extend our classification system. In setting up the syntax lattice we treated it as a schema for classifying substituents according to their functions. The function of a substituent is naturally related to its behaviour in groups of substituents, but we did not initially attempt to classify substituents according to the kinds of group in which they can figure. It will now be clear that we can give more information about a substituent if we take what we may describe as its grouping possibilities into account. These are derived from the lattice in a straightforward way: for a substituent with a particular function we list the set of result-points which can be reached when operating on a pair of substituents, one of which is the substituent in question. We thus give, in terms of their functions, the kinds of group in which the substituent can participate. This information is represented by a positive mark in the appropriate positions in a 132-place entry. We will call the entry as a whole the substituent's participation class. The record is further refined by entering, for each kind of group in which the substituent can figure, whether it functions as governor, or dependent, or either.
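The participation-class record lends itself to a simple table-driven representation. The Python sketch below uses six substituent types (A-F), as in the simplified example worked through later in the text; a full entry would have 132 places, and all entries here are invented for illustration.

    TYPES = "ABCDEF"

    class Entry:
        # Marks per type: 'G' governor, 'D' dependent, 'E' either,
        # '-' cannot participate.
        def __init__(self, marks):            # e.g. "G--D--"
            self.marks = dict(zip(TYPES, marks))

        def can_join(self, other):
            # Substituent types under which the two items could be
            # bracketed, one as governor and one as dependent.
            ok = []
            for t in TYPES:
                pair = {self.marks[t], other.marks[t]}
                if pair in ({"G", "D"}, {"E", "D"}, {"G", "E"}, {"E"}):
                    ok.append(t)
            return ok

    x = Entry("G--D--")
    y = Entry("DD----")
    print(x.can_join(y))   # ['A']: x can govern and y depend in groups of kind A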
For practical purposes, however, the size of participation class entries as just described is not very satisfactory: it is too large for convenient machine handling, or for teaching to dictionary makers. It can, however, be reduced as follows. In terms of the theory, the set of 132 result-points can be naturally divided into those which lie in the principal exponent of the primary lattice, and those which fall elsewhere, that is, into those with secondary function Z. (Except for ZZ, points with primary function Z are excluded by the polar algorithm.) It will be clear from the lattice that there are 12 points of the first kind and 120 of the second. This distinction represents the extent to which further grouping is required before the stop-point defined by ZZ can be reached. Compound substituents with secondary function Z can be 'direct' components of full clauses; those with neither function Z require at least one intermediate grouping, with the application of the polar algorithm, before they can be grouped to give a full clause. It can be argued that the information about a substituent represented by the fact that it can be a member of a group of the second kind is less useful than that representing its membership of a group of the first kind, given that from a group of the second kind one of the first kind will be reached. If we accept this argument, we can then replace the 132-place participation class by one with 12 places. For a particular substituent this replacement will give the result-points in the principal exponent of the primary lattice which will be reached by operations on pairs of substituents of which the substituent in question is a member.

Examination of natural languages shows that the 12-place participation class is an oversimplification. Thus in English there are two kinds of compound substituent which would both be given the function OA.Z, namely adverbial groups (like "almost exactly") and adverbial clauses (like "considering the circumstances"). These clearly represent different constructions and, if they were identified in classifying the behaviour of a word, would lead to incorrect grouping. We can, however, deal with this difficulty by taking into account distinctions which the theory already contains. For instance, we can at least make use of the distinction between substituents the meet of whose component functions falls in the lower ideal of Z.Z, and those whose meet falls elsewhere in the lattice; that is, between complete exocentric groups (clauses) and the rest. This division would take us from 12 functions to 24 substituent types, straightforwardly derived from the theory, and therefore of interlingual validity (though we must not expect that all of them will be represented in any particular language). In particular, we can construct model and restricted languages requiring many fewer distinctions than this. Most natural languages appear to need between 10 and 20 substituent types for their adequate analysis.

We have so far discussed grouping, or bracketing, in terms of lattice points and lattice algorithms. We must now show how this works out for actual texts. Given that each substituent type defines a kind of group, it is clear that the fact that a set of substituents can be bracketed will be represented in their respective participation classes by a positive entry for the same substituent type. This in itself, however, is not enough; the items to be bracketed must also be contiguous, and must satisfy the governor-dependent relation. The latter means that we can only bracket a group of substituents if one of them can be the governor, and the rest dependents, in the kind of group concerned. The governor-dependent relation thus acts as a restriction on bracketing. Two points should be noted: (1) as many items as possible can be combined at the same time to form a group; (2) the substituent types are arranged in a priority order from left to right: that is, we look for groups of kind 1 first.
The order loosely corresponds to the lattice structure in that 'weak' groups are found first, and full clauses last, but it is essentially a practical device for reducing the amount of effort spent in trying to find brackets: as bracketing is carried out on ever larger units, there is clearly some point in looking for the smallest groups of most closely associated substituents first.

The way in which the information contained in participation classes is used for bracketing can be illustrated as follows:

        A   B
    x   +   -
    y   +   +

This means that x can belong to a group of kind A, but not of kind B, and that y can belong to groups of kind A and kind B. If x and y occur in contiguous positions in a text and can therefore be bracketed, the resulting group must have the function A.

We will now consider a more elaborate case, with governor-dependent information given:

        A   B   C
    x   G   -   D
    y   D   D   -
    z   -   -   D

When the governor-dependent rule is satisfied, only x and y can be bracketed, to give a group with function A. x and z are both dependents in groups of type C and cannot therefore be combined.

In order to keep the example simple, the following modifications of the actual procedure have been made: (i) habitat and concord information is omitted; (ii) only 6 substituent types are used; (iii) the bracketing rules are formulated rather crudely.

A rather lazy cat chases falling leaves and butterflies; of course these can easily get away.

We will assume that the participation class entry for each word has been obtained by dictionary look-up. The sentence with appropriate entries is as follows: [table of entries missing in source]

Bracketing is carried out according to the following rules:

Rule 1. Starting at the last item before the punctuation stop, whether simple or a compound obtained by previous bracketing (see below), read backwards in each column in turn, looking for the longest continuous sequence immediately preceding the stop in which one item is governor and the rest dependents. As the priority rating of the columns is from left to right, the first such sequence is taken (even if there is a longer one in a later column). For example, in a particular column the following are all bracket groups: [examples missing in source]. No brackets starting from c can be obtained in the following:

        A   B   C
    a   -   D   G
    b   D   G   -
    c   D   -   D

Rule 2. When Rule 1 suggests a bracket in column 1, if the item marked as governor (i.e. the conjunct substituent) is immediately flanked by two items marked as dependent, treat the three as a group. Thus the first case below will bracket but the second will not: [examples missing in source]

Rule 3. If, under Rule 1, in proceeding backwards from a group already made no brackets can be found, take from the beginning of the existing group the smallest number of items compatible with its remaining a group, and try backwards from the last of these before trying again with the reduced following group. This is shown in the following example: [example missing in source]

Rule 4. The column in which a bracket is made represents the type of the resulting compound substituent. Reference to Table I below gives the appropriate participation class entry, and the group with this new entry is treated as a single item in further bracketing.

We now have the whole sentence bracketed as follows:

((a (rather lazy) cat) (chases (falling (leaves and butterflies;)))) (of course these (can (easily get away.)))
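The x, y, z table above can be run through a direct transcription of the column-priority check. This Python sketch ignores punctuation, compounding, and Rule 2's conjunct case, and simply tests a candidate contiguous sequence for one governor with the rest dependents, column by column:

    COLUMNS = "ABC"
    ENTRIES = {          # the toy table from the text
        "x": {"A": "G", "B": "-", "C": "D"},
        "y": {"A": "D", "B": "D", "C": "-"},
        "z": {"A": "-", "B": "-", "C": "D"},
    }

    def bracket(items):
        # Return (column, items) for the first column, in priority order,
        # in which one item is governor and the rest are dependents.
        for col in COLUMNS:
            marks = [ENTRIES[i][col] for i in items]
            if marks.count("G") == 1 and marks.count("D") == len(items) - 1:
                return col, items
        return None

    print(bracket(["x", "y"]))   # ('A', ['x', 'y']): x governs, y depends
    print(bracket(["x", "z"]))   # None: both are dependents in column C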
When we come to apply the methods described to a particular language, the following procedures have to be gone through. So far, these have been effected only for English, though plans are now made to apply the same methods to several other languages. First, we have to fix upon a suitable set of substituent types to describe the constructions met with in the given language; in making this choice, considerations of informational efficiency will play a large part, for in most languages it is possible to find a few examples of substituent types which it is not practically expedient to recognise because of their rarity or difficulty of recognition. Next, given our substituent types, we have to prepare participation classes based on them, to act as the syntactic parts of the dictionary entries for each word in our dictionaries.
An Introduction to Computational Procedures in Linguistic Research
{ "name": [ "Hays, David G." ], "affiliation": [ null ] }
Automatic Translation of Languages NATO Summer School
1962-07-01
PREFATORY REMARKS

Even at the 1962 Institute where these lectures were presented, it was hard to find much interest in linguistic research of the empirical sort. Two areas were far more attractive: the design and refinement of translation algorithms, and the establishment of mathematical theory for linguistics. Yet each algorithm either contains or presupposes a body of empirical fact which, in fact, does not presently exist, and theory is pertinent to linguistics and its applications only insofar as it guides the collection and organization of data. During the Institute, it occasionally seemed that the theoreticians were refusing this aid to the empiricists; some of the theorems stated, and some of the interpretations given, suggested that it is theoretically impossible for linguistic theory to guide the collection of data. The theorems are undoubtedly true, but the interpretations are indubitably false.

These lectures, therefore, have to maintain a double argument: that the adoption of systematic procedures for collection and organization of linguistic data is (i) necessary and (ii) possible. Necessary, in the sense that practical applications (such as automatic translation) cannot be developed to the point of usefulness without empirical studies that are unmanageable unless they follow systematic procedures. Possible, in the sense that undecidability theorems do not apply to the situations that arise in practise. Beyond this argument, these lectures are concerned with techniques, with the steps to be carried out in a real program of data collection. Convenience, economy, and avoidance or control of errors are, as they must be in large-scale operations, central questions. Finally, it will be necessary to emphasize, even here, the need for additional theory. The aspects of language that have been studied most widely and formalized most adequately heretofore are not the only aspects of language relevant to automatic translation, and systems of automatic translation that rely entirely on present-day theory have not proved satisfactory.

The written version of these lectures was prepared after the Institute, and the author took advantage, where possible, of what was said there by students and other lecturers. It will be obvious that he is especially indebted to Professor Bar-Hillel, whose work stimulated much more than the construction of counter-arguments on specific points. Insofar as the lectures are based on earlier publications of the same author, they draw most heavily from [1] and [2].

The courses taught in American high schools include English, History, Geography, and Mathematics. Until courses in 'Human Relations' were introduced, English and Mathematics had the special distinction of being the only courses intended to influence behavior outside the school. And, whereas Mathematics would be expected to influence behavior only in such special situations as the verification of bank accounts, English was and is expected to influence the student's behavior whenever he speaks or writes. Human Relations (and Driver Training) are also intended to influence behavior, the one universally, the other in narrowly defined circumstances. Now, everyone would agree that driving in a way that differs from the methods taught in school is dangerous (driving without using the steering wheel) or bound to be unsuccessful (driving without turning on the ignition). Likewise, doing arithmetic by nonstandard methods (as with such rules as 3+2 = 7) cannot lead to uniformly satisfactory results.
These courses teach all there is to know about their subjects. On the other hand, everyone would agree that a Human Relations course does not, because it cannot, teach everything there is to know about dealing with one's fellow men; the contrary proposition is laughable, but so is the proposition that an English course teaches everything there is to know about the use of that language, and yet that proposition is often adopted in computational linguistics, admittedly only in covert versions such as 'the best dictionaries and grammars contain much useful information'.

The best dictionaries and grammars (e.g. [3]) do indeed contain enormous amounts of useful information about English, French, and other languages. To omit them from a list of sources of linguistic data would be folly and lead to regrettable waste of time and money. But they do not contain everything there is to know about English, French, or any other language, except possibly some dead language of which only a few sentences remain. Some of the most striking examples of the gaps that can always be found are rules for selection of equivalent words and equivalent grammatical structures in translation; rules for prepositional usage and the kinds of structures (phrases, subordinate clauses, etc.) that particular words can govern; rules for ellipsis; rules for pronominal reference; rules for insertion, deletion, or translation of articles, moods, aspects. There is just not enough in all the dictionaries and grammars of English and French to make possible the immediate construction of a good system for automatic translation of one language into the other, or of a good system for automatic indexing or abstracting of either language, and therefore the school that would be up and doing is bound to be unsuccessful, in the view of the present author, for some time to come.

Mathematics and language resemble one another so closely (as several linguists have pointed out (e.g. [4]), the construction of a grammatically accurate sentence closely resembles the construction of a valid formula) that one may not immediately see why mathematics should be so definitively exposed in its treatises and languages so inadequately dealt with in theirs. The answer is nevertheless immediately clear: languages are invented, learned, modified, and kept unaltered by largely unconscious processes in human interaction. Like other aspects of human behavior, language is a matter of convention, but it must be clearly understood that these conventions are mostly unconscious. When a child comes to school for the first time he already knows a great deal about his language, and what he learns thereafter he learns partly in language classes but partly elsewhere. The purpose of language teaching (that is, classroom teaching of the child's native language) is only to reinforce certain conventions that, according to experience, are not adequately supported by the unconscious mechanisms. Its purpose can be called artificial, in contrast to the natural support of conventions outside the classroom.

Since the child does not learn his native language in the classroom, but only a few somewhat artificial elements of his language, it is not necessary for the sake of such teaching to have a thorough description of the language in systematic form. And the teaching of second languages also depends on practise, on unconscious learning, and on knowledge of the first language.
Heretofore, full-scale knowledge of natural languages has been an object of at most academic interest, and the academicians (the linguists in this case) have had special interests: phonetics and phonemics, morphology, and to a limited degree syntax. Limitation of attention to these areas avoided conflicts with neighboring disciplines, and brought the reward of quick success. In less than a century the study of speech sounds has reached the point of making automatic speech production and recognition almost realizable. Within half a century, the study of morphology and syntax has made the automatic dictionary possible, automatic parsing possible within certain clear limits and with certain prerequisites, and has made automatic methods of linguistic research almost realizable. On the other hand, areas extending toward what is generally called semantics have not been studied so closely, and the motivation to study 'usage' in detail and in extenso has been absent. Some of the conventions that define natural languages have been brought to light; others remain unconscious.

Thus the difference between mathematics and language: the mathematician begins with rules or conventions, explicitly formulated, and works out their consequences, the sentences of his formal languages. The native speaker acquires a set of conventions by overt learning, in which case the conventions are at least occasionally conscious, or by covert learning, through listening to sentences and reading them, in which case the conventions may never be conscious. The linguist's task is to obtain a full statement of these conventions. Those that are conscious can be obtained by asking direct questions or, for well-known languages, by reading reference books. Those that are still unconscious can only be discovered by inference from observation of behavior. And there is the additional difficulty that the so-called 'conscious rules' may not control normal linguistic behavior; hence even these must be verified.

The linguist has several kinds of raw material at his disposition. He can use text in the language that he wants to study, and this text may have a natural origin or it may have been produced in response to his questioning. He can use parallel texts in two languages, and again these texts may have natural origins, as when a Russian book is translated into English for publication, or they may have been produced expressly for the linguist, as when the linguist asks an informant to translate a sentence from the linguist's language into the informant's. He can interrogate informants in any way he chooses, for example asking whether a sentence that he utters is correct, or asking whether there are any words in English (if that is the informant's language) that form plurals but not with -s or -es, or asking the informant to comment, in the linguist's own terms, on the sentences of a text. He can, in principle, collect data on the meanings of texts or fragments of text, although the techniques that he would need are not well developed.

Linguistic methodology has variants corresponding to choice of raw material. If the sole object of study is a text, obtained under neutral conditions, then the methodology is called 'distributional analysis', and its only objective is to characterize the sentences of the language. The chief advocate of this method, at least in the United States, seems to be Harris [5].
It is obvious that no other objective can be attained, since all that is known about any sequence of sounds or letters is that it does or does not occur in the text. The object studied is always a finite text, and there is always a finite characterization of the sentences in a finite text. In fact, there are indefinitely many such characterizations. The hope of eliminating some of them, or even of reducing the set of acceptable characterizations to a single member which could be called the (unique) grammar of the language, has led to the introduction of extrinsic principles, of which the most famous is simplicity.

A simple example of distributional procedure leads to a classification of English written consonants. Let + stand for the space between words, and v stand for any vowel. The linguist can inquire what consonants occur in the distributional frame +_v, that is, after a word space and before a vowel. If the text consulted is large enough, he will find every consonant. Now he takes the frame +c_v, where c is any consonant. In this frame he finds H, L, and R very often (and after many different consonants); he finds M, N, and W less often; he finds that several consonants often occur after S; and he finds that a number of other consonants occur rarely, each after only one or two other consonants. He therefore asserts that there are four main frames for the classification of consonants (with respect to occurrence at the beginning of a word): +S_v, +_Hv, +_Lv, and +_Rv. In the first frame, C, H, K, L, M, N, P, T, and W occur (for example, in science, ship, skate, slit, smooth, snuff, spark, stand, swear). In the second, C, G, P, S, and T (check, ghost, philosophy, share, that); note that the appearance of H in the first frame is equivalent to the appearance of S in the second. In the third, B, C, F, G, P, and S (black, clay, flay, glass, play, slay); again, +SLv is either S in +_Lv or L in +S_v. In the fourth frame, B, C, D, F, G, P, T, and W (bray, craw, draw, free, grey, prey, tray, write). These observations are not the end of the description of English initial consonant clusters, and the description is not the end of the analysis, but they illustrate the kind of observations that distributional methodology permits.
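A sketch of the frame-counting step just illustrated, in Python. The word list is invented; a real study would scan a large running text, working from '+' word boundaries rather than a pre-tokenised list.

    from collections import Counter

    VOWELS = set("aeiou")
    CONSONANTS = set("bcdfghjklmnpqrstvwxyz")

    words = ["science", "ship", "skate", "slit", "spark", "stand",
             "black", "clay", "play", "bray", "draw", "tray", "cat"]

    # Count word-initial consonant pairs followed by a vowel (+c_v frame).
    frame_cv = Counter()
    for w in words:
        if len(w) > 2 and w[0] in CONSONANTS and w[1] in CONSONANTS \
                and w[2] in VOWELS:
            frame_cv[(w[0], w[1])] += 1

    # Which consonants appear after initial 's' (the +S_v frame)?
    print(sorted(c2 for (c1, c2) in frame_cv if c1 == "s"))
    # ['c', 'h', 'k', 'l', 'p', 't'] for this tiny word list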
If the linguist chooses to accept statements about meaning as raw material, he must of course obtain them by interrogation of an informant. Bloomfield's definition of the morpheme, quoted by Nida [6] and widely accepted, illustrates form-meaning methodology: "A linguistic form which bears no partial phonetic-semantic resemblance to any other form is ... a morpheme". If a phonetic description of the language is given in advance, "partial phonetic resemblance" is clearly determinable. Thus, for example, ban-bar, can-car, fan-far, man-mar, pan-par, tan-tar are pairs bearing partial phonetic resemblance; in each pair, the two words have the same initial consonant and the same vowel (in written form). Does ban bear any partial semantic resemblance to bar? It would not be wise to answer 'no' too quickly, since 'He was banned' and 'He was barred' might well be said to have similar meanings. The hyphenation of 'phonetic-semantic' in Bloomfield's definition means that the partial phonetic and semantic resemblances must be correlated, however, and the only evidence of such a correlation is that the same pair of resemblances occurs in several forms. If can and car, fan and far, etc., bear any partial semantic resemblances to one another, pair by pair, they are surely not the same as the resemblance of ban to bar. On the other hand, build-builder, work-worker, walk-walker, and many more such pairs, are asserted to bear a common partial phonetic-semantic resemblance. The first member of each pair does not end in -er, the second does; the first member of each pair names an action, the second member names a person who performs the action. Thus -er is identified as a morpheme with the meaning agentive.
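The purely formal half of Bloomfield's test, finding recurrent partial phonetic resemblances in written forms, is easy to mechanise; the semantic half must still be referred to an informant. A Python sketch over an invented word list:

    from collections import defaultdict

    words = {"build", "builder", "work", "worker", "walk", "walker",
             "corn", "corner"}   # 'corner' shows why semantics must confirm

    # Collect pairs (X, X + suffix) where both forms occur.
    pairs_by_suffix = defaultdict(list)
    for w in words:
        for cut in range(1, len(w)):
            stem, suffix = w[:cut], w[cut:]
            if stem in words:
                pairs_by_suffix[suffix].append((stem, w))

    # Suffixes recurring in several pairs are morpheme candidates (-er);
    # the informant must still reject accidental pairs like corn/corner.
    for suffix, pairs in pairs_by_suffix.items():
        if len(pairs) >= 3:
            print(suffix, pairs)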
Hjelmslev's commutation test is methodologically a form-meaning procedure. As quoted by Togeby, "content elements are independent only if interchanging them can bring about a change of expression" [7, pp. 7-8]. Here 'content element' should be understood as a morpheme, semantically defined, and 'expression' has to do with the phonetic representation of morphemes. Thus two supposedly distinct meaning units must have phonetically distinct representations, at least in some contexts ('can bring about').

Note that any use of parallel texts, whether in two languages or in one (in which case one is a paraphrase of the other), is a form-meaning procedure, since the assertion that the texts are parallel is a semantic assertion. Likewise, for reasons to be discussed below, the method with text and editors that will shortly be introduced is a semantic procedure.

A third methodology is psychological. Linguistic conventions are effective only to the degree that they are part of the cognitive systems of speakers of the language. Psychological methodology makes the speaker an explicit object of study and undertakes to determine his cognitive structure. Martinet [8] quotes Baudouin de Courtenay's 'phonic intentions' as an example of a psycholinguistic concept and gives an illustration of its application. French has two l's, one voiced (/lak/ = lac), the other unvoiced (/pœpl/ = peuple). These two sounds represent the same phoneme, however, because they occur in mutually exclusive distributions, or because they never serve to differentiate words with different meanings, or because, and this is the application of psychological methodology, they result from the same phonic intention. In other terms, they are represented in the cognitive structure of a speaker of French by a single element. But, as Martinet points out, evidence about cognitive structures is hard to obtain. Very few studies can be cited, but Miller's experiment on speech-perception units is certainly one [9].

It is widely believed that the three methodologies should lead to the same results. Martinet's version of the argument is that cognitive discriminations are made only when commutation, the need to keep sounds separate because they differentiate words, for example, forces them. The distributional methodology can be tied to the others by the argument that if two phones, for example, have complementary (mutually exclusive) distributions, they cannot serve to distinguish any pair of words.

Each of the three methodologies has both advantages and disadvantages. In favor of the pure distributional methodology is the simplicity of collecting the data. Semantic and psychological data can be obtained, but in each case the theory is not well enough developed to permit the establishment of adequate controls. Against distributional methodology is the non-uniqueness of its results. The use of extrinsic principles such as economy is not fundamentally unsound, but economy or simplicity has not been formulated as yet in terms that all linguists can accept, and it cannot yet be demonstrated that any particular set of extrinsic principles is adequate to reduce the many possible distributional grammars based on any given finite text to uniqueness. The discovery that a fixed set of extrinsic principles had such an effect, and the further discovery that the resulting unique grammar corresponded to semantic and psychological findings, would be a linguistic achievement of great importance. The extrinsic principles thus supported, if they in turn could be proved unique, would have the status now sometimes claimed for the principle of economy: they would give a metapsychological characterization of the speakers of the language or languages, of the community of speakers, as they are supposedly characterized by Zipf's principle of least effort [10].

When a digital computer is available to the linguist, many tasks are conveniently carried out that would be nearly impossible without it. For the distributionalist, every operation involves scanning text for occurrences of an item in a frame, and the number of items and frames that ought to be studied is large. Thus Harris, a decade ago, regarded the distributional method as an idealization of good practise, impossible to apply. Only the beginnings of a system for automatic distributional analysis have been created as yet, but it is no longer impossible to imagine practical use of distributional methodology.

An adaptation of the form-meaning method to the special circumstances of computer use is the 'cyclical' method with posteditors [11]. When this method is applied to the study of a language, a crude description of the language must first be constructed. The work then proceeds through a series of stages. At each stage, the existing description of the language is supplied to a computer program which applies it to a sample text. Here the generalizations of the description are converted into analyses of specific sentences. If the description is incomplete, some sentences may be analysed satisfactorily, but others may be given incomplete or incorrect analyses, and some sentences may not be analysed at all. The correctness of each sentence analysis is decided by the editors (posteditors, because they examine the text after the computer program has analysed it). The editors correct any errors they find, and their corrections furnish the raw material for studies that lead to an improved description of the language. The modification of the description ends one cycle and permits another to begin.

If the editors were eliminated, the cyclic procedure would be a distributional method. The data to be studied in each cycle would consist of the sentences not fully analysed. But the editors use their whole knowledge of the language and the subject matter of the text when they consider the correctness of each sentence analysis. Hence the procedure uses form-meaning methodology.

Nevertheless, the procedure avoids asking informants or editors questions of certain difficult kinds. The editor is never asked to make a general statement about the language, but only to comment on specific sentences. Second, he is never asked to provide any sentences; not the editor, but the original author, vouches for the sentencehood of each sentence in the text. All disputes about such sequences as Chomsky's 'colorless green ideas sleep furiously' [12] are thus avoided.
Nevertheless, the procedure avoids asking informants or editors questions of certain difficult kinds. First, the editor is never asked to make a general statement about the language, but only to comment on specific sentences. Second, he is never asked to provide any sentences; not the editor, but the original author, vouches for the sentencehood of each sentence in the text. All disputes about such sequences as Chomsky's 'colorless green ideas sleep furiously' [12] are thus avoided. Third, the editor is never asked to state the meaning of a sentence, or to say whether two sentences have the same meaning or related meanings, except that he may be asked to translate or paraphrase a text, and a text and its translation or paraphrase are expected to have almost equivalent meanings. Many delicate questions, whose answers are too doubtful to provide good support for a linguistic description, do not have to be asked. Moreover, the answers given by editors to the questions that must be asked can be checked. The same text can be edited by different persons and their corrections of the machine-produced analyses compared. The method avoids or controls errors, as a good method should.

Two other characteristics of a good research procedure are economy (in the operation of the procedure, not in the results) and convenience, which leads to greater accuracy and economy. The convenience of a cyclic procedure with posteditors depends in part on the design of the posteditors' worksheets, the forms on which they are given analysed text for correction; in part on the kinds and quantities of corrections that they must make, and the notational scheme provided for the indication of changes in the machine-produced analyses; and in part on the processes that must be carried out after postediting, the processes that reduce the raw data to an improved description of the language. Economy should be gained by the use of the computer, first to provide tentative analyses of the text, second to manipulate the raw data.

So far, almost nothing has been said about the nature of a linguistic description, or about the places in the cyclic procedure where it is involved: the computer program that assigns analyses to sentences, the postediting that corrects these analyses, and the data-reduction that modifies the description. A text is a string of occurrences of characters from a finite alphabet. In natural languages, texts can be segmented into recurrent substrings, each indicating the occurrence of a word or morpheme. Part of the description of a language is a list of these substrings; with each must be given a statement of its linguistic properties. The list is a dictionary, and one step in sentence analysis is dictionary lookup. All the rest of the description, and all further steps in sentence analysis, use the properties of units, not their alphabetic representations.

The structure of a sentence is a set of relationships binding all the word or morpheme occurrences in it together into a whole. Among the competing theories of sentence structure now extant, only one will be introduced and used here. The theory of immediate constituents [12] is its principal rival, and far more widely known, but the author's experience is largely confined to the theory of dependency, which has the same descriptive power in a certain formal sense [13]. According to the theory of dependency [14], [15], [16], [17], every word (or morpheme; but word will be used hereafter, as it can properly be used in the study of some but not all languages) depends on one other word in the same sentence. There are two exceptions: one word in every sentence is independent, and relative pronouns, adverbs, and adjectives depend on two other words simultaneously. Each word serves some function for its governor. The number of functions in any natural language is apparently small, and it seems reasonable to postulate that no word ever governs two words with the same function at the same time, again with certain exceptions.
Strings of appositives ("John, the leader of the group, the strongest member, the one on whom all others relied," etc.) occur, and it is moot whether, say, all appositives depend on the first in the sequence or each on the one directly before it. Combinations of words with a conjunction ("One, two, three, ..., and N are integers") occur, and it seems best to attribute the same function to all conjoined elements and let them all depend on the conjunction.

The fact that one word can serve a particular function for its governor, together with the fact that a second word can govern the same function, does not mean that the first word can serve the given function for the second. Thus 'books' can serve subjective function, and 'am' can govern a word with subjective function, but 'books-am' is not a possible subject-governor combination. The properties of words that determine whether they can serve functions for one another are called agreement properties. If word X can serve function F for word Y, words X and Y agree with respect to function F with X as dependent.

Among the properties that must be listed in the dictionary are grammatical properties. The grammatical properties of a word are the functions it can govern, the functions it can serve (as dependent), and the agreement properties involved in any of its potential functions, plus certain other properties concerning word order and punctuation that will not be discussed here.

Given the theory of dependency and a dictionary in which grammatical properties are listed, a computer can determine the structure of any sentence provided (i) the sentence is composed of words in the dictionary, (ii) the word-occurrences in the sentence are used in accordance with the properties shown in their dictionary entries, (iii) certain word-order rules, primarily the rule of projectivity, are obeyed, and (iv) there is no ellipsis. The rule of projectivity [16] requires that all the dependents of an occurrence, all the dependents of its dependents, etc., lie between the first words to the left and right that do not depend on it. Ellipsis raises problems that will not be discussed here. In general, the structure assigned to a sentence will not be unique; some sentences will be ambiguous.

The program that determines sentence structure, after dictionary lookup, has three parts [2]. The first selects, in accordance with the projectivity rule, a pair of occurrences that can be connected if they agree. The second, given a possibly connected pair, looks for a function that one can serve for the other with respect to which the members of the pair agree. The third changes the list of functions that can be governed by the governing member of the pair, eliminating the function served by the dependent member. The three parts of the program operate in rotation; after the third has operated on a pair of words (or after the second has failed to find a connection between a pair), the first selects a new pair. Properly designed, a program of this type can operate at high speed on a standard computer. A toy version of this rotation is sketched below.
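In the sketch, adjacency within the list of still-unattached occurrences serves as a crude stand-in for the projectivity rule, and the grammatical codes (det, attr, subj) are invented; this is an illustration of the rotation, not the program described above.

    def connect(words):
        """Toy sentence-structure determination by rotation of three parts."""
        active = list(range(len(words)))          # still-unattached occurrences
        links = []
        changed = True
        while changed and len(active) > 1:
            changed = False
            for a, b in zip(active, active[1:]):  # part 1: pick an adjacent pair
                for dep, gov in ((a, b), (b, a)):
                    # part 2: a function dep can serve that gov can still govern
                    common = words[dep]['serves'] & set(words[gov]['governs'])
                    if common:
                        f = sorted(common)[0]
                        links.append((words[dep]['form'], f, words[gov]['form']))
                        words[gov]['governs'].remove(f)   # part 3: update governor
                        active.remove(dep)
                        changed = True
                        break
                if changed:
                    break
        return links, [words[i]['form'] for i in active]

    words = [
        {'form': 'the',   'serves': {'det'},  'governs': []},
        {'form': 'old',   'serves': {'attr'}, 'governs': []},
        {'form': 'man',   'serves': {'subj'}, 'governs': ['det', 'attr']},
        {'form': 'works', 'serves': set(),    'governs': ['subj']},
    ]
    print(connect(words))   # three links; 'works' is left as the independent word

Note that removing an attached dependent from the active list makes its governor and the governor's neighbor adjacent in turn, which is the rough analogue of the projectivity rule in this toy.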
The start that has been made on fully automatic analysis is described in Section 4; that work is for the future, however, whereas the methods introduced in the present section are ready, at least in principle, for immediate use.

The classic tool of linguistics is the concordance. Each entry in a concordance consists of a key occurrence together with part of its context. Ordinarily, every occurrence in a text appears as the key occurrence in one entry in the concordance of the text; a given occurrence may also appear in other entries, as part of the context of other occurrences. If the context included in each entry consists of the occurrences immediately preceding and following the key occurrence, the concordance of a text is three times as large as the text itself, not counting the location indicators that are usually added to each entry in a concordance. Sometimes selective concordances are prepared, omitting, for example, all entries with function words (prepositions, conjunctions, articles, etc.) as key occurrences.

When a concordance is prepared from postedited text, the context of an occurrence can be defined in terms of structural connections instead of linear order. Thus an entry might consist of a key occurrence together with its governor and all its dependents. The function served by the key occurrence and those served by its dependents can also be included. When a dictionary is available, other information can be added to each concordance entry. The exact form of the key occurrence can be replaced with a canonical form: plural nouns with singular, all verb forms with infinitives, and so on. Grammatical information can be used as well.

The arrangement of a concordance is always systematic, since the text itself could otherwise serve as its own concordance. The system chosen depends on the information available in each entry, of course, and on the purpose the concordance is to serve. Using terminology that is familiar in computing manuals, one can speak of major, intermediate, and minor sorting variables. In a telephone book, the major variable is family name, the intermediate variable is first name, and the minor variable is middle name; in the so-called 'Yellow Pages', the major variable is name of product or service, the intermediate variable is firm name, and the minor variable, used only occasionally, is branch or dealer name. In a concordance, the major variable may be the form of the key occurrence, intermediate the form of the preceding occurrence, minor the form of the following occurrence. Many other arrangements can be defined, and the list grows further if such characteristics as the target-language equivalent of the key occurrence are included in each entry. Each arrangement has some use. Unfortunately, the arrangement best adapted to the study of any difficult problem is least adapted, in general, to the study of diverse problems.

Concordances have been published in the past [18], each the product of immense manual effort, and concordances are still being published, now often with the aid of a computer. The usual arrangement of a published concordance is by form of key occurrence as major variable and occurrence order as intermediate. This arrangement, most readily understood and used by everyone, is ill adapted to almost any particular problem that comes to mind. For example, a study of grammatical agreement rules for syntactic functions would be most tedious with such an arrangement.
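For concreteness, a minimal concordance-builder with major, intermediate, and minor sorting variables might look as follows; the text is invented, and with postedited text the structural context (governor, dependents, functions) would replace the linear neighbors used here.

    # A concordance with arbitrary sorting variables.
    text = 'the cat sat on the mat and the dog sat up'.split()
    entries = []
    for i, w in enumerate(text):
        prev = text[i - 1] if i > 0 else ''
        nxt = text[i + 1] if i + 1 < len(text) else ''
        entries.append((w, prev, nxt, i))     # key, context, location indicator

    # major variable: key form; intermediate: preceding form; minor: following form
    for key, prev, nxt, loc in sorted(entries, key=lambda e: (e[0], e[1], e[2])):
        print(f'{key:>4}  prev={prev:<4} next={nxt:<4} at {loc}')

Changing the lambda changes the arrangement; that is the whole point of leaving the sorting variables to the scholar rather than to the publisher.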
Since standard computer programs can make concordances quickly and with any list of sorting variables desired, it seems that the publication of concordances will have little influence on research in the future. Once a text has been put on magnetic tape for computer input, and especially if postediting data is included with it, a scholar with a new research idea can obtain the tape, name his sorting variables in accordance with his plans, and obtain a concordance (very likely a selective listing instead of a complete survey of the text) with little effort, expense, or delay.

The concordance, although a classic tool of the linguist, is not the most powerful. Given postedited text and a computer, the linguist can call for crosstabulation by categories of many useful kinds. To make a crosstabulation involves the selection of two or more variables and the definition of units to be listed or counted. The result is a matrix in which each row represents a value of one variable, each column represents a value of another variable, and each cell contains the number of units characterized by the values of the corresponding row and column.

As a concrete example, let us consider the study of grammatical agreement with respect to one syntactic function, say the subjective function in Russian. We may suppose, for the purposes of the example, that the linguist has already analysed each Russian form (occurring between spaces in text) as composed of a stem and an inflectional suffix; he suspects that the endings are involved in agreement. Oversimplifying for clarity, let us suppose further that each form contains at most one suffix; every form that contains no suffix will be treated, temporarily, as if it contained a zero suffix, and every form will be said to contain some stem. The units to be counted are pairs of occurrences, each consisting of a governor and a dependent related by subjective function. The two variables of the crosstabulation are (i) inflectional suffix of the dependent and (ii) inflectional suffix of the governor. The matrix will have one row for each suffix that occurs in a subjective dependent and one column for each suffix that occurs in the governor of a subject. The counts are based on a certain text; each cell contains the number of occurrences in that text of subject-governor pairs with a certain suffix in the dependent, a certain suffix in the governor. Every such pair is counted just once in some cell of the matrix. A minimal sketch of the construction of such a matrix follows.
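In the sketch, the suffixes and counts are invented; real input would be the subject-governor pairs extracted from postedited text.

    from collections import Counter

    # Hypothetical subject-governor pairs, reduced to
    # (suffix of dependent, suffix of governor).
    pairs = [('-a', '-et'), ('-a', '-et'), ('-y', '-ut'), ('-a', '-it'),
             ('-y', '-yat'), ('-a', '-et')]

    cells = Counter(pairs)                      # each pair counted just once
    rows = sorted({d for d, g in pairs})
    cols = sorted({g for d, g in pairs})
    print('        ' + ' '.join(f'{c:>5}' for c in cols))
    for r in rows:
        print(f'{r:>6}  ' + ' '.join(f'{cells[(r, c)]:>5}' for c in cols))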
Since the counts in the matrix are based on a finite text, a sample of all the Russian text ever written or to be written, they are best regarded as estimates of the counts that would be obtained from an indefinitely long text. As estimates, they are subject to sampling error, deviations from the counts that would truly describe the language but can never be obtained. The treatment of sampling error is a statistical problem that will not be discussed here. A substantial literature has been devoted to the analysis of crosstabulation matrices with sampling error, but linguistic applications are just beginning [19]. In the following paragraphs, some of the possible analyses will be discussed in terms that presuppose errorless data. The linguist who intends to perform such analyses should consult the statistical literature, or a statistician, before proceeding.

The linguist who analyses the Russian data of our example begins with a hypothesis drawn from experience with many languages concerning the relation between morphology (the suffixes, the classes of stems with which they occur, etc.) and syntax. There exist, according to this hypothesis, syntactic categories in terms of which the agreement rules for the subjective function are relatively simple. He does not suppose, however, that each suffix belongs to exactly one syntactic category, nor that each distinct suffix belongs to a different category. The possible complexities of the relations between morphology and syntax guide his analysis.

First, there may be two or more suffixes that are syntactically equivalent. Looking among the dependents first, such suffixes would be represented in the matrix by two identical rows. More precisely, since two suffixes can be syntactically equivalent even if one is more frequently used than the other, the rows should be proportional. If each entry in the matrix is divided by the sum of all entries in its row, then rows corresponding to equivalent suffixes should be identical. For simplicity in further analyses, identical rows can be combined by adding together the entries in each column; one row, representing a set of syntactically equivalent suffixes, replaces several in the matrix. The same analysis, leading to a similar reduction of the matrix, is performed on the columns. In the example, the singular suffixes of different declensions would be equivalent, and so would the plural suffixes.

Second, there may be some suffix that is used in two syntactically different ways, each corresponding to the use of some other suffix. The ambiguous suffix would be represented by a row equal to the sum of two other rows (or of three or more other rows). In the example, an ending that is singular in one declension and plural in another would have this property. Since the suffix may be singular in a high-frequency declension and plural in one of low frequency, or vice versa, its row need not be equal to the sum of two others, even after division of every entry by row sums, but only to some linear combination. Upon finding such a row, which corresponds either to a single suffix or to a set of syntactically equivalent suffixes, the linguist must call for a supplementary analysis. Let X be the ambiguous suffix, Y and Z the two suffixes whose range it covers. The subject-governor pairs in which X appears are sorted into two groups: those with governors that also govern Y, those with governors that also govern Z. The question is whether X occurs with the same or different stems in the two groups of occurrences. If the stems are different, they can be assigned to different classes, and indeed they may well belong to different morphological classes already because they take different sets of suffixes. With a stem of one class, X belongs to one syntactic category, that of Y, and with a stem of the other class X is equivalent to Z. The ambiguity of suffix X is reduced morphologically and its row in the matrix can be combined with the rows of Y and Z, reducing three rows to two. On the other hand, if the stems in the two groups of occurrences are the same, the ambiguity is not eliminated morphologically although it can be eliminated syntactically. The same reduction of the matrix can nevertheless be performed. A minimal sketch of the row-merging step appears below.
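In the sketch, rows are divided by their sums, and rows whose profiles agree within a tolerance (a crude allowance for sampling error) are pooled; the labels, counts, and tolerance are all invented.

    # Merging proportional rows of the crosstabulation.
    def profile(row):
        s = sum(row)
        return [x / s for x in row] if s else row

    def merge_rows(labels, matrix, tol=0.05):
        groups = []                            # each group: [labels, pooled row]
        for label, row in zip(labels, matrix):
            for g in groups:
                if all(abs(a - b) < tol
                       for a, b in zip(profile(g[1]), profile(row))):
                    g[0].append(label)         # syntactically equivalent suffix
                    g[1] = [a + b for a, b in zip(g[1], row)]
                    break
            else:
                groups.append([[label], list(row)])
        return groups

    matrix = [[40, 2], [20, 1], [3, 30]]       # two singular endings, one plural
    print(merge_rows(['-a', '-ya', '-y'], matrix))
    # -> the rows for '-a' and '-ya' are pooled; '-y' remains separate

The detection of a row that is a linear combination of others, needed for the second complexity, requires more statistical machinery than this sketch carries.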
Naturally, a similar analysis and reduction is carried out on the columns of the matrix.

A third possibility is that some suffix has one syntactic use that is unique to itself, another use equivalent to the use of some other suffix. The zero suffix appears in some Russian declensions, where it is syntactically equivalent to other suffixes; it also appears, for example, in the personal pronouns ya = I, my = we, etc., making it the unique first-person suffix. It follows that the corresponding row in the matrix is equal to a linear combination of other rows plus a remainder. As in the previous situation, a supplementary analysis is required, and in the example it will lead to recognition of several morphologically resolvable uses for the zero suffix.

The linguist would like to continue the analysis until the matrix contained only one nonzero entry in each row and each column; he could then call each row a simple syntactic category. In the Russian example, the matrix has 14 rows and 14 columns at that stage. Although the analyst cannot label his matrix in this fashion, we can name the categories by specifying person, number, and gender. Writing 1, 2, and 3 for first, second, and third persons, m, f, and n for masculine, feminine, and neuter genders, and s, p for singular and plural numbers, the columns (and rows) are labeled 1ms, 1mp, 1fs, 1fp, 2ms, 2mp, 2fs, 2fp, 3ms, 3mp, 3fs, 3fp, 3ns, and 3np. To reach this point, the linguist may be forced to consider a row or column which contains two nonzero entries, neither corresponding to the unique nonzero entry in any other row or column, but such considerations should wait until the end of the analysis. Having reached this point, the linguist can reconsider. He should discover that gender in Russian nouns is determined by the stem, not by the suffix; that no Russian verb has a suffix exactly identifying person, number, and gender; and so on. He will probably not retain the separate syntactic categories 1ms, 2ms, and 3ms for verb suffixes, since every suffix that belongs to any one of these categories either belongs to all three of them or to one of them and also to one or two others.

The final stages of the analysis belong to the linguist, who can call on many different criteria in sharpening and systematizing the classification. The earlier stages, beginning with a large matrix with entries that always, in practise, must be subject to sampling error, can be programmed and carried out on a computer. From the preparation of concordances to the construction of crosstabulations to their analysis, the computer has taken a larger and more sophisticated part in the research process. With such a perspective, the scholar who limits its role to sorting and listing appears to be wasting his resources.

The illustration of crosstabulation analysis just presented was drawn from the classic domain of the relations between morphology and syntax. Since classic methods have been highly productive in this domain, new ones are not likely to add much, and the illustration is to that extent misleading, but it was chosen for clarity, not as representative of the problems to which the crosstabulation method should be applied. This method, better, this family of methods, is perfectly general for the class of problems involving relations among two or more variables. It is not surprising, therefore, to find many possible applications for it in linguistics.

There is, for example, the problem of syntactic classification of stems.
Given a syntactic function for which the morphological agreement rules are known, are there further rules determining the classes of stems that can occur in words connected by this function? Again the rows of the matrix correspond to dependents, the columns to governors, and the entries in the cells are occurrence counts. But this time the stems, rather than the endings, appear as row and column labels. Since the number of stems is ordinarily much larger than the number of suffixes in a language, the size of the matrix will be larger for this analysis, and the entries in the cells will consequently be smaller on the average. In fact, unless the text in which occurrences are counted is extremely large, almost all the cells will contain zeros and almost all the nonzero counts will be ones. The analysis is more sensitive to sampling error, but not impossible. Unlike the determination of morphological agreement rules, the study of stem-stem agreement rules in syntax is just beginning, and it may be hoped that new analytic methods will accelerate it.

The classification of texts according to subject matter, and the corresponding classification of vocabulary items, although it is perhaps of more interest in information retrieval than in machine translation, is another example of a problem to which crosstabulation analysis can be applied. The first stage of the analysis uses a matrix in which rows and columns are labeled with the titles (or identification numbers) of books, articles, or abstracts; here the set of row labels is identical to the set of column labels. Each cell contains, for example, the number of words that occur in each of the two documents. Analysis of this matrix yields a classification of the documents. The next stage is a consideration of the vocabulary of the whole library, using as a description of each word the number of times it occurs in each class of documents. Each word is characterized as specific to a class of documents, hence, by inference, to a subject of discourse, or as specific to a range of document classes, or as general. A few such studies have been performed, although so far only on a limited scale [20]. One current difficulty is that, for linguistic reasons, the word is most unlikely to be a suitable unit, yet the units that ought to be used cannot now be identified and listed. A sketch of the first stage follows.
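The documents and their vocabularies in the sketch are invented; each cell counts the words shared by a pair of documents.

    # First stage of document classification: a symmetric matrix whose cell
    # (A, B) counts the vocabulary shared by documents A and B.
    docs = {
        'D1': {'neutron', 'flux', 'reactor', 'core'},
        'D2': {'neutron', 'reactor', 'coolant'},
        'D3': {'morpheme', 'suffix', 'stem'},
    }
    names = sorted(docs)
    print('     ' + '   '.join(names))
    for a in names:
        print(a, '  ' + '    '.join(str(len(docs[a] & docs[b])) for b in names))

The second stage would then describe each word by its count in each document class derived from this matrix.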
The same difficulty unfortunately applies to the study of rules for choice of equivalents in machine translation; the units that ought to have stable equivalents are still unlisted. On the other hand, the search for rules of equivalent choice may lead to the identification of appropriate units. Taking the word as the starting unit, and one word at a time, the analysis can consider several variables: the functions the word serves in its various occurrences, and for what governors; the functions served by dependents of the word, and the words that serve them; and, of course, the translations of the word itself as well as the translations of related occurrences. (It is assumed here that posteditors have supplied at least a provisional translation of the text being studied, including specific translations for individual words or word groups, in addition to indications of the structure of source-language sentences.)

The first step in analysis of a particular word might be the construction of a matrix in which each row represents one equivalent of the word and each column one function it serves. If each column contains exactly one nonzero cell, the equivalent of the word is determined by its syntactic function and the analysis is complete. Otherwise, the next step is to take each function served by the word and construct a matrix for that function with rows again representing equivalents and columns representing all the words that govern that function. If each column of this matrix contains exactly one nonzero cell, the governors of the word can be classified according to the equivalents that they determine, and again the analysis is complete. However, other interesting situations are possible. If one equivalent is limited to a few governors while the others appear with many different governors, the word under study can be considered to form a fixed combination with each of the few governors that determines an equivalent, and the fixed combinations can be taken as translation units. On the other hand, if most equivalents are determined by particular governors whereas one equivalent appears with many different governors, the latter equivalent can be set aside and the governors classified as before. The diffuse equivalent may occur, for example, when the word under study appears with a particular dependent or with one of a particular class of dependents. The analysis continues as necessary, with matrices having, as column labels, words that occur as dependents, translations of related words, etc. At each stage, fixed combinations, in the target language as well as in the source, can appear and be noted.

In the selection of equivalents, several hypotheses always have to be considered and the research procedure should take all of them into account. One is that the individual word is too small a unit, i.e. that it must be translated as part of a fixed combination, at least in some of its occurrences. Another is that local conditions (syntactic function, type of governor or dependent) determine equivalent choice. A third hypothesis is stylistic or accidental variation; posteditors can choose different equivalents for different occurrences of a word without having any clear or explicable reason. Fourth, the hypothesis of subject field has to be remembered; a word can have different translations in different articles, even if there are no distinctive differences in local context, because of differences in subject matter and in the habits of authors in different disciplines. And a fifth hypothesis, although probably not the last that could be found, is that the word, in some of its occurrences, is an abbreviation of a fixed combination. If a word sometimes appears in one or more fixed combinations, and if one of those combinations occurs near the beginning of an article, it is possible that subsequent occurrences of the word stand for the whole combination and must be translated with the target-language abridgment of the combination. Harris's paper on 'discourse analysis' [21] was concerned with such problems, and K. E. Harper's (unpublished) observation that Russian nouns are modified more often near the beginning of an article than further on suggests that the phenomenon is widespread. Unfortunately, systematic procedures for analysis of linguistic relations that span more than one sentence are still undeveloped and cannot serve machine translation, but research procedures aimed at discovery of equivalent-determination rules can take this hypothesis into account. The first step of the procedure, the test of the equivalent-by-function matrix, is sketched below.
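In the sketch, the occurrences, equivalents, and function codes are invented.

    from collections import Counter

    # First step for one source word: a matrix of equivalents by functions.
    # Each occurrence pairs the equivalent chosen by the posteditor with the
    # function the word served.
    occurrences = [('set', 'obj'), ('set', 'obj'), ('install', 'obj'),
                   ('install', 'pred'), ('install', 'pred'), ('set', 'attr')]

    cells = Counter(occurrences)
    equivalents = sorted({e for e, f in occurrences})
    for f in sorted({f for e, f in occurrences}):
        nonzero = [e for e in equivalents if cells[(e, f)]]
        if len(nonzero) == 1:                  # column with one nonzero cell
            print(f, '-> equivalent determined:', nonzero[0])
        else:
            print(f, '-> ambiguous; classify the governors of this function next')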
The object of linguistic analysis is to characterize the sentences of a language. The language is not a finite text, but a finite text is all that the linguist can ever study. Before his analysis, he can be sure of nothing about the language, but he can only begin if he is willing to hypothesize some properties for it, and he naturally chooses properties that are universal, or at least widespread, in languages already known. These properties constitute his theory of the structure of natural languages; from the theory, he would like to derive a set of procedures that will yield a concrete description of the finite text that he is able to study and, hopefully, characterize the language beyond that text. Certain linguists, adopting the distributional methodology that excludes semantic and psycholinguistic data, have raised the problem of purely automatic 'discovery procedures'. Since such procedures could be programmed and applied, by means of a computer, to very large quantities of text, their importance for the future of linguistics seems great. The speculation that distributional, form-meaning, and psycholinguistic methodologies would yield virtually equivalent structures for any natural language makes the possibility of automatic linguistic analysis even more attractive. The beginning that has been made and the prospects for further work are the subject of this section.

The first step in the analysis of a new language, after texts have been recorded, is the reconstruction of its vocabulary. (A certain normalization of its alphabet, whether of letters or of phones, i.e. sound units, may be needed, but can be passed over here.) A universal feature of natural languages is that groups of alphabetic characters form units with which the rest of the language is constructed. Each such group is a morph and represents a morpheme. In general, morphs occur one after another in text; there are many exceptions to this rule, as in languages where the consonants of a word belong to one morph and the intervening vowels to another, but it is better to oversimplify than to introduce all the important but complicated qualifications that a useful discovery procedure would have to accept. The problem, then, is to segment a text into morph occurrences. In some texts the segmentation is marked by the author, who spaces after each morph. More often, short strings of morphs are bounded by spaces and have to be segmented internally (as in printed English, French, German, Russian, etc.). In many spoken languages and some written ones, the strings of morphs between spaces or silences are long. The silence or blank space can be taken as an absolute morph boundary (omitting qualifications as usual), and the sequences between can be sorted out. In printed English or Russian, for example, there will only be a few thousand different sequences between blanks in a text of fifty or a hundred thousand running words, whereas in spoken French an equivalent text might contain only a few silence-to-silence sequences that occurred more than once each.

A procedure has been proposed by Harris [22] for segmentation of morphs. Take one unit from silence to silence or from space to space, say x1x2x3...xn. Here each xi is some character of the alphabet. Consider the list of all silence-to-silence units that begin with the same character x1, and determine the variability of second-character choice among these units. If the next character in every unit is the same, variability is nil; if all characters of the alphabet occur as second character following x1, each equally often, variability is maximum.
The observed variability, say V1, is noted. Next V2 is determined; it is the variability of third-character choice among all units that begin with the sequence x1x2; then V3, V4, and so on. Plotting Vi against i, the analyst expects a declining curve, because there are relatively few morphs in a natural language as compared with the number that could be constructed using its alphabet. English, for example, could have 26^5 = 11,881,376 different five-letter words, about twenty times as many words as in its entire vocabulary. Some of the words that do not occur are forbidden for phonological reasons, e.g. *mxzntzz or *qqq. Others are phonologically possible but simply not used, e.g. *maser (until recently) and *thaser (until it becomes acronymic, or otherwise enters the language). There are many morphs that begin with any x1, fewer that begin with the same x1 and any x2, and so on, so that Vi falls until the end of the morph is reached. If the morph x1x2...xk can be followed by relatively many other morphs, Vk is larger than either Vk-1 or Vk+1. Hence a relative maximum in Vi often marks the boundary of a morph. Exactly the same calculation can be performed from right to left, this time giving variability of next-to-last character among units ending with xn. The relative maxima given by the two calculations should mark the same boundaries, but in a language where each space-to-space unit consists of one stem morph followed by zero or one suffix morphs the right-to-left calculation should give more obvious results.

The Harris procedure cannot work in a language that uses every phonologically allowable sequence to represent a morph, but no such language is known. There may be a few languages with so few phonemes that a large proportion of the allowable sequences are used, and if there are, the procedure would be inefficient for them. Even in languages where it is most efficient, the procedure is not likely to find all the morph boundaries in a text, and it is likely to mark some that subsequent analyses do not retain. On the one hand, if vowel sequences are narrowly restricted by phonological rules, and consonant sequences likewise, but vowel-consonant sequences relatively unrestricted, there will be a relative maximum in Vi at each transition from vowel to consonant or vice versa, whenever there are two phones of the same class before the transition. These phonological maxima will be large only in peculiar cases, however, and can therefore, perhaps, be disregarded. They can be eliminated if adequate phonological analyses are performed in advance of the Harris procedure; Vi can then be calculated as the ratio of observed variability to phonologically allowable variability in each position. On the other hand, if a morph can be followed only by a few other morphs (for example, if every verb stem must be followed by one of two or three suffixes), Vi will not have a relative maximum at its boundary. In spite of these qualifications, an experiment with English showed the procedure to be more than 80 per cent effective,† and that figure could be improved by the use of refined technique and a larger sample.

The object of a grouping procedure like Harris's is not to find the morphs in a language, but to find a set of units with which analysis can proceed. If the procedure is 80 to 90 per cent accurate, as measured against the results of a psycholinguistic or form-meaning analysis, the tentative morphs that it produces can be submitted to further distributional study.
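A sketch of the left-to-right calculation on an invented word list follows. The number of distinct following characters serves as a crude measure of variability, and the right-to-left calculation would be symmetric.

    # Harris's left-to-right calculation. Vi is approximated by the number of
    # distinct characters that follow the first i characters of a unit; a
    # relative maximum in Vi suggests a morph boundary.

    def variety(corpus, prefix):
        return len({w[len(prefix)] for w in corpus
                    if w.startswith(prefix) and len(w) > len(prefix)})

    def boundaries(word, corpus):
        v = [variety(corpus, word[:i]) for i in range(1, len(word))]
        cuts = [i + 1 for i in range(1, len(v) - 1)
                if v[i] > v[i - 1] and v[i] >= v[i + 1]]
        return v, cuts

    corpus = ['reads', 'reader', 'reading', 'red', 'rope', 'ripe', 'report', 'real']
    print(boundaries('reader', corpus))   # a cut after 'read' separates 'read'+'er'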
Procedures lately suggested by Lamb and Garvin require a text with marked morph boundaries as input, but it would be easy to adjust their methods, and other methods of the same general kind, so that one output would be a revision of those boundaries.

Given a tentative list of morphs, the next problem is to determine their mutual relations, i.e. their distributions relative to one another. One aspect of the problem is classification of the morphs; two morphs belong to the same class if they have identical distributions. The other aspect of the problem is the listing of constructions, i.e. of admissible combinations of morphs. In dependency terms, constructions are characterized by functions and agreement requirements. In any terms, the initial difficulty is that there are too many morphs and too few occurrences, even in a large text. The distributional regularities that will be summarized in a construction list do not appear until equivalent morphs are classed together, since there are not enough occurrences of individual morphs to bring out the regularities. The judgment that two morphs have equivalent distributions, hence can be assigned to a single class, cannot well be based on their few occurrences in a text, since no two morphs have similar distributions in terms of linear context and individual morphs. The judgment would be easier if it could be made in terms of classes of morphs and constructional context, but at first neither constructions nor classes are known. The deadlock can only be broken by an iterative approach that starts with a crude classification and tentative list of constructions, gradually refining the two together.

Garvin [24] begins with a rough classification of morphs based on gross features of their distributions. He requires that two kinds of boundaries be marked in the text; roughly speaking, these are word and sentence boundaries or morph and utterance boundaries. The small-unit boundaries determine the items to be classified, and the large-unit boundaries furnish the distributional criteria for an initial classification. Each occurrence of a morph is characterized as adjacent to and preceding a major boundary, hence final; as adjacent to and following a major boundary, hence initial; or as not adjacent to a boundary, hence medial. Considering all occurrences of a morph simultaneously, it can be assigned to one of seven categories according as it occurs in all three positions (class IMF), only two (classes IM, IF, and MF), or one (classes I, M, F). Since counts of occurrence in each position can be made, subtler classification is possible. But now a category symbol can be assigned to each morph occurrence in the text and a search for constructions started. Garvin has suggested some techniques for the search and is continuing his investigation of this problem.

Lamb's procedure [25] is to form tentative constructions first, deriving tentative classes from them. He argues that a syntactic relation is a limitation on the variety of morph sequences that occur; hence local restrictions reveal (or may reveal) relations. For each morph in a text, he determines the variation in its neighbors, taking those to the right and those to the left separately. The morph with least variation is temporarily assumed to form a construction with its neighbor. Thus, if morph Mi is almost always followed by some morph Mj, every occurrence of Mi is assumed to be in construction with the following occurrence, whether Mj or some other morph.
(If Mi had regularly been preceded by some Mj, the construction would consist of Mi and its left neighbor.) Not just the single morph with least variation among its neighbors, but all the morphs with variation below a threshold are treated in this way. Each such tentative construction is given a name, and the calculation of variation coefficients is repeated, with different results because the constructions now appear in the text instead of their constituents.

Classes are not formed until second-order constructions appear, in which one of the constituents is itself a construction. Then all the morphs that occur as partners of Mi in the first-order construction when the construction is in turn the partner of Mj are classed together. The rationale is that these morphs have the same distribution to a second degree of approximation, and that cases of third-degree differences in morph distribution are rare. The morphs that are classed together take the same partner and with it form constructions that take the same partner; the partners of the second-order construction could vary with the morph contained in the first-order construction, but experience says that such variation is unlikely in natural languages.

Since the criterion by which constructions are formed is approximate, the procedure must allow for dissolution of constructions. Lamb recalculates variation coefficients whenever a construction or class is formed. Every occurrence in text of a tentative construction is replaced with an arbitrary symbol standing for the construction, and every occurrence of a morph in a tentative class is replaced with an arbitrary symbol standing for the class. Using this text, the variation coefficients are recalculated for individual morphs, for constructions, and so on, and it may happen that the constructions originally established, when variation coefficients had to be calculated on the basis of individual morphs, will now be replaced by others. Lamb's example is that a preposition, whose following neighbors include articles, nouns, and adjectives (in English), may at first be put in construction with that neighbor so that, e.g. (in + the) is marked as a construction. Later, when a class of nouns begins to develop, the coefficient of variation among following neighbors of 'the' will decrease until (the + noun) is formed and replaces 'the' as partner of 'in'.

The units that have been called morphs in the description of Lamb's procedure might be the product of Harris's procedure, they might be the units occurring between blanks in printed text, or they might have been obtained in some other way. If they are forms, as found between blanks, they can be segmented into stems and endings by some procedure like Harris's and submitted to a morphological-agreement analysis by crosstabulation of the kind discussed in Section 3. If they are tentative morphs, from Harris's procedure, they need to be checked. One step is to look for constructions that do not involve classes when Lamb's procedure ceases to be productive. Those constructions can be taken to reassemble morphs that the Harris procedure erroneously dissected. Again, any class of tentative morphs can be inspected for phonic or graphic similarity. If Lamb's procedure gives a class in which every morph ends with a particular letter or group of letters, that ending is possibly a morph; if the same ending occurs in several distributional classes, it should certainly be recognized. Thus syntactic criteria can be applied to readjust morphological findings, and the 80 to 90 per cent accuracy of Harris's method is not a final result by any means.
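One step of Lamb's procedure can be sketched on his own (in + the) example. The number of distinct right neighbors is used below as a crude stand-in for his coefficient of variation, and the text is invented; in the full procedure the calculation is repeated after every merger.

    from collections import defaultdict

    tokens = 'in the house in the garden near the house'.split()

    follows = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        follows[a].add(b)
    variation = {m: len(s) for m, s in follows.items()}
    m = min(variation, key=variation.get)   # morph with least right variation
    print(variation, '-> tentative construction with', repr(m))

    merged, i = [], 0
    while i < len(tokens):                  # replace each (m + neighbor) pair
        if tokens[i] == m and i + 1 < len(tokens):
            merged.append('(' + m + '+' + tokens[i + 1] + ')')
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    print(merged)   # (in+the) appears as a single unit in the revised text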
Lamb is not concerned with dependency theory, but if his procedure is accepted up to this point it can be extended to the determination of dependency connections. Consider a construction, say X = Yi Zj. In each occurrence of the construction, Yi and Zj are either morphs or constructions. The identity of the construction over all of its occurrences is established by two facts: the construction has a homogeneous distribution, and all the Yi (or, instead, all the Zj) belong to a single class. If all the Yi belong to one class, the Zj may belong to one class or to several classes; we can consider the case in which they belong to a single class, since in the other case the following procedure would merely be repeated for every class of Zj. For the present, suppose that each Zj is either a morph or a construction whose members are morphs (it could also be a construction whose members are constructions; that case is treated below). Thus, either Zj = Mj, a morph, or Zj = Mj1 Mj2, a construction of two morphs. If Zj is a morph, Yi and Zj are connected by a dependency link. If Zj = Mj1 Mj2, then Yi is linked by dependency to either Mj1 or Mj2. Form two sets; the first is composed of all the Mj such that Zj = Mj for some j, together with all the Mj1 such that Zj = Mj1 Mj2 for some j, and the second set contains the Mj's and Mj2's. Calculate a coefficient of variation for each set. If the coefficient of the first set is smaller, the first member of the construction Mj1 Mj2 is linked to Yi by dependency, and otherwise the second member.

Now suppose that Zj can take one of three forms: Mj, Mj1 Mj2, or Mj1 (Mj2 Mj3). That is, one partner in the construction being analysed is either a morph, or a construction whose members are morphs, or a construction whose members are a morph and a construction whose members are morphs. The dependency connection is between Yi and Mj if Zj has the first form, between Yi and either Mj1 or Mj2 if Zj has the second form, and between Yi and one of Mj1, Mj2, Mj3 if Zj has the third form. There are three sets to be assembled:

    If Zj = Mj, the first set contains Mj; the second set, Mj; the third set, Mj.
    If Zj = Mj1 Mj2, the first set contains Mj1; the second set, Mj2; the third set, Mj2.
    If Zj = Mj1 (Mj2 Mj3), the first set contains Mj1; the second set, Mj2; the third set, Mj3.

The omission of all other possible sets is justified by the assumption that in a construction of a given type the two positions are distinct. Again, variation coefficients are calculated and the dependency links are established in the same manner as before. Note that it is not always possible to determine the dependency links within a structure such as Mj1 (Mj2 Mj3) before comparing it with its partners. It may, in fact, be necessary to consider still more complex cases, but the general rules are implicit in the example just given. If projectivity is postulated as a universal feature of natural languages, it simplifies the search for dependency links, but its use would require a long discussion that would be out of place here. A minimal numerical illustration of the two-set comparison is given below.
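In the sketch, the number of distinct morphs in each set is used as a crude stand-in for the coefficient of variation, and the occurrences of the construction are invented.

    # The two-set comparison for locating the dependency link when
    # Zj is either a morph or a two-morph construction.
    occurrences = [
        ('eats', ['fish']),                # Zj a single morph
        ('eats', ['the', 'fish']),         # Zj a two-morph construction
        ('eats', ['the', 'meat']),
        ('sees', ['a', 'fish']),
    ]
    first_set = {z[0] for _, z in occurrences}    # Mj or Mj1
    second_set = {z[-1] for _, z in occurrences}  # Mj or Mj2
    member = 'first' if len(first_set) < len(second_set) else 'second'
    print(first_set, second_set)
    print('Yi is linked to the', member, 'member of Zj')

On this invented data the second set varies less, so Yi is linked to the second member, as the rule stated above directs.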
The procedures just given lead to the establishment of dependency links, but they do not indicate the direction of dependency; they do not differentiate governors and dependents. When a long span of text (e.g. a sentence) is connected by dependency links, it only remains to choose one occurrence in the span as origin, and all links are automatically directed toward that occurrence. For this purpose, it is important to introduce the restriction of projectivity. In a projective language, every origin occurrence lies on the unique path between the first and last occurrences in a connected span. That is to say, the first occurrence in a sentence is connected to one or more following occurrences, among which one or more are connected to following occurrences, and so on, until some sequence of connections leads to the last occurrence in the sentence. This sequence of connections forms a path through certain occurrences, and one of them must be independent; projectivity would be violated if any occurrence not on the path were chosen as the independent, or origin, occurrence in the sentence. Moreover, every occurrence not on the path depends, directly or indirectly, on some occurrence on the path; hence all connections outside the path are directed, their governors and dependents differentiated, as soon as the universal of projectivity is adopted.

Let us assume that every morph has been assigned to some class, and consider all pairs of occurrences such that the first belongs to some class, say X, the second to some class Y (the two classes may be the same or different), and the two occurrences are connected. If the direction of dependency is sometimes from X to Y, sometimes from Y to X, the description of the language is more complex than if the dependency always goes in one direction. A partial test of the projectivity postulate is to determine, for each such pair of classes, whether the directions induced by projectivity are consistent or variable. Variability for many pairs would make the postulate doubtful for the language. If, on the other hand, consistency is found, the determination of an origin occurrence for each sentence can also be based on a consistency argument. Define a dependency type by the classes of two connected morphs and their order, and assign to each dependency type found in the text a direction according to the findings just described. For some types, i.e. those that occur only on the initial-final paths of text sentences, direction will be undetermined. In the text, mark each connection on an initial-final path with the direction pertaining to its type. In each connected span, there are three possibilities. (i) Every connection is marked, and a unique origin occurrence is determined; every connection is directed toward it. (ii) Not all connections are marked, but the marks are consistent; those near the beginning of the span point toward the end, those near the end point toward the beginning. If any connections are unmarked before the last right-pointing connection or after the first left-pointing connection, they can be marked and their types assigned the indicated direction. The origin occurrence lies in an unmarked zone and remains to be chosen (see below). (iii) The marked connections are inconsistent; they do not point toward a single occurrence or cluster of occurrences. In such spans, one or more occurrences must be located according to a minimization of inconsistency. If the result is unique, the origin is determined; otherwise, the origin of the span remains to be chosen by the procedure below.

The procedure above can be iterated, since it sometimes determines the direction of a new dependency type. When it stabilizes, there may remain spans (sentences) with indeterminate origins.
In fact, there may be many such spans, and the number of possible origin occurrences in each may be large. Note that no dependency type within the indeterminacy region of any span occurs anywhere else. Hence all such dependency types could be given the same direction without inconsistency. If that plan is not satisfactory, all dependency types with undetermined direction can be partially ordered by sequence of occurrence, and those up to any arbitrary point in the partial order made right-pointing, those beyond it left-pointing.

Another criterion can be introduced here, or even earlier if it is regarded as linguistically more important than the kind of consistency used up to this point. Some classes must occur at sentence origins; it makes linguistic sense to minimize the number of different classes there. If some origins are determined by projectivity and consistency, they determine certain origin classes, and members of those classes can be sought in each sentence with indeterminate origin. If there is one in a sentence, its origin is determined. If there are two, the choice can wait on consistency (every time an origin is chosen, new dependency types are given directions). The sentences containing no possible origin of a known origin class are collected, and choices are made simultaneously for all of them in such a way as to minimize the number of new origin classes; this calculation is feasible if the number of choices to be made is not too large.

Thus if Lamb's procedure, or Garvin's, can give a phrase structure to each span of text, it is possible to extend the analysis to a dependency structure. It remains to be seen whether the procedures of Lamb and Garvin will be satisfactory; almost without doubt, they will need elaboration. Lamb's has been applied, in tentative fashion, to a small amount of English text with gratifying results; such a trial, as Lamb remarked in reporting it, is far from a demonstration of workability. The dependency procedure added here has not been tried at all.

According to the viewpoint developed earlier, the determination of morphemic structure, relations among morphs, is not the end of linguistic research. First the sound or letter sequences were segmented into morphs, then the morph sequences analysed for syntactic relations so that a dependency diagram could be given for each sentence. (In cases of ambiguity, there are alternative diagrams, of course.) The identification of morphs with similar spelling and identical or closely related distributions as alternative representations of the same morpheme remains to be done, but that is a side problem that can at best reduce the difficulty of the following main step, which is analogous to the segmentation of the original letter sequence: the dependency diagrams have to be segmented into semes (Lamb's sense of the term [26], approximately). These semes are more nearly the units wanted in translation than the morphs or morphemes that comprise them; they have syntactic relations of their own; and a sentence must satisfy simultaneously conditions best stated in terms of (a) letter or sound sequences, (b) dependency diagrams over morphs or morphemes, and (c) diagrams over semes or sememes.
The research procedures that can now be envisaged are very like those already discussed in this section, and this formulation of the problem, 'deep grammar' as Hockett [27] calls it, or semantic compatibility in the traditional terms, is so recent that little can be said beyond a plea for attention to it.

The procedures described in Section 3 are not 'discovery' procedures; they are merely aids to the linguist who uses them along with all his knowledge of linguistic theory, semantics, and the rest. He is aided, if he is fortunate, by insight or intuition, or perhaps by fortunate guesses. His result may be a grammar in the formal sense, or merely a collection of observations. The procedures of Section 4 are 'discovery' procedures in the linguistic sense, but they are not infallible. They can be applied, to the extent that they have been specified, without any use of semantics, intuition, or judgment. Their application, however, will not always lead to a complete, consistent grammar capable of assigning at least one description to every sentence in the text on which it is based and to some other sentences not in that text.

On the contrary, such discovery procedures can be written without difficulty. For example, given a text, cut it at random into 'morph occurrences', insert 'sentence boundaries', and assign every morph to one or both of two classes: class X does not occur just before a sentence boundary, class Y always does. Adopt two dependency rules: an occurrence of a class X morph governs a following occurrence of a class X morph or of a class Y morph; an occurrence of a class Y morph therefore governs nothing. This grammar covers the text and can generate an endless number of additional sentences. It will account for new texts chosen at random, except for the necessity of adding some new 'morphs' to the dictionary. It is unambiguous, in that it assigns exactly one structure to every sentence. Unfortunately, this grammar will accept a great many intuitively undesirable sentences, help but little in machine translation or information retrieval, and recognize too few morphs in new text. This morphemic grammar, moreover, will show no relation to any higher- or lower-stratum grammar. The two classes, X and Y, are not morphologically differentiated, even approximately, and the discovery of semes would be fortuitous. Thus its internal simplicity is matched by the enormous complexity of its external relations.

Bar-Hillel, during the Advanced Study Institute at which these lectures were given, stated several theorems that have not yet been published. Their general tone, when applied to problems of empirical linguistics, is to denigrate 'discovery' procedures. Given an infinite set of sentences, it is impossible to determine their grammar, even if it is known in advance that they have a context-free phrase-structure grammar; the theorems quoted are even stronger and broader, but their essential feature is the impossibility of absolute inference from a finite analysis to the infinite set of sentences. Given a finite text, as we have seen, finite grammars are easy to obtain. The issue is extrapolation.

One could suspect, even before the enunciation of these theorems, that there would be difficulties. Supposing the existence of an infinite set of sentences for theoretical purposes and deciding whether a given sequence belongs to the infinite class of 'English sentences' for empirical purposes are two distinctly different problems.
The only ways to decide, empirically, about a given sequence are to find it in text and to ask an informant. Text usually gives no answer; the number of possible sequences over a given alphabet or vocabulary is much greater than the number of sentences even in an immense text, and the linguist wants to extrapolate, not to describe the given finite text. Asking an informant gives an uncertain answer, one that varies from informant to informant and even from time to time with a single informant; the answer depends on the kind of question asked as well as on the sentence given, and there is no unanimity about the question. The answer to these difficulties has always been to impose more and more criteria on the grammar derived from a finite text, to check it against new text, to check it, overall, not in minute detail, against intuition, and to include criteria of interstratal consistency: syntax must accord with morphology and semantics or sememics.

The new theorems confirm this approach by denying the possibility of any other. The empirically difficult concept of an original infinite set of sentences for which a grammar must be found is now seen to be theoretically worthless, since the correspondence of grammar and 'language' (infinite set of sentences) would be unverifiable. Intuition and insight could yield a perfect grammar, but its perfection would be untestable. Systematic procedures may never yield a perfect grammar, but their connection with finite text samples, via criteria of analysis, can be explicated, as the connection of an intuitively derived grammar cannot be. The basic concepts of linguistics, replacing the empirically and theoretically difficult concept of an a priori infinite set of sentences, will therefore be the finite collection of textually validated sentences and the set of sentences generated by a grammar. (There are theoretical difficulties about the latter set, but they do not influence this discussion.) The connection between these two sets is made in two steps: criteria for derivation of a grammar from a finite text, and procedures for the generation of a set of sentences under the control of a grammar. The grammar is rigidly connected with the finite sample. Its connections with the rest of the 'natural language' for which it is proposed as a summary description necessarily remain vague, but the linguist can test its adequacy for the recognition of sentences in new text by mechanical procedures, and he can test, by recourse to informants, the acceptability of its analyses of given sentences and the acceptability of sentences that it generates. The grammar has become the instrument of extrapolation, as Chomsky once hinted [12], and the criteria of its derivation determine the extrapolation made.

† Presented at the NATO Advanced Study Institute on Automatic Translation of Languages, Venice, 15-31 July 1962.
† Unpublished seminar paper by C. Chomsky, cited in [23].
Main paper: methodology and research design:

The courses taught in American high schools include English, History, Geography, and Mathematics. Until courses in 'Human Relations' were introduced, English and Mathematics had the special distinction of being the only courses intended to influence behavior outside the school. And, whereas Mathematics would be expected to influence behavior only in such special situations as the verification of bank accounts, English was and is expected to influence the student's behavior whenever he speaks or writes. Human Relations (and Driver Training) are also intended to influence behavior, the one universally, the other in narrowly defined circumstances. Now, everyone would agree that driving in a way that differs from the methods taught in school is dangerous (driving without using the steering wheel) or bound to be unsuccessful (driving without turning on the ignition). Likewise, doing arithmetic by nonstandard methods (as with such rules as 3+2 = 7) cannot lead to uniformly satisfactory results. These courses teach all there is to know about their subjects. On the other hand, everyone would agree that a Human Relations course does not, because it cannot, teach everything there is to know about dealing with one's fellow men; the contrary proposition is laughable. But so is the proposition that an English course teaches everything there is to know about the use of that language, and yet that proposition is often adopted in computational linguistics, admittedly only in covert versions such as 'the best dictionaries and grammars contain much useful information'.

The best dictionaries and grammars (e.g. [3]) do indeed contain enormous amounts of useful information about English, French, and other languages. To omit them from a list of sources of linguistic data would be folly and lead to regrettable waste of time and money. But they do not contain everything there is to know about English, French, or any other language, except possibly some dead language of which only a few sentences remain. Some of the most striking examples of the gaps that can always be found are rules for selection of equivalent words and equivalent grammatical structures in translation; rules for prepositional usage and the kinds of structures (phrases, subordinate clauses, etc.) that particular words can govern; rules for ellipsis; rules for pronominal reference; rules for insertion, deletion, or translation of articles, moods, and aspects. There is just not enough in all the dictionaries and grammars of English and French to make possible the immediate construction of a good system for automatic translation of one language into the other, or of a good system for automatic indexing or abstracting of either language, and therefore the school that would be up and doing is bound to be unsuccessful, in the view of the present author, for some time to come.

Mathematics and language resemble one another closely, as several linguists have pointed out (e.g. [4]); the construction of a grammatically accurate sentence so closely resembles the construction of a valid formula that one may not immediately see why mathematics should be so definitively exposed in its treatises and languages so inadequately dealt with in theirs. The answer is nevertheless immediately clear: languages are invented, learned, modified, and kept unaltered by largely unconscious processes in human interaction.
Like other aspects of human behavior, language is a matter of convention, but it must be clearly understood that these conventions are mostly unconscious. When a child comes to school for the first time he already knows a great deal about his language, and what he learns thereafter he learns partly in language classes but partly elsewhere. The purpose of language teaching (that is, classroom teaching of the child's native language) is only to reinforce certain conventions that, according to experience, are not adequately supported by the unconscious mechanisms. Its purpose can be called artificial, in contrast to the natural support of conventions outside the classroom.

Since the child does not learn his native language in the classroom, but only a few somewhat artificial elements of his language, it is not necessary for the sake of such teaching to have a thorough description of the language in systematic form. And the teaching of second languages also depends on practice, on unconscious learning, and on knowledge of the first language. Heretofore, full-scale knowledge of natural languages has been an object of at most academic interest, and the academicians (the linguists in this case) have had special interests: phonetics and phonemics, morphology, and to a limited degree syntax. Limitation of attention to these areas avoided conflicts with neighboring disciplines, and brought the reward of quick success. In less than a century the study of speech sounds has reached the point of making automatic speech production and recognition almost realizable. Within half a century, the study of morphology and syntax has made the automatic dictionary possible, automatic parsing possible within certain clear limits and with certain prerequisites, and automatic methods of linguistic research almost realizable. On the other hand, areas extending toward what is generally called semantics have not been studied so closely, and the motivation to study 'usage' in detail and in extenso has been absent. Some of the conventions that define natural languages have been brought to light; others remain unconscious.

Thus the difference between mathematics and language: the mathematician begins with rules or conventions, explicitly formulated, and works out their consequences, the sentences of his formal languages. The native speaker acquires a set of conventions by overt learning, in which case the conventions are at least occasionally conscious, or by covert learning, through listening to sentences and reading them, in which case the conventions may never be conscious. The linguist's task is to obtain a full statement of these conventions. Those that are conscious can be obtained by asking direct questions or, for well-known languages, by reading reference books. Those that are still unconscious can only be discovered by inference from observation of behavior. And there is the additional difficulty that the so-called 'conscious rules' may not control normal linguistic behavior; hence even these must be verified.

The linguist has several kinds of raw material at his disposition. He can use text in the language that he wants to study, and this text may have a natural origin or it may have been produced in response to his questioning.
He can use parallel texts in two languages, and again these texts may have natural origins, as when a Russian book is translated into English for publication, or they may have been produced expressly for the linguist, as when the linguist asks an informant to translate a sentence from the linguist's language into the informant's. He can interrogate informants in any way he chooses, for example asking whether a sentence that he utters is correct, or asking whether there are any words in English (if that is the informant's language) that form plurals but not with -s or -es, or asking the informant to comment, in the linguist's own terms, on the sentences of a text. He can, in principle, collect data on the meanings of texts or fragments of text, although the techniques that he would need are not well developed.

Linguistic methodology has variants corresponding to choice of raw material. If the sole object of study is a text, obtained under neutral conditions, then the methodology is called 'distributional analysis', and its only objective is to characterize the sentences of the language. The chief advocate of this method, at least in the United States, seems to be Harris [5]. It is obvious that no other objective can be attained, since all that is known about any sequence of sounds or letters is that it does or does not occur in the text. The object studied is always a finite text, and there is always a finite characterization of the sentences in a finite text. In fact, there are indefinitely many such characterizations. The hope of eliminating some of them, or even of reducing the set of acceptable characterizations to a single member which could be called the (unique) grammar of the language, has led to the introduction of extrinsic principles, of which the most famous is simplicity.

A simple example of distributional procedure leads to a classification of English written consonants. Let + stand for the space between words, and v stand for any vowel. The linguist can inquire what consonants occur in the distributional frame +_v, that is, after a word space and before a vowel. If the text consulted is large enough, he will find every consonant. Now he takes the frame +c_v, where c is any consonant. In this frame he finds H, L, and R very often (and after many different consonants); he finds M, N, and W less often; he finds that several consonants often occur after S; and he finds that a number of other consonants occur rarely, each after only one or two other consonants. He therefore asserts that there are four main frames for the classification of consonants (with respect to occurrence at the beginning of a word): +S_v, +_Hv, +_Lv, and +_Rv. In the first frame, C, H, K, L, M, N, P, T, and W occur (for example, in science, ship, skate, slit, smooth, snuff, spark, stand, swear). In the second, C, G, P, S, and T (check, ghost, philosophy, share, that); note that the appearance of H in the first frame is equivalent to the appearance of S in the second. In the third, B, C, F, G, P, and S (black, clay, flay, glass, play, slay); again, +SLv is either S in +_Lv or L in +S_v. In the fourth frame, B, C, D, F, G, P, T, and W (bray, craw, draw, free, grey, prey, tray, write).
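A minimal sketch of such frame counting, assuming plain written text and treating every letter outside AEIOU as a consonant, might look as follows; the tokenization and the sample sentence are illustrative only.

    from collections import Counter
    import re

    VOWELS = set("AEIOU")

    def initial_cluster_frames(text):
        # Count occurrences of consonants in the frame +c_v: a word-initial
        # consonant pair followed by a vowel.  Keys are (c, following consonant).
        counts = Counter()
        for word in re.findall(r"[A-Za-z]+", text.upper()):
            if (len(word) >= 3 and word[0] not in VOWELS
                    and word[1] not in VOWELS and word[2] in VOWELS):
                counts[(word[0], word[1])] += 1
        return counts

    sample = "The black clay flag drew great praise from the small crowd."
    for (c1, c2), n in sorted(initial_cluster_frames(sample).items()):
        print(f"+{c1}{c2}v  {n}")

Grouping the resulting pairs by their first or second member reproduces observations of the kind reported above: the clusters beginning with S, and the clusters ending in H, L, or R.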
These observations are not the end of the description of English initial consonant clusters, and the description is not the end of the analysis, but they illustrate the kind of observations that distributional methodology permits.

If the linguist chooses to accept statements about meaning as raw material, he must of course obtain them by interrogation of an informant. Bloomfield's definition of the morpheme, quoted by Nida [6] and widely accepted, illustrates form-meaning methodology: "A linguistic form which bears no partial phonetic-semantic resemblance to any other form is . . . a morpheme". If a phonetic description of the language is given in advance, "partial phonetic resemblance" is clearly determinable. Thus, for example, ban-bar, can-car, fan-far, man-mar, pan-par, tan-tar are pairs bearing partial phonetic resemblance; in each pair, the two words have the same initial consonant and the same vowel (in written form). Does ban bear any partial semantic resemblance to bar? It would not be wise to answer 'no' too quickly, since 'He was banned' and 'He was barred' might well be said to have similar meanings. The hyphenation of 'phonetic-semantic' in Bloomfield's definition means that the partial phonetic and semantic resemblances must be correlated, however, and the only evidence of such a correlation is that the same pair of resemblances occurs in several forms. If can and car, fan and far, etc., bear any partial semantic resemblances to one another, pair by pair, they are surely not the same as the resemblance of ban to bar. On the other hand, build-builder, work-worker, walk-walker, and many more such pairs, are asserted to bear a common partial phonetic-semantic resemblance. The first member of each pair does not end in -er; the second does. The first member of each pair names an action; the second names a person who performs the action. Thus -er is identified as a morpheme with the meaning agentive.

Hjelmslev's commutation test is methodologically a form-meaning procedure. As quoted by Togeby, "Les éléments du contenu ne sont indépendants que si leur interchangement peut entrainer un changement d'expression" [7, pp. 7-8]; that is, elements of content are independent only if interchanging them can entail a change of expression. Here 'élément du contenu' should be understood as a morpheme, semantically defined, and 'expression' has to do with the phonetic representation of morphemes. Thus two supposedly distinct meaning units must have phonetically distinct representations, at least in some contexts ('peut entrainer').

Note that any use of parallel texts, whether in two languages or in one (in which case one is a paraphrase of the other), is a form-meaning procedure, since the assertion that the texts are parallel is a semantic assertion. Likewise, for reasons to be discussed below, the method with text and editors that will shortly be introduced is a semantic procedure.

A third methodology is psychological. Linguistic conventions are effective only to the degree that they are part of the cognitive systems of speakers of the language. Psychological methodology makes the speaker an explicit object of study and undertakes to determine his cognitive structure. Martinet [8] quotes Baudouin de Courtenay's 'phonic intentions' as an example of a psycholinguistic concept and gives an illustration of its application. French has two l's, one voiced, as in /lak/ = lac, the other unvoiced, as in /pœpl/ = peuple.
These two sounds represent the same phoneme, however, because they occur in mutually exclusive distributions, or because they never serve to differentiate words with different meanings, or because, and this is the application of psychological methodology, they result from the same phonic intention. In other terms, they are represented in the cognitive structure of a speaker of French by a single element. But, as Martinet points out, evidence about cognitive structures is hard to obtain. Very few studies can be cited, but Miller's experiment on speech-perception units is certainly one [9].

It is widely believed that the three methodologies should lead to the same results. Martinet's version of the argument is that cognitive discriminations are made only when commutation, the need to keep sounds separate because they differentiate words, for example, forces them. The distributional methodology can be tied to the others by the argument that if two phones, for example, have complementary (mutually exclusive) distributions, they cannot serve to distinguish any pair of words.

Each of the three methodologies has both advantages and disadvantages. In favor of the pure distributional methodology is the simplicity of collecting the data. Semantic and psychological data can be obtained, but in each case the theory is not well enough developed to permit the establishment of adequate controls. Against distributional methodology is the non-uniqueness of its results. The use of extrinsic principles such as economy is not fundamentally unsound, but economy or simplicity has not yet been formulated in terms that all linguists can accept, and it cannot yet be demonstrated that any particular set of extrinsic principles is adequate to reduce the many possible distributional grammars based on any given finite text to uniqueness. The discovery that a fixed set of extrinsic principles had such an effect, and the further discovery that the resulting unique grammar corresponded to semantic and psychological findings, would be a linguistic achievement of great importance. The extrinsic principles thus supported, if they in turn could be proved unique, would have the status now sometimes claimed for the principle of economy: they would give a metapsychological characterization of the speakers of the language or languages, of the community of speakers, as they are supposedly characterized by Zipf's principle of least effort [10].

When a digital computer is available to the linguist, many tasks are conveniently carried out that would be nearly impossible without it. For the distributionalist, every operation involves scanning text for occurrences of an item in a frame, and the number of items and frames that ought to be studied is large. Thus Harris, a decade ago, regarded the distributional method as an idealization of good practice, impossible to apply. Only the beginnings of a system for automatic distributional analysis have been created as yet, but it is no longer impossible to imagine practical use of distributional methodology.

An adaptation of the form-meaning method to the special circumstances of computer use is the 'cyclical' method with posteditors [11]. When this method is applied to the study of a language, a crude description of the language must first be constructed. The work then proceeds through a series of stages. At each stage, the existing description of the language is supplied to a computer program which applies it to a sample text. Here the generalizations of the description are converted into analyses of specific sentences. If the description is incomplete, some sentences may be analysed satisfactorily, but others may be given incomplete or incorrect analyses, and some sentences may not be analysed at all. The correctness of each sentence analysis is decided by the editors (posteditors, because they examine the text after the computer program has analysed it). The editors correct any errors they find, and their corrections furnish the raw material for studies that lead to an improved description of the language. The modification of the description ends one cycle and permits another to begin.
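The cycle itself can be stated in schematic code. In the sketch below, analyse, postedit, and revise are stand-ins, assumed for illustration, for the computer program, the human editors, and the linguist's data reduction; nothing here commits to a particular representation of the description.

    # Schematic form of the cyclical method with posteditors.

    def cyclical_method(description, sample_texts, analyse, postedit, revise):
        for text in sample_texts:                  # one cycle per sample text
            analyses = analyse(description, text)  # generalizations -> specific analyses
            corrections = postedit(text, analyses) # human correction of errors
            if not corrections:                    # description already adequate here
                continue
            description = revise(description, corrections)
        return description

The essential point is visible in the control flow: the description is never revised from general statements volunteered by editors, but only from their corrections of specific machine-produced analyses.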
If the editors were eliminated, the cyclic procedure would be a distributional method. The data to be studied in each cycle would consist of the sentences not fully analysed. But the editors use their whole knowledge of the language and the subject matter of the text when they consider the correctness of each sentence analysis. Hence the procedure uses form-meaning methodology.

Nevertheless, the procedure avoids asking informants or editors questions of certain difficult kinds. The editor is never asked to make a general statement about the language, but only to comment on specific sentences. Second, he is never asked to provide any sentences; not the editor, but the original author, vouches for the sentencehood of each sentence in the text. All disputes about such sequences as Chomsky's 'colorless green ideas sleep furiously' [12] are thus avoided. Third, the editor is never asked to state the meaning of a sentence, or to say whether two sentences have the same meaning or related meanings, except that he may be asked to translate or paraphrase a text, and a text and its translation or paraphrase are expected to have almost equivalent meanings. Many delicate questions, whose answers are too doubtful to provide good support for a linguistic description, do not have to be asked. Moreover, the answers given by editors to the questions that must be asked can be checked. The same text can be edited by different persons and their corrections of the machine-produced analyses compared. The method avoids or controls errors, as a good method should.

Two other characteristics of a good research procedure are economy (in the operation of the procedure, not in the results) and convenience, which leads to greater accuracy and economy. The convenience of a cyclic procedure with posteditors depends in part on the design of the posteditors' worksheets, the forms on which they are given analysed text for correction; in part on the kinds and quantities of corrections that they must make, and the notational scheme provided for the indication of changes in the machine-produced analyses; and in part on the processes that must be carried out after postediting, the processes that reduce the raw data to an improved description of the language. Economy should be gained by the use of the computer, first to provide tentative analyses of the text, second to manipulate the raw data.

So far, almost nothing has been said about the nature of a linguistic description, or about the places in the cyclic procedure where it is involved: the computer program that assigns analyses to sentences, the postediting that corrects these analyses, and the data reduction that modifies the description. A text is a string of occurrences of characters from a finite alphabet.
In natural languages, texts can be segmented into recurrent substrings, each indicating the occurrence of a word or morpheme. Part of the description of a language is a list of these substrings; with each must be given a statement of its linguistic properties. The list is a dictionary, and one step in sentence analysis is dictionary lookup. All the rest of the description, and all further steps in sentence analysis, use the properties of units, not their alphabetic representations.

The structure of a sentence is a set of relationships binding all the word or morpheme occurrences in it together into a whole. Among the competing theories of sentence structure now extant, only one will be introduced and used here. The theory of immediate constituents [12] is its principal rival, and far more widely known, but the author's experience is largely confined to the theory of dependency, which has the same descriptive power in a certain formal sense [13]. According to the theory of dependency [14], [15], [16], [17], every word (or morpheme; but word will be used hereafter, as it can properly be used in the study of some but not all languages) depends on one other word in the same sentence. There are two exceptions: one word in every sentence is independent, and relative pronouns, adverbs, and adjectives depend on two other words simultaneously. Each word serves some function for its governor. The number of functions in any natural language is apparently small, and it seems reasonable to postulate that no word ever governs two words with the same function at the same time, again with certain exceptions. Strings of appositives ("John, the leader of the group, the strongest member, the one on whom all others relied," etc.) occur, and it is moot whether, say, all appositives depend on the first in the sequence or each on the one directly before it. Combinations of words with a conjunction ("One, two, three, ..., and N are integers") occur, and it seems best to attribute the same function to all conjoined elements and let them all depend on the conjunction.

The fact that one word can serve a particular function for its governor, together with the fact that a second word can govern the same function, does not mean that the first word can serve the given function for the second. Thus 'books' can serve subjective function, and 'am' can govern a word with subjective function, but 'books-am' is not a possible subject-governor combination. The properties of words that determine whether they can serve functions for one another are called agreement properties. If word X can serve function F for word Y, words X and Y agree with respect to function F with X as dependent.

Among the properties that must be listed in the dictionary are grammatical properties. The grammatical properties of a word are the functions it can govern, the functions it can serve (as dependent), and the agreement properties involved in any of its potential functions, plus certain other properties concerning word order and punctuation that will not be discussed here.

Given the theory of dependency and a dictionary in which grammatical properties are listed, a computer can determine the structure of any sentence provided (i) the sentence is composed of words in the dictionary, (ii) the word-occurrences in the sentence are used in accordance with the properties shown in their dictionary entries, (iii) certain word-order rules, primarily the rule of projectivity, are obeyed, and (iv) there is no ellipsis.
The rule of projectivity [16] requires that all the dependents of an occurrence, all the dependents of its dependents, etc., lie between the first words to the left and to the right that do not depend on it. Ellipsis raises problems that will not be discussed here. In general, the structure assigned to a sentence will not be unique; some sentences will be ambiguous.

The program that determines sentence structure, after dictionary lookup, has three parts [2]. The first selects, in accordance with the projectivity rule, a pair of occurrences that can be connected if they agree. The second, given a possibly connected pair, looks for a function that one can serve for the other with respect to which the members of the pair agree. The third changes the list of functions that can be governed by the governing member of the pair, eliminating the function served by the dependent member. The three parts of the program operate in rotation; after the third has operated on a pair of words (or after the second has failed to find a connection between a pair), the first selects a new pair. Properly designed, a program of this type can operate at high speed on a standard computer.
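A drastically simplified sketch of this three-part organization follows. The names Occurrence, agrees, and determine_structure, and the dictionary format, are illustrative assumptions; the sketch connects the heads of adjacent spans only (which satisfies projectivity automatically) and accepts the first connection found instead of exploring alternatives, so it finds at most one structure.

    from dataclasses import dataclass

    @dataclass
    class Occurrence:
        index: int
        word: str
        governs: list   # functions this occurrence can still govern (part 3 shrinks it)
        serves: list    # functions it can serve for a governor
        governor: int = None
        function: str = None

    def agrees(gov, dep, function):
        # Stand-in for the agreement test (gender, number, case, ...).
        return True

    def determine_structure(occurrences):
        spans = list(occurrences)          # each element heads a contiguous span
        changed = True
        while changed and len(spans) > 1:
            changed = False
            for i in range(len(spans) - 1):
                left, right = spans[i], spans[i + 1]
                for gov, dep in ((left, right), (right, left)):   # part 1: pick a pair
                    f = next((f for f in gov.governs               # part 2: find a function
                              if f in dep.serves and agrees(gov, dep, f)), None)
                    if f is not None:
                        dep.governor, dep.function = gov.index, f
                        gov.governs.remove(f)                      # part 3: delete the function
                        spans.remove(dep)                          # dep's span merges into gov's
                        changed = True
                        break
                if changed:
                    break
        return occurrences

    sentence = [Occurrence(1, "orkestr", [], ["S"]),
                Occurrence(2, "igraet", ["S", "O"], []),
                Occurrence(3, "marsh", [], ["O"])]
    for occ in determine_structure(sentence):
        print(occ.index, occ.word, occ.governor or 0, occ.function or "-")

A real program must also keep track of the alternatives this sketch discards, since, as noted above, the structure assigned to a sentence will in general not be unique.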
postediting:

The posteditor is a linguistic technician, a subprofessional aide to the linguist. He knows the languages that are being studied and he also knows, to a limited extent, the theoretical bases of the research. The tasks that are assigned to him are exacting, but they must be adapted to his special abilities and never allowed to exceed them. They are tedious, hence fatiguing, and therefore must be designed to minimize fatigue. They are time consuming, hence expensive, and therefore must be designed for speed of performance. And the results have to be keypunched and collated with the output of the computer system for subsequent analysis. The relations between man and machine, as the current saying goes, are as complex in this system as in any now operating; the best possible design has probably not been achieved, but fairly good ones have been tried out.

Before text reaches the posteditor, it has been put through automatic dictionary lookup and automatic sentence-structure determination, and it may have been translated as well. If the text has not been processed before, i.e. if it is really new, it contains some items not in the dictionary and some items already known but occurring in grammatical constructions previously unknown for the item. The text is also likely to contain new idiomatic combinations and new grammatical constructions. If it has been translated, the text may contain words with new equivalents or with equivalents that must be chosen according to new contextual criteria. Every new phenomenon gives work to the posteditor.

As the automatic processing of text goes by stages, postediting can likewise be divided into several steps. These steps can be made quite separate, or they can be combined. Thus, for example, worksheets can be printed after dictionary lookup. At that stage, the posteditor would merely fill out dictionary entries for unrecognized items. He can, in fact, be given an alphabetic listing of new items, with one or more examples of their use in the text. Using these examples, together with any approved reference works, he writes entries which are keypunched, added to the dictionary, and used in another dictionary-lookup operation before automatic sentence-structure determination is attempted. This plan minimizes the work to be done at this stage, but does not deal with the problem of new functions for old items. For example, a Russian text may contain noun occurrences governing dative nouns, or instrumental nouns, or particular prepositional phrases, contrary to previous experience. Observation of such phenomena adds to knowledge of the language, and must be handled at some stage. If the posteditor is required to revise all dictionary entries prior to automatic sentence-structure determination (SSD), with the goal of adding to every entry the codes necessary for proper treatment of the item's occurrences in the new text, he must actually determine the structure of every sentence himself, without computer help. Thus it seems impractical to correct the dictionary completely before attempting automatic SSD, but convenient to insert entries for new items. Of course, when the rate of occurrence of new items gets small, even this step can be eliminated. Its purpose is to eliminate errors in SSD, which are costly to correct. But when the cost of correcting those errors falls below the cost of repeating dictionary lookup with a corrected dictionary, it is time to drop the preliminary step.

Fixed combinations of words, idioms, have several varieties. Some must be recognized before SSD, because the grammatical properties of the idiom are distinctly different from those of the component items. Such idioms can be recognized, easily and economically, provided that they always occur with their components adjacent in the text and in fixed order. Such idioms as English 'inasmuch as' or Russian nesmotrya na (= despite) can be listed and recognized during or just after dictionary lookup. To find new idioms of this type, the posteditor must read the text, keeping in mind the grammatical properties of individual items and the capacity of the SSD system for recognizing constructions. If the number of such idioms is small, as in English and Russian, it is probably better to identify new ones after SSD and correct the errors they cause.

Thus the stages preceding SSD can well be combined with it in an uninterrupted sequence of machine operations leading to the production of a worksheet on which the posteditor will correct all the errors he finds. The argument can be extended still further, through translation of the text into a second language, so that the posteditor works on each passage of text only once. The system used at the Rand Corporation for analysing 250,000 running words of Russian text combined everything into one worksheet, but experience suggests that the cost of postediting twice, once for the determination of structure and once for translation, would not be much greater than the cost of a single, overall editing, and that the reduction of translation errors by correct determination of source-language structures would be worthwhile. Let us assume the two-step program.

An SSD system can be designed to yield at most one structure for each sentence, or more than one. If it can yield more than one, it can yield a great many. The format of worksheets for postediting must suit the situation.

If the posteditor is to look at a single structure for each sentence and correct it, the worksheet can have a simple columnar format. Each occurrence of a source-language word occupies one line, the source-language text being printed in a vertical column. Occurrence numbers are needed for subsequent collation of different kinds of information about the text, and also for easy encoding of dependency connections.
These connections are given in another column, by the computer program in the first instance and by the editor thereafter. Each occurrence has zero, one, or two governors. If zero, the program or editor writes 00 next to the occurrence. If one, the program or editor writes the occurrence number of the governor. And if an occurrence has two governors, both their occurrence numbers must be given. Each occurrence serves some function for its governor, and relatively few functions are distinguished in any language. Another column of the worksheet format is used for designation of functions, and a code assigning a one-letter designation to each function is prescribed. Next to each occurrence, the program or editor writes the code symbol designating the function that occurrence serves. Other information can be printed on the worksheet for the benefit of the editor: the grammatical information obtained from the glossary, one or more target-language equivalents for each item, etc. Table 1 illustrates this format; it is filled out with a Russian sentence and structural coding.

Working in this format, the editor has only to look for errors or gaps in dependency connections and function designations. Upon finding either an error or a gap, the editor must correct the structure. What more shall he do? The errors that he finds are caused by errors or gaps in the dictionary, the grammar, or the computer program. There may be similar errors that cause no mistakes in sentence-structure determination over the text he is editing, but such other errors should not be the object of active search. A dictionary error that has no effect on the SSD operation at a particular point may be discovered at that point, but more or less by accident, and can be noted outside the normal postediting system. The dictionary errors that cause mistakes in SSD are sometimes obvious, sometimes subtle. The entry for a new item contains no information; for Russian, inserting gender, number, and case is usually easy. An idiom that has not been identified and listed should, when it occurs, cause an SSD error, and can be recognized, but giving it a complete grammatical description may not be simple. Noting that an item occurs with a new syntactic function is necessary, since its function is part of the sentence structure that the posteditor is correcting, but adding all the agreement properties necessary for that function can be difficult. In short, correcting the dictionary (and grammar) during postediting would be contrary to the principles of speed and simplicity. Moreover, these errors can be found during the analysis that is to come (see Section 3). Errors in the program, which are certainly possible if the program is intended to find at most one structure for each sentence, are still more difficult to isolate during postediting, or afterwards for that matter. Therefore it seems best to ask the editor for corrections of dependency connections and function assignments, and for nothing more at this stage.

If the posteditor, on the other hand, is given a list of alternative structures for each sentence, the format must be quite different. As many as a hundred different structures for one sentence may be offered by the program; the format must be designed to minimize the time consumed in finding the one desired. Listing the sentence a hundred times, each time with one structure marked, is not the answer. One possible answer follows.

One way in which the alternative structures of a sentence can differ is in the choice of the independent occurrence.
The worksheet for a sentence begins with a listing of the whole sentence; each occurrence that can be independent in some structure of the sentence is marked, and a reference number is given for each. These reference numbers identify sections in the worksheet (see Table 2). The posteditor chooses the occurrence that should be independent, marks it, and turns to the section identified by the corresponding reference number, say section R-1.

Orkestr igraet marsh garnizona gussarov malen'kogo goroda. (R-1)

(Note: Although 'marsh' can be used as a verb, there is no complete structure for this sentence in which the occurrence of 'marsh' is independent.)

Another way in which the alternative structures of a sentence can differ is in the choice of occurrences that depend on the independent occurrence. Here, however, it is necessary to decide what part of the sentence derives from each dependent of the independent occurrence and what function each dependent has. Suppose, for example, that occurrence number 7 is independent, and that occurrences 3, 8, and 10 depend on it, occurrence number 10 being the last in the sentence. Then, by projectivity, occurrences 1, 2, and 4 through 6 all depend, directly or indirectly, on number 3. But occurrence 9 may depend on either 8 or 10. To avoid later difficulties, and the possibility of deciding, by mistake, that number 9 depends on both 8 and 10, the posteditor must decide at once whether 9 goes with 8 or with 10. And in general he must mark a point of division somewhere between any two successive dependents of the same governor.

Section R-1 contains a list of alternatives. Each alternative is defined by (i) a set of occurrences dependent on the independent occurrence, (ii) the functions assigned to these dependents, and (iii) the boundaries between their derivation spans. There may, of course, be only one alternative in this section, or in any other. Each alternative is represented by a listing of the whole sentence. The independent occurrence is marked, and is constant throughout the section. The boundaries between derivation spans are also marked. Below each sentence listing, the alternative functions of each dependent of the independent occurrence are listed. These function listings are arranged so that, reading across the worksheet, a set of compatible functions is on a single line. Next to each function symbol is a reference number, identifying a section in the worksheet. The posteditor chooses one line in section R-1; it contains one or more reference numbers. He turns, one by one, to each of the corresponding sections (see Table 3; its function codes are as in Table 1, except that C2 = 2nd complementary, i.e. indirect object, and C3 = 3rd complementary). Double underlining marks the main occurrence in a section; single underlining marks the dependents of the main occurrence. Thus in section R-1, igraet governs orkestr and marsh, but either orkestr is subject and marsh is object, or vice versa. In section 8, marsh governs garnizona alone, or garnizona and gussarov, or garnizona and gussarov and goroda.
The asterisks mark boundaries between derivation spans; if marsh governs only garnizona, the derivation span of that dependent runs to the end of the sentence, but if it governs garnizona, gussarov, and goroda, the derivation spans of the first two contain only themselves and the derivation span of goroda contains malen'kogo goroda.

All subsequent sections of the worksheet have the same format as section R-1, except that each alternative is represented by the part of the sentence between two derivation-span boundaries in the preceding section. If the original sentence described in the example above were divided *123456*7*89*10*, then one section of the worksheet lists occurrences 1 through 6, another occurrences 8 and 9, another occurrence 10. If occurrence 3, for example, can have two different functions as dependent of occurrence 7, and if its capacity to govern other occurrences depends on its own function, then two sections must be used for occurrences 1 through 6. By selecting one alternative in each section, the editor chooses a governor and function for each occurrence in the sentence. But the posteditor need not refer to every section of the worksheet; each alternative that he selects leads to certain sections, and he must follow these leads. When he has no leads to follow, he has finished with the sentence, whether he has looked at all sections or not; if not, the untouched sections belong exclusively to false analyses. A schematic sketch of this selection procedure is given below.
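The organization of such a worksheet, and the posteditor's walk through it, can be modelled schematically. The data layout below is an assumption made for illustration (it is not the Rand worksheet format), the editor's choices are represented by a function handed in from outside, and the function codes attached to the dependents are illustrative guesses.

    # Schematic model of the multi-structure worksheet.  A worksheet maps
    # reference numbers to sections; a section is a list of alternatives;
    # an alternative is a list of (dependent, function, reference) triples,
    # where a reference of None means there is no further lead to follow.

    def walk(worksheet, start_section, choose):
        # Follow reference numbers from the chosen alternatives; sections
        # never reached belong exclusively to false analyses.
        decisions, pending = [], [start_section]
        while pending:
            section = pending.pop()
            alternative = choose(section, worksheet[section])
            for dependent, function, reference in alternative:
                decisions.append((dependent, function, section))
                if reference is not None:
                    pending.append(reference)
        return decisions

    worksheet = {
        "R-1": [[("orkestr", "S", None), ("marsh", "O", "8")],
                [("orkestr", "O", None), ("marsh", "S", "8")]],
        "8":   [[("garnizona", "C2", None)]],
    }
    # The 'editor' here always takes the first alternative in each section.
    print(walk(worksheet, "R-1", lambda name, alternatives: alternatives[0]))

The essential property of the format survives the simplification: the editor's work is proportional to the sections his choices lead him through, not to the total number of machine-generated structures.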
When postediting has been completed, the new information is keypunched and collated with the original text, the information obtained by dictionary lookup, and that produced by SSD. Whatever the form of the worksheet, the keypunched information can be reduced to a set of specifications which, added to SSD output, determines exactly one structure for each sentence in text.

The next stage in automatic processing is an idiom lookup. This time the fixed combinations of words that must be recognized need not occur in fixed order, but they must be connected in the structure of the sentence. It is not yet possible to describe in detail the kinds of properties that must be ascribed to these idioms, and to individual words when they occur outside these idioms, but it is certain that properties similar to syntactic properties will have to be assigned to them, and that an operation similar to SSD will have to be carried out on them. An old-fashioned term for this process is the determination of semantic compatibility. Its purpose is to eliminate the ambiguity remaining after SSD, reducing the alternative structures of sentences from dozens or hundreds to one or two each, and to avoid ambiguity in the selection of target-language equivalents. It will have to be postedited, but first it must be defined in more detail than seems possible for the moment. In Section 3, it will be assumed that there exist certain source-language units (and these may be idioms or words occurring not in idioms) to which correspond sets of target-language equivalents (idioms or single words), and that the posteditor, at some stage, chooses the best equivalent for each occurrence of a source-language unit. More circumstantiality would be premature.

3. Analysis of postedited text

Computational systems for linguistic analysis are still no more than semiautomatic. They rearrange and summarize data in accordance with rather simple, specific instructions from the linguist, who must draw his own conclusions from the results. Hence, regrettably, the design of analytic procedures must take into account the linguist's momentary state of knowledge. The more he knows about the language he is studying, the more elegant and powerful can his analyses be. The start that has been made on fully automatic analysis is described in Section 4; that work is for the future, however, whereas the methods introduced in the present section are ready, at least in principle, for immediate use.
The classic tool of linguistics is the concordance. Each entry in a concordance consists of a key occurrence together with part of its context. Ordinarily, every occurrence in a text appears as the key occurrence in one entry in the concordance of the text; a given occurrence may also appear in other entries, as part of the context of other occurrences. If the context included in each entry consists of the occurrences immediately preceding and following the key occurrence, the concordance of a text is three times as large as the text itself, not counting the location indicators that are usually added to each entry in a concordance. Sometimes selective concordances are prepared, omitting, for example, all entries with function words (prepositions, conjunctions, articles, etc.) as key occurrences.

When a concordance is prepared from postedited text, the context of an occurrence can be defined in terms of structural connections instead of linear order. Thus an entry might consist of a key occurrence together with its governor and all its dependents. The function served by the key occurrence and those served by its dependents can also be included. When a dictionary is available, other information can be added to each concordance entry. The exact form of the key occurrence can be replaced with a canonical form: plural nouns with singular, all verb forms with infinitives, and so on. Grammatical information can be used as well.

The arrangement of a concordance is always systematic, since the text itself could otherwise serve as its own concordance. The system chosen depends on the information available in each entry, of course, and on the purpose the concordance is to serve. Using terminology that is familiar in computing manuals, one can speak of major, intermediate, and minor sorting variables. In a telephone book, the major variable is family name, the intermediate variable is first name, and the minor variable is middle name; in the so-called 'Yellow Pages', the major variable is name of product or service, the intermediate variable is firm name, and the minor variable, used only occasionally, is branch or dealer name. In a concordance, the major variable may be the form of the key occurrence, intermediate the form of the preceding occurrence, minor the form of the following occurrence. Many other arrangements could be listed, and no such list would be complete; a whole series of further arrangements could be defined if such characteristics as the target-language equivalent of the key occurrence were included in each entry. Each arrangement has some use. Unfortunately, the arrangement best adapted to the study of any difficult problem is least adapted, in general, to the study of diverse problems. Concordances have been published in the past [18], each the product of immense manual effort, and concordances are still being published, now often with the aid of a computer. The usual arrangement of a published concordance is by form of key occurrence as major variable and occurrence order as intermediate. This arrangement, most readily understood and used by everyone, is ill adapted to almost any particular problem that comes to mind. For example, a study of grammatical agreement rules for syntactic functions would be most tedious with such an arrangement. Since standard computer programs can make concordances quickly and with any list of sorting variables desired, it seems that the publication of concordances will have little influence on research in the future. Once a text has been put on magnetic tape for computer input, and especially if postediting data is included with it, a scholar with a new research idea can obtain the tape, name his sorting variables in accordance with his plans, and obtain a concordance (very likely a selective listing instead of a complete survey of the text) with little effort, expense, or delay.
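A sketch of such a concordance program is given below; the record layout and the particular sorting variables are assumptions chosen for illustration, not a prescribed format.

```python
# Illustrative sketch: a concordance with selectable sorting variables.
# Each occurrence record is a dict; the fields are invented for the example.

def concordance(occurrences, major, intermediate, minor):
    """Sort concordance entries by three caller-chosen variables."""
    return sorted(occurrences,
                  key=lambda occ: (occ[major], occ[intermediate], occ[minor]))

text = [
    {"form": "igraet", "preceding": "orkestr", "following": "marsh", "function": "PRED"},
    {"form": "marsh", "preceding": "igraet", "following": "garnizona", "function": "OBJ"},
    {"form": "orkestr", "preceding": "", "following": "igraet", "function": "SUBJ"},
]

# Major variable: form of the key occurrence; intermediate: preceding form;
# minor: syntactic function (available because the text is postedited).
for entry in concordance(text, "form", "preceding", "function"):
    print(entry["form"], "|", entry["preceding"], "|", entry["function"])
```

Changing the three arguments reorders the listing for a different problem, which is the whole advantage over a published concordance with one fixed arrangement.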
The concordance, although a classic tool of the linguist, is not the most powerful. Given postedited text and a computer, the linguist can call for crosstabulation by categories of many useful kinds. To make a crosstabulation involves the selection of two or more variables and the definition of units to be listed or counted. The result is a matrix in which each row represents a value of one variable, each column represents a value of another variable, and each cell contains the number of units characterized by the values of the corresponding row and column.

As a concrete example, let us consider the study of grammatical agreement with respect to one syntactic function, say the subjective function in Russian. We may suppose, for the purposes of the example, that the linguist has already analysed each Russian form (occurring between spaces in text) as composed of a stem and an inflectional suffix; he suspects that the endings are involved in agreement. Oversimplifying for clarity, let us suppose further that each form contains at most one suffix; every form that contains no suffix will be treated, temporarily, as if it contained a zero suffix, and every form will be said to contain some stem. The units to be counted are pairs of occurrences, each consisting of a governor and a dependent related by subjective function. The two variables of the crosstabulation are (i) inflectional suffix of the dependent and (ii) inflectional suffix of the governor. The matrix will have one row for each suffix that occurs in a subjective dependent and one column for each suffix that occurs in the governor of a subject. The counts are based on a certain text; each cell contains the number of occurrences in that text of subject-governor pairs with a certain suffix in the dependent and a certain suffix in the governor. Every such pair is counted just once in some cell of the matrix. Since the counts in the matrix are based on a finite text, a sample of all the Russian text ever written or to be written, they are best regarded as estimates of the counts that would be obtained from an indefinitely long text. As estimates, they are subject to sampling error, deviations from the counts that would truly describe the language but can never be obtained. The treatment of sampling error is a statistical problem that will not be discussed here. A substantial literature has been devoted to the analysis of crosstabulation matrices with sampling error, but linguistic applications are just beginning [19]. In the following paragraphs, some of the possible analyses will be discussed in terms that presuppose errorless data. The linguist who intends to perform such analyses should consult the statistical literature, or a statistician, before proceeding.
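The counting step itself is trivial to mechanize. The sketch below assumes that postediting has already reduced each subject-governor pair to a pair of suffixes; the input format and the suffixes shown are placeholders for the example.

```python
# Illustrative sketch: building the suffix-by-suffix crosstabulation.
# Each datum is (dependent_suffix, governor_suffix) for one subject-governor
# pair found in postedited text.
from collections import Counter

def crosstab(pairs):
    """Return {(row_value, column_value): count} for the observed pairs."""
    return Counter(pairs)

pairs = [("-a", "-et"), ("-y", "-ut"), ("-a", "-et"), ("-o", "-et")]
matrix = crosstab(pairs)
for (dep_suffix, gov_suffix), count in sorted(matrix.items()):
    print(dep_suffix, gov_suffix, count)
```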
The linguist who analyses the Russian data of our example begins with a hypothesis drawn from experience with many languages concerning the relation between morphology (the suffixes, the classes of stems with which they occur, etc.) and syntax. There exist, according to this hypothesis, syntactic categories in terms of which the agreement rules for the subjective function are relatively simple. He does not suppose, however, that each suffix belongs to exactly one syntactic category, nor that each distinct suffix belongs to a different category. The possible complexities of the relations between morphology and syntax guide his analysis.

First, there may be two or more suffixes that are syntactically equivalent. Looking first among the dependents, one would find such suffixes represented in the matrix by two identical rows. More precisely, since two suffixes can be syntactically equivalent even if one is more frequently used than the other, the rows should be proportional. If each entry in the matrix is divided by the sum of all entries in its row, then rows corresponding to equivalent suffixes should be identical. For simplicity in further analyses, identical rows can be combined by adding together the entries in each column; one row, representing a set of syntactically equivalent suffixes, replaces several in the matrix. The same analysis, leading to a similar reduction of the matrix, is performed on the columns. In the example, the singular suffixes of different declensions would be equivalent, and so would the plural suffixes.

Second, there may be some suffix that is used in two syntactically different ways, each corresponding to the use of some other suffix. The ambiguous suffix would be represented by a row equal to the sum of two other rows (or of three or more other rows). In the example, an ending that is singular in one declension and plural in another would have this property. Since the suffix may be singular in a high-frequency declension and plural in one of low frequency, or vice versa, its row need not be equal to the sum of two others, even after division of every entry by row sums, but only to some linear combination. Upon finding such a row, which corresponds either to a single suffix or to a set of syntactically equivalent suffixes, the linguist must call for a supplementary analysis. Let X be the ambiguous suffix, Y and Z the two suffixes whose range it covers. The subject-governor pairs in which X appears are sorted into two groups: those with governors that also govern Y, and those with governors that also govern Z. The question is whether X occurs with the same or different stems in the two groups of occurrences. If the stems are different, they can be assigned to different classes, and indeed they may well belong to different morphological classes already because they take different sets of suffixes. With a stem of one class, X belongs to one syntactic category, that of Y, and with a stem of the other class X is equivalent to Z. The ambiguity of suffix X is resolved morphologically and its row in the matrix can be combined with the rows of Y and Z, reducing three rows to two. On the other hand, if the stems in the two groups of occurrences are the same, the ambiguity is not eliminated morphologically although it can be eliminated syntactically. The same reduction of the matrix can nevertheless be performed. Naturally, a similar analysis and reduction is carried out on the columns of the matrix.
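The first of these reductions, the merging of proportional rows, can be sketched directly; the tolerance used to compare normalized rows is an assumption, since real counts carry sampling error and exact equality cannot be expected.

```python
# Illustrative sketch: merging proportional rows of a crosstabulation.
# 'matrix' maps a row label to a list of counts, one per column; all rows
# are assumed to have the same number of columns.

def normalize(row):
    total = sum(row)
    return [x / total for x in row] if total else row

def merge_equivalent_rows(matrix, tol=0.05):
    """Group rows whose normalized profiles agree within tol, adding the
    raw counts of each group together."""
    groups = []  # each group: (normalized profile, labels, combined counts)
    for label, counts in matrix.items():
        profile = normalize(counts)
        for group in groups:
            if all(abs(a - b) <= tol for a, b in zip(group[0], profile)):
                group[1].append(label)
                group[2][:] = [a + b for a, b in zip(group[2], counts)]
                break
        else:
            groups.append((profile, [label], list(counts)))
    return {"+".join(labels): counts for _, labels, counts in groups}

matrix = {"-a": [40, 2], "-ya": [20, 1], "-y": [1, 30]}
print(merge_equivalent_rows(matrix))  # '-a' and '-ya' merge into one row
```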
A third possibility is that some suffix has one syntactic use that is unique to itself, another use equivalent to the use of some other suffix. The zero suffix appears in some Russian declensions, where it is syntactically equivalent to other suffixes; it also appears, for example, in the personal pronouns ya = I, my = we, etc., making it the unique first-person suffix. It follows that the corresponding row in the matrix is equal to a linear combination of other rows plus a remainder. As in the previous situation, a supplementary analysis is required, and in the example it will lead to recognition of several morphologically resolvable uses for the zero suffix.

The linguist would like to continue the analysis until the matrix contained only one nonzero entry in each row and each column; he could then call each row a simple syntactic category. In the Russian example, the matrix has 14 rows and 14 columns at that stage. Although the analyst working distributionally cannot label his matrix in this fashion, we can name the rows and columns by specifying person, number, and gender. Writing 1, 2, and 3 for first, second, and third persons, m, f, and n for masculine, feminine, and neuter genders, and s, p for singular and plural numbers, the columns (and rows) are labeled 1ms, 1mp, 1fs, 1fp, 2ms, 2mp, 2fs, 2fp, 3ms, 3mp, 3fs, 3fp, 3ns, and 3np. To reach this point, the linguist may be forced to consider a row or column which contains two nonzero entries, neither corresponding to the unique nonzero entry in any other row or column, but such considerations should wait until the end of the analysis. Having reached this point, the linguist can reconsider. He should discover that gender in Russian nouns is determined by the stem, not by the suffix; that no Russian verb has a suffix exactly identifying person, number, and gender; and so on. He will probably not retain the separate syntactic categories 1ms, 2ms, and 3ms for verb suffixes, since every suffix that belongs to any one of these categories either belongs to all three of them or to one of them and also to one or two others.

The final stages of the analysis belong to the linguist, who can call on many different criteria in sharpening and systematizing the classification. The earlier stages, beginning with a large matrix with entries that always, in practise, must be subject to sampling error, can be programmed and carried out on a computer. From the preparation of concordances to the construction of crosstabulations to their analysis, the computer has taken a larger and more sophisticated part in the research process. With such a perspective, the scholar who limits its role to sorting and listing appears to be wasting his resources.

The illustration of crosstabulation analysis just presented was drawn from the classic domain of the relations between morphology and syntax. Since classic methods have been highly productive in this domain, new ones are not likely to add much, and the illustration is to that extent misleading, but it was chosen for clarity, not as representative of the problems to which the crosstabulation method should be applied. This method, or better, this family of methods, is perfectly general for the class of problems involving relations among two or more variables. It is not surprising, therefore, to find many possible applications for it in linguistics.
There is, for example, the problem of syntactic classification of stems. Given a syntactic function for which the morphological agreement rules are known, are there further rules determining the classes of stems that can occur in words connected by this function? Again the rows of the matrix correspond to dependents, the columns to governors, and the entries in the cells are occurrence counts. But this time the stems, rather than the endings, appear as row and column labels. Since the number of stems is ordinarily much larger than the number of suffixes in a language, the size of the matrix will be larger for this analysis, and the entries in the cells will consequently be smaller on the average. In fact, unless the text in which occurrences are counted is extremely large, almost all the cells will contain zeros and almost all the nonzero counts will be ones. The analysis is more delicate, but not impossible. Unlike the determination of morphological agreement rules, the study of stem-stem agreement rules in syntax is just beginning, and it may be hoped that new analytic methods will accelerate it.

The classification of texts according to subject matter, and the corresponding classification of vocabulary items, although it is perhaps of more interest in information retrieval than in machine translation, is another example of a problem to which crosstabulation analysis can be applied. The first stage of the analysis uses a matrix in which rows and columns are labeled with the titles (or identification numbers) of books, articles, or abstracts; here the set of row labels is identical to the set of column labels. Each cell contains, for example, the number of words that occur in both of the two documents. Analysis of this matrix yields a classification of the documents. The next stage is a consideration of the vocabulary of the whole library, using as a description of each word the number of times it occurs in each class of documents. Each word is characterized as specific to a class of documents, hence, by inference, to a subject of discourse, or as specific to a range of document classes, or as general. A few such studies have been performed, although so far only on a limited scale [20]. One current difficulty is that, for linguistic reasons, the word is most unlikely to be a suitable unit, yet the units that ought to be used cannot now be identified and listed.

The same difficulty unfortunately applies to the study of rules for choice of equivalents in machine translation; the units that ought to have stable equivalents are still unlisted. On the other hand, the search for rules of equivalent choice may lead to the identification of appropriate units. Taking the word as the starting unit, and one word at a time, the analysis can consider several variables: the functions the word serves in its various occurrences, and for what governors; the functions served by dependents of the word, and the words that serve them; and, of course, the translations of the word itself as well as the translations of related occurrences. (It is assumed here that posteditors have supplied at least a provisional translation of the text being studied, including specific translations for individual words or word groups, in addition to indications of the structure of source-language sentences.)

The first step in analysis of a particular word might be the construction of a matrix in which each row represents one equivalent of the word and each column one function it serves.
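This first step can be sketched under an invented record format in which each occurrence of the word under study carries its syntactic function and its postedited equivalent; the functions and equivalents shown are placeholders.

```python
# Illustrative sketch: does syntactic function determine the equivalent?
# Each occurrence of the word under study is (function, equivalent).
from collections import defaultdict

def equivalents_by_function(occurrences):
    """Group the observed equivalents under each syntactic function."""
    table = defaultdict(set)
    for function, equivalent in occurrences:
        table[function].add(equivalent)
    return table

occurrences = [("OBJ", "march"), ("SUBJ", "march"), ("OBJ", "march")]
table = equivalents_by_function(occurrences)
if all(len(equivalents) == 1 for equivalents in table.values()):
    print("equivalent determined by function; analysis complete")
else:
    print("further analysis needed: classify the governors")
```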
If each column contains exactly one nonzero cell, the equivalent of the word is determined by its syntactic function and the analysis is complete. Otherwise, the next step is to take each function served by the word and construct a matrix for that function with rows again representing equivalents and columns representing all the words that govern that function. If each column of this matrix contains exactly one nonzero cell, the governors of the word can be classified according to the equivalents that they determine, and again the analysis is complete. However, other interesting situations are possible. If one equivalent is limited to a few governors while the others appear with many different governors, the word under study can be considered to form a fixed combination with each of the few governors that determines an equivalent, and the fixed combinations can be taken as translation units. On the other hand, if most equivalents are determined by particular governors whereas one equivalent appears with many different governors, the latter equivalent can be set aside and the governors classified as before. The diffuse equivalent may occur, for example, when the word under study appears with a particular dependent or with one of a particular class of dependents. The analysis continues as necessary, with matrices having, as column labels, words that occur as dependents, translations of related words, etc. At each stage, fixed combinations, in the target language as well as in the source, can appear and be noted.

In the selection of equivalents, several hypotheses always have to be considered, and the research procedure should take all of them into account. One is that the individual word is too small a unit, i.e. that it must be translated as part of a fixed combination, at least in some of its occurrences. Another is that local conditions (syntactic function, type of governor or dependent) determine equivalent choice. A third hypothesis is stylistic or accidental variation; posteditors can choose different equivalents for different occurrences of a word without having any clear or explicable reason. Fourth, the hypothesis of subject field has to be remembered; a word can have different translations in different articles, even if there are no distinctive differences in local context, because of differences in subject matter and in the habits of authors in different disciplines. And a fifth hypothesis, although probably not the last that could be found, is that the word, in some of its occurrences, is an abbreviation of a fixed combination. If a word sometimes appears in one or more fixed combinations, and if one of those combinations occurs near the beginning of an article, it is possible that subsequent occurrences of the word stand for the whole combination and must be translated with the target-language abridgment of the combination. Harris's paper on 'discourse analysis' [21] was concerned with such problems, and K. E. Harper's (unpublished) observation that Russian nouns are modified more often near the beginning of an article than further on suggests that the phenomenon is widespread. Unfortunately, systematic procedures for analysis of linguistic relations that span more than one sentence are still undeveloped and cannot serve machine translation, but research procedures aimed at discovery of equivalent-determination rules can take this hypothesis into account.

4. Automatic linguistic analysis

The object of linguistic analysis is to characterize the sentences of a language.
The language is not a finite text, but a finite text is all that the linguist can ever study. Before his analysis, he can be sure of nothing about the language, but he can only begin if he is willing to hypothesize some properties for it, and he naturally chooses properties that are universal, or at least widespread, in languages already known. These properties constitute his theory of the structure of natural languages; from the theory, he would like to derive a set of procedures that will yield a concrete description of the finite text that he is able to study and, hopefully, characterize the language beyond that text. Certain linguists, adopting the distributional methodology that excludes semantic and psycholinguistic data, have raised the problem of purely automatic 'discovery procedures'. Since such procedures could be programmed and applied, by means of a computer, to very large quantities of text, their importance for the future of linguistics seems great. The speculation that distributional, form-meaning, and psycholinguistic methodologies would yield virtually equivalent structures for any natural language makes the possibility of automatic linguistic analysis even more attractive. The beginning that has been made and the prospects for further work are the subject of this section.

The first step in the analysis of a new language, after texts have been recorded, is the reconstruction of its vocabulary. (A certain normalization of its alphabet, whether of letters or of phones, sound units, may be needed, but can be passed over here.) A universal feature of natural languages is that groups of alphabetic characters form units with which the rest of the language is constructed. Each such group is a morph and represents a morpheme. In general, morphs occur one after another in text; there are many exceptions to this rule, as in languages where the consonants of a word belong to one morph and the intervening vowels to another, but it is better to oversimplify than to introduce all the important but complicated qualifications that a useful discovery procedure would have to accept. The problem, then, is to segment a text into morph occurrences. In some texts the segmentation is marked by the author, who spaces after each morph. More often, short strings of morphs are bounded by spaces and have to be segmented internally (as in printed English, French, German, Russian, etc.). In many spoken languages and some written ones, the strings of morphs between spaces or silences are long. The silence or blank space can be taken as an absolute morph boundary (omitting qualifications as usual), and the sequences between can be sorted out. In printed English or Russian, for example, there will only be a few thousand different sequences between blanks in a text of fifty or a hundred thousand running words, whereas in spoken French an equivalent text might contain only a few silence-to-silence sequences that occurred more than once each.

A procedure has been proposed by Harris [22] for segmentation of morphs. Take one unit from silence to silence or from space to space, say x_1 x_2 x_3 ... x_n. Here each x_i is some character of the alphabet. Consider the list of all silence-to-silence units that begin with the same character x_1, and determine the variability of second-character choice among these units. If the next character in every unit is the same, variability is nil; if all characters of the alphabet occur as second character following x_1, each equally often, variability is maximum.
The observed variability, say V_1, is noted. Next V_2 is determined; it is the variability of third-character choice among all units that begin with the sequence x_1 x_2; then V_3, V_4, and so on. Plotting V_i against i, the analyst expects a declining curve, because there are relatively few morphs in a natural language as compared with the number that could be constructed using its alphabet. English, for example, could have 26^5 = 11,881,376 different five-letter words, about twenty times as many words as in its entire vocabulary. Some of the words that do not occur are forbidden for phonological reasons, e.g. *mxzntzz or *qqq. Others are phonologically possible but simply not used, e.g. *maser (until recently) and *thaser (until it becomes acronymic, or otherwise enters the language). There are many morphs that begin with any x_1, fewer that begin with the same x_1 and any x_2, and so on, so that V_i falls until the end of the morph is reached. If the morph x_1 x_2 ... x_k can be followed by relatively many other morphs, V_k is larger than either V_{k-1} or V_{k+1}. Hence a relative maximum in V_i often marks the boundary of a morph. Exactly the same calculation can be performed from right to left, this time giving variability of next-to-last character among units ending with x_n. The relative maxima given by the two calculations should mark the same boundaries, but in a language where each space-to-space unit consists of one stem morph followed by zero or one suffix morphs the right-to-left calculation should give more obvious results.

The Harris procedure cannot work in a language that uses every phonologically allowable sequence to represent a morph, but no such language is known. There may be a few languages with so few phonemes that a large proportion of the allowable sequences are used, and if there are, the procedure would be inefficient for them. Even in languages where it is most efficient, the procedure is not likely to find all the morph boundaries in a text, and it is likely to mark some that subsequent analyses do not retain. On the one hand, if vowel sequences are narrowly restricted by phonological rules, and consonant sequences likewise, but vowel-consonant sequences relatively unrestricted, there will be a relative maximum in V_i at each transition from vowel to consonant or vice versa, whenever there are two phones of the same class before the transition. These phonological maxima will be large only in peculiar cases, however, and can therefore, perhaps, be disregarded. They can be eliminated if adequate phonological analyses are performed in advance of the Harris procedure; V_i can then be calculated as the ratio of observed variability to phonologically allowable variability in each position. On the other hand, if a morph can be followed only by a few other morphs (for example, if every verb stem must be followed by one of two or three suffixes), V_i will not have a relative maximum at its boundary. In spite of these qualifications, an experiment with English showed the procedure to be more than 80 per cent effective,† and that figure could be improved by the use of refined technique and a larger sample.

The object of a grouping procedure like Harris's is not to find the morphs in a language, but to find a set of units with which analysis can proceed. If the procedure is 80 to 90 per cent accurate, as measured against the results of a psycholinguistic or form-meaning analysis, the tentative morphs that it produces can be submitted to further distributional study.
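In the simplest form of the procedure, V_i can be taken as the number of distinct characters observed after each prefix, with boundaries placed at relative maxima. The sketch below uses that raw count; it is a toy rendering of the left-to-right half of the procedure, not Harris's own formulation of variability.

```python
# Illustrative sketch of successor-variety segmentation (after Harris).
# V[i] counts the distinct characters that follow the first i characters
# of the unit, over all units in the corpus.

def successor_variety(unit, corpus):
    varieties = []
    for i in range(1, len(unit)):
        prefix = unit[:i]
        successors = {u[i] for u in corpus if u.startswith(prefix) and len(u) > i}
        varieties.append(len(successors))
    return varieties

def segment(unit, corpus):
    """Cut the unit after each prefix whose variety is a relative maximum."""
    v = successor_variety(unit, corpus)
    cuts = [i + 1 for i in range(1, len(v) - 1)
            if v[i] > v[i - 1] and v[i] >= v[i + 1]]
    pieces, last = [], 0
    for c in cuts:
        pieces.append(unit[last:c])
        last = c
    pieces.append(unit[last:])
    return pieces

corpus = ["reads", "reader", "reading", "real", "red", "ride", "rider"]
print(segment("reader", corpus))  # ['read', 'er']
```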
Procedures lately suggested by Lamb and Garvin require a text with marked morph boundaries as input, but it would be easy to adjust their methods, and other methods of the same general kind, so that one output would be a revision of those boundaries.

Given a tentative list of morphs, the next problem is to determine their mutual relations, i.e. their distributions relative to one another. One aspect of the problem is classification of the morphs; two morphs belong to the same class if they have identical distributions. The other aspect of the problem is the listing of constructions, i.e. of admissible combinations of morphs. In dependency terms, constructions are characterized by functions and agreement requirements. In any terms, the initial difficulty is that there are too many morphs and too few occurrences, even in a large text. The distributional regularities that will be summarized in a construction list do not appear until equivalent morphs are classed together, since there are not enough occurrences of individual morphs to bring out the regularities. The judgment that two morphs have equivalent distributions, hence can be assigned to a single class, cannot well be based on their few occurrences in a text, since no two morphs have similar distributions in terms of linear context and individual morphs. The judgment would be easier if it could be made in terms of classes of morphs and constructional context, but at first neither constructions nor classes are known. The deadlock can only be broken by an iterative approach that starts with a crude classification and a tentative list of constructions, gradually refining the two together.

Garvin [24] begins with a rough classification of morphs based on gross features of their distributions. He requires that two kinds of boundaries be marked in the text; roughly speaking, these are word and sentence boundaries or morph and utterance boundaries. The small-unit boundaries determine the items to be classified, and the large-unit boundaries furnish the distributional criteria for an initial classification. Each occurrence of a morph is characterized as adjacent to and preceding a major boundary, hence final; as adjacent to and following a major boundary, hence initial; or as not adjacent to a boundary, hence medial. Considering all occurrences of a morph simultaneously, it can be assigned to one of seven categories accordingly as it occurs in all three positions (class IMF), only two (classes IM, IF, and MF), or one (classes I, M, F). Since counts of occurrence in each position can be made, subtler classification is possible. But now a category symbol can be assigned to each morph occurrence in the text and a search for constructions started. Garvin has suggested some techniques for the search and is continuing his investigation of this problem.
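Garvin's initial classification is easy to mechanize. The sketch below assumes the text is already divided into utterances (major boundaries) and morphs (minor boundaries); the tokens are placeholders for the example.

```python
# Illustrative sketch: Garvin's seven-way positional classification.
# 'utterances' is a list of morph lists; boundaries are given in advance.

def positional_class(morph, utterances):
    positions = set()
    for utterance in utterances:
        for i, m in enumerate(utterance):
            if m != morph:
                continue
            if i == 0:
                positions.add("I")   # follows a major boundary: initial
            if i == len(utterance) - 1:
                positions.add("F")   # precedes a major boundary: final
            if 0 < i < len(utterance) - 1:
                positions.add("M")   # not adjacent to a boundary: medial
    return "".join(p for p in "IMF" if p in positions)

utterances = [["the", "dog", "run", "s"], ["dog", "s", "run"]]
for morph in ["the", "dog", "run", "s"]:
    print(morph, positional_class(morph, utterances))
```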
Lamb's procedure [25] is to form tentative constructions first, deriving tentative classes from them. He argues that a syntactic relation is a limitation on the variety of morph sequences that occur; hence local restrictions reveal (or may reveal) relations. For each morph in a text, he determines the variation in its neighbors, taking those to the right and those to the left separately. The morph with least variation is temporarily assumed to form a construction with its neighbor. Thus, if morph M_i is almost always followed by some morph M_j, every occurrence of M_i is assumed to be in construction with the following occurrence, whether M_j or some other morph. (If M_i had regularly been preceded by some M_j, the construction would consist of M_i and its left neighbor.) Not just the single morph with least variation among its neighbors, but all the morphs with variation below a threshold are treated in this way. Each such tentative construction is given a name, and the calculation of variation coefficients is repeated, with different results because the constructions now appear in the text instead of their constituents.

Classes are not formed until second-order constructions appear, in which one of the constituents is itself a construction. Then all the morphs that occur as partners of M_i in the first-order construction, when the construction is in turn the partner of M_j, are classed together. The rationale is that these morphs have the same distribution to a second degree of approximation, and that cases of third-degree differences in morph distribution are rare. The morphs that are classed together take the same partner and with it form constructions that take the same partner; the partners of the second-order construction could vary with the morph contained in the first-order construction, but experience says that such variation is unlikely in natural languages.

Since the criterion by which constructions are formed is approximate, the procedure must allow for dissolution of constructions. Lamb recalculates variation coefficients whenever a construction or class is formed. Every occurrence in text of a tentative construction is replaced with an arbitrary symbol standing for the construction, and every occurrence of a morph in a tentative class is replaced with an arbitrary symbol standing for the class. Using this text, the variation coefficients are recalculated for individual morphs, for constructions, and so on, and it may happen that the constructions originally established, when variation coefficients had to be calculated on the basis of individual morphs, will now be replaced by others. Lamb's example is that a preposition, whose following neighbors include articles, nouns, and adjectives (in English), may at first be put in construction with that neighbor so that, e.g. (in + the) is marked as a construction. Later, when a class of nouns begins to develop, the coefficient of variation among following neighbors of 'the' will decrease until (the + noun) is formed and replaces 'the' as partner of 'in'.

The units that have been called morphs in the description of Lamb's procedure might be the product of Harris's procedure, they might be the units occurring between blanks in printed text, or they might have been obtained in some other way. If they are forms, as found between blanks, they can be segmented into stems and endings by some procedure like Harris's and submitted to a morphological-agreement analysis by crosstabulation of the kind discussed in Section 3. If they are tentative morphs, from Harris's procedure, they need to be checked. One step is to look for constructions that do not involve classes when Lamb's procedure ceases to be productive. Those constructions can be taken to reassemble morphs that the Harris procedure erroneously dissected. Again, any class of tentative morphs can be inspected for phonic or graphic similarity. If Lamb's procedure gives a class in which every morph ends with a particular letter or group of letters, that ending is possibly a morph; if the same ending occurs in several distributional classes, it should certainly be recognized. Thus syntactic criteria can be applied to readjust morphological findings, and the 80 to 90 per cent accuracy of Harris's method is not a final result by any means.
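One round of Lamb's construction-forming step can be sketched as follows; the variation measure used (distinct right neighbors divided by occurrences) and the threshold are assumptions chosen for the illustration, standing in for whatever coefficient is preferred.

```python
# Illustrative sketch: one iteration of Lamb-style construction forming.
# 'text' is a list of tokens (morphs or, after grouping, construction names).
from collections import defaultdict

def right_variation(text):
    """For each token: distinct right neighbors / occurrences with a neighbor."""
    neighbors, occurrences = defaultdict(set), defaultdict(int)
    for a, b in zip(text, text[1:]):
        neighbors[a].add(b)
        occurrences[a] += 1
    return {t: len(neighbors[t]) / occurrences[t] for t in neighbors}

def form_constructions(text, threshold=0.5):
    """Join each low-variation token with its right neighbor."""
    low = {t for t, v in right_variation(text).items() if v <= threshold}
    out, i = [], 0
    while i < len(text):
        if text[i] in low and i + 1 < len(text):
            out.append("(" + text[i] + "+" + text[i + 1] + ")")
            i += 2
        else:
            out.append(text[i])
            i += 1
    return out

text = ["in", "the", "house", "in", "the", "garden", "by", "the", "sea"]
print(form_constructions(text))
# ['(in+the)', 'house', '(in+the)', 'garden', 'by', 'the', 'sea']
```

Repeating the calculation on the rewritten text, with construction names in place of their constituents, gives the iteration Lamb describes, including the later dissolution of (in + the) in favor of (the + noun).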
Lamb is not concerned with dependency theory, but if his procedure is accepted up to this point it can be extended to the determination of dependency connections. Consider a construction, say X = Y_i Z_j. In each occurrence of the construction, Y_i and Z_j are either morphs or constructions. The identity of the construction over all of its occurrences is established by two facts: the construction has a homogeneous distribution, and all the Y_i (or, instead, all the Z_j) belong to a single class. If all the Y_i belong to one class, the Z_j may belong to one class or to several classes; we can consider the case in which they belong to a single class, since in the other case the following procedure would merely be repeated for every class of Z_j. For the present, suppose that each Z_j is either a morph or a construction whose members are morphs (it could also be a construction whose members are constructions; that case is treated below). Thus, either Z_j = M_j, a morph, or Z_j = M_j1 M_j2, a construction of two morphs. If Z_j is a morph, Y_i and Z_j are connected by a dependency link. If Z_j = M_j1 M_j2, then Y_i is linked by dependency to either M_j1 or M_j2. Form two sets; the first is composed of all the M_j such that Z_j = M_j for some j, together with all the M_j1 such that Z_j = M_j1 M_j2 for some j, and the second set contains the M_j's and M_j2's. Calculate a coefficient of variation for each set. If the coefficient of the first set is smaller, the first member of the construction M_j1 M_j2 is linked to Y_i by dependency, and otherwise the second member.

Now suppose that Z_j can take one of three forms: M_j, M_j1 M_j2, or M_j1 (M_j2 M_j3). That is, one partner in the construction being analysed is either a morph, or a construction whose members are morphs, or a construction whose members are a morph and a construction whose members are morphs. The dependency connection is between Y_i and M_j if Z_j has the first form, between Y_i and either M_j1 or M_j2 if Z_j has the second form, and between Y_i and one of M_j1, M_j2, M_j3 if Z_j has the third form. There are three sets to be assembled:

                           If Z_j = M_j    If Z_j = M_j1 M_j2    If Z_j = M_j1 (M_j2 M_j3)
The first set contains         M_j               M_j1                     M_j1
The second set contains        M_j               M_j2                     M_j2
The third set contains         M_j               M_j2                     M_j3

The omission of all other possible sets is justified by the assumption that in a construction of a given type the two positions are distinct. Again, variation coefficients are calculated and the dependency links are established in the same manner as before. Note that it is not always possible to determine the dependency links within a structure such as M_j1 (M_j2 M_j3) before comparing it with its partners. It may, in fact, be necessary to consider still more complex cases, but the general rules are implicit in the example just given. If projectivity is postulated as a universal feature of natural languages, it simplifies the search for dependency links, but its use would require a long discussion that would be out of place here.
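The two-form case can be sketched directly; the variation measure (distinct members divided by set size) is again an assumption standing in for whatever coefficient is preferred.

```python
# Illustrative sketch: deciding which member of a two-morph partner is
# linked to Y by dependency. Each occurrence of the construction gives the
# partner Z as a tuple: ('m',) for a single morph, ('m1', 'm2') for a pair.

def variation(members):
    return len(set(members)) / len(members)

def link_position(occurrences):
    """Return 1 if Y links to the first member of Z, 2 if to the second."""
    first = [z[0] for z in occurrences]    # the M_j's together with the M_j1's
    second = [z[-1] for z in occurrences]  # the M_j's together with the M_j2's
    return 1 if variation(first) < variation(second) else 2

# Here the first position is nearly constant, so it carries the link.
occurrences = [("the", "dog"), ("the", "cat"), ("the", "house"), ("a", "dog")]
print(link_position(occurrences))  # 1
```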
When a long span of text (e.g. a sentence) is connected by dependency links, it only remains to choose one occurrence in the span as origin, and all links are automatically directed toward that occurrence. For this purpose, it is important to introduce the restriction of projectivity. In a projective language, every origin occurrence lies on the unique path between the first and last occurrences in a connected span. That is to say, the first occurrence in a sentence is connected to one or more following occurrences, among which one or more are connected to following occurrences, and so on, until some sequence of connections leads to the last occurrence in the sentence. This sequence of connections forms a path through certain occurrences, and one of them must be independent; projectivity would be violated if any occurrence not on the path were chosen as the independent, or origin, occurrence in the sentence. Moreover, every occurrence not on the path depends, directly or indirectly, on some occurrence on the path; hence all connections outside the path are directed, their governors and dependents differentiated, as soon as the universal of projectivity is adopted.

Let us assume that every morph has been assigned to some class, and consider all pairs of occurrences such that the first belongs to some class, say X, the second to some class Y (the two classes may be the same or different), and the two occurrences are connected. If the direction of dependency is sometimes from X to Y, sometimes from Y to X, the description of the language is more complex than if the dependency always goes in one direction. A partial test of the projectivity postulate is to determine, for each such pair, whether the directions induced by projectivity are consistent or variable. Variability for many pairs would make the postulate doubtful for the language. If, on the other hand, consistency is found, the determination of an origin occurrence for each sentence can also be based on a consistency argument. Define a dependency type by the classes of two connected morphs and their order, and assign to each dependency type found in the text a direction according to the findings just described. For some types, i.e. those that occur only on the initial-final paths of text sentences, direction will be undetermined. In text, mark each connection on an initial-final path with the direction pertaining to its type. In each connected span, there are three possibilities. (i) Every connection is marked, and a unique origin occurrence is determined. Every connection is directed toward it. (ii) Not all connections are marked, but the marks are consistent; those near the beginning of the span point toward the end, those near the end point toward the beginning. If any connections are unmarked before the last right-pointing connection or after the first left-pointing connection, they can be marked and their types assigned the indicated direction. The origin occurrence lies in an unmarked zone and remains to be chosen (see below). (iii) The marked connections are inconsistent; they do not point toward a single occurrence or cluster of occurrences. In such spans, one or more occurrences must be located according to a minimization of inconsistency. If the result is unique, the origin is determined; otherwise, the origin of the span remains to be chosen by the procedure below.

The procedure above can be iterated, since it sometimes determines the direction of a new dependency type. When it stabilizes, there may remain spans (sentences) with indeterminate origins.
In fact, there may be many such spans, and the number of possible origin occurrences in each may be large. Note that no dependency type within the indeterminacy region of any span occurs anywhere else. Hence all such dependency types could be given the same direction without inconsistency. If that plan is not satisfactory, all dependency types with undetermined direction can be partially ordered by sequence of occurrence, and those up to any arbitrary point in the partial order made right-pointing, those beyond it left-pointing.

Another criterion can be introduced here, or even earlier if it is regarded as linguistically more important than the kind of consistency used up to this point. Some classes must occur at sentence origins; it makes linguistic sense to minimize the number of different classes there. If some origins are determined by projectivity and consistency, they determine certain origin classes, and members of those classes can be sought in each sentence with indeterminate origin. If there is one, in a sentence, its origin is determined. If there are two, the choice can wait on consistency (every time an origin is chosen, new dependency types are given directions). The sentences containing no possible origin of a known origin class are collected and choices made simultaneously for all of them in such a way as to minimize the number of new origin classes; this calculation is feasible if the number of choices to be made is not too large.

Thus if Lamb's procedure, or Garvin's, can give a phrase structure to each span of text, it is possible to extend the analysis to a dependency structure. It remains to be seen whether the procedures of Lamb and Garvin will be satisfactory; almost without doubt, they will need elaboration. Lamb's has been applied, in tentative fashion, to a small amount of English text with gratifying results; such a trial, as Lamb remarked in reporting it, is far from a demonstration of workability. The dependency procedure added here has not been tried at all.

According to the viewpoint developed earlier, the determination of morphemic structure, relations among morphs, is not the end of linguistic research. First the sound or letter sequences were segmented into morphs, then the morph sequences analysed for syntactic relations so that a dependency diagram could be given for each sentence. (In cases of ambiguity, there are alternative diagrams, of course.) The identification of morphs with similar spelling and identical or closely related distributions as alternative representations of the same morpheme remains to be done, but that is a side problem that can at best reduce the difficulty of the following main step, which is analogous to the segmentation of the original letter sequence: the dependency diagrams have to be segmented into semes (Lamb's sense of the term [26], approximately). These semes are more nearly the units wanted in translation than the morphs or morphemes that comprise them; they have syntactic relations of their own; and a sentence must satisfy simultaneously conditions best stated in terms of (a) letter or sound sequences, (b) dependency diagrams over morphs or morphemes, and (c) diagrams over semes or sememes.
The research procedures that can now be envisaged are very like those already discussed in this section, and this formulation of the problem, 'deep grammar' as Hockett [27] calls it, or semantic compatibility in the traditional terms, is so recent that little can be said beyond a plea for attention to it.

The procedures described in Section 3 are not 'discovery' procedures; they are merely aids to the linguist who uses them along with all his knowledge of linguistic theory, semantics, and the rest. He is aided, if he is fortunate, by insight or intuition, or perhaps by fortunate guesses. His result may be a grammar in the formal sense, or merely a collection of observations. The procedures of Section 4 are 'discovery' procedures in the linguistic sense, but they are not infallible. They can be applied, to the extent that they have been specified, without any use of semantics, intuition, or judgment. Their application, however, will not always lead to a complete, consistent grammar capable of assigning at least one description to every sentence in the text on which it is based and to some other sentences not in that text.

Discovery procedures that merely cover a text, on the contrary, can be written without difficulty. For example, given a text, cut it at random into 'morph occurrences', insert 'sentence boundaries', and assign every morph to one or both of two classes: class X does not occur just before a sentence boundary, class Y always does. Adopt two dependency rules: an occurrence of a class X morph governs a following occurrence of a class X morph or of a class Y morph. An occurrence of a class Y morph therefore governs nothing. This grammar covers the text and can generate an endless number of additional sentences. It will account for new texts chosen at random, except for the necessity of adding some new 'morphs' to the dictionary. It is unambiguous, in that it assigns exactly one structure to every sentence. Unfortunately, this grammar will accept a great many intuitively undesirable sentences, help but little in machine translation or information retrieval, and recognize too few morphs in new text. This morphemic grammar, moreover, will show no relation to any higher- or lower-stratum grammar. The two classes, X and Y, are not morphologically differentiated, even approximately, and the discovery of semes would be fortuitous. Thus its internal simplicity is matched by the enormous complexity of its external relations.

Bar-Hillel, during the Advanced Study Institute at which these lectures were given, stated several theorems that have not yet been published. Their general tone, when applied to problems of empirical linguistics, is to denigrate 'discovery' procedures. Given an infinite set of sentences, it is impossible to determine their grammar, even if it is known in advance that they have a context-free phrase-structure grammar; the theorems quoted are even stronger and broader, but their essential feature is the impossibility of absolute inference from a finite analysis to the infinite set of sentences. Given a finite text, as we have seen, finite grammars are easy to obtain. The issue is extrapolation.

One could suspect, even before the enunciation of these theorems, that there would be difficulties. Supposing the existence of an infinite set of sentences for theoretical purposes and deciding whether a given sequence belongs to the infinite class of 'English sentences' for empirical purposes are two distinctly different problems.
The only ways to decide, empirically, about a given sequence are to find it in text and to ask an informant. Text usually gives no answer; the number of possible sequences over a given alphabet or vocabulary is much greater than the number of sentences even in an immense text, and the linguist wants to extrapolate, not to describe the given finite text. Asking an informant gives an uncertain answer, one that varies from informant to informant and even from time to time with a single informant; the answer depends on the kind of question asked as well as on the sentence given, and there is not unanimity about the question. The answer to these difficulties has always been to impose more and more criteria on the grammar derived from a finite text, to check it against new text, to check it, overall, not in minute detail, against intuition, and to include criteria of interstratal consistency: syntax must accord with morphology and semantics or sememics.

The new theorems confirm this approach by denying the possibility of any other. The empirically difficult concept of an original infinite set of sentences for which a grammar must be found is now seen to be theoretically worthless, since the correspondence of grammar and 'language' (infinite set of sentences) would be unverifiable. Intuition and insight could yield a perfect grammar, but its perfection would be untestable. Systematic procedures may never yield a perfect grammar, but their connection with finite text samples, via criteria of analysis, can be explicated, as the connection of an intuitively derived grammar cannot be. The basic concepts of linguistics, replacing the empirically and theoretically difficult concept of an a priori infinite set of sentences, will therefore be the finite collection of textually validated sentences and the set of sentences generated by a grammar. (There are theoretical difficulties about the latter set, but they do not influence this discussion.) The connection between these two sets is made in two steps: criteria for derivation of a grammar from a finite text, and procedures for the generation of a set of sentences under the control of a grammar. The grammar is rigidly connected with the finite sample. Its connections with the rest of the 'natural language' for which it is proposed as a summary description necessarily remain vague, but the linguist can test its adequacy for the recognition of sentences in new text by mechanical procedures, and he can test, by recourse to informants, the acceptability of its analyses of given sentences and the acceptability of sentences that it generates. The grammar has become the instrument of extrapolation, as Chomsky once hinted [12], and the criteria of its derivation determine the extrapolation made.

† Presented at the NATO Advanced Study Institute on Automatic Translation of Languages, Venice, 15-31 July 1962.
† Unpublished seminar paper by C. Chomsky, cited in [23].

Introduction

Even at the 1962 Institute where these lectures were presented, it was hard to find much interest in linguistic research of the empirical sort. Two areas were far more attractive: the design and refinement of translation algorithms, and the establishment of mathematical theory for linguistics. Yet each algorithm either contains or presupposes a body of empirical fact which, in fact, does not presently exist, and theory is pertinent to linguistics and its applications only insofar as it guides the collection and organization of data.
During the Institute, it occasionally seemed that the theoreticians were refusing this aid to the empiricists; some of the theorems stated, and some of the interpretations given, suggested that it is theoretically impossible for linguistic theory to guide the collection of data. The theorems are undoubtedly true, but the interpretations are indubitably false.

These lectures, therefore, have to maintain a double argument: that the adoption of systematic procedures for collection and organization of linguistic data is (i) necessary and (ii) possible. Necessary, in the sense that practical applications (such as automatic translation) cannot be developed to the point of usefulness without empirical studies that are unmanageable unless they follow systematic procedures. Possible, in the sense that undecidability theorems do not apply to the situations that arise in practise. Beyond this argument, these lectures are concerned with techniques, with the steps to be carried out in a real program of data collection. Convenience, economy, and avoidance or control of errors are, as they must be in large-scale operations, central questions. Finally, it will be necessary to emphasize, even here, the need for additional theory. The aspects of language that have been studied most widely and formalized most adequately heretofore are not the only aspects of language relevant to automatic translation, and systems of automatic translation that rely entirely on present-day theory have not proved satisfactory.

The written version of these lectures was prepared after the Institute, and the author took advantage, where possible, of what was said there by students and other lecturers. It will be obvious that he is especially indebted to Professor Bar-Hillel, whose work stimulated much more than the construction of counter-arguments on specific points. Insofar as the lectures are based on earlier publications of the same author, they draw most heavily from [1] and [2].
null
null
null
null
{ "paperhash": [ "miller|decision_units_in_the_perception_of_speech", "harper|studies_in_machine_translation-8:_manual_for_postediting_russian_text.", "harris|from_phoneme_to_morpheme", "borko|the_construction_of_an_empirically_based_mathematically_derived_classification_system" ], "title": [ "Decision units in the perception of speech", "Studies in machine translation-8: manual for postediting Russian text.", "From Phoneme to Morpheme", "The construction of an empirically based mathematically derived classification system" ], "abstract": [ "It has been shown experimentally that speech intelligibility is a function of grammatical content. This fact implies that automatic speech recognizers may well need to include information about linguistic structure.", "Abstract : The present study is a practical guide to editors who refine partially machine-translated text as a basis for linguistic analysis. The posteditors' tasks are: to code preferred English equivalents, to code English structural symbols, to resolve grammatic properties, and to code syntactic connections (dependencies). A general introduction to the field of machine translation is contained in RM-2060.", "The following investigation1 presents a constructional procedure segmenting an utterance in a way which correlates well with word and morpheme boundaries. The procedure requires a large set of utterances, elicited in a certain manner from an informant (or found in a very large corpus); and it requires that all the utterances be written in the same phonemic representation, determined without reference to morphemes. It then investigates a particular distributional relation among the phonemes in the utterances thus collected; and on the basis of this relation among the phonemes, it indicates particular points of segmentation within one utterance at a time. For example, in the utterance /hiyzkwikǝr/ He’s quicker it will indicate segmentation at the points marked by dots: /hiy. z. kwik. Ər/; and it will do so purely by comparing this phonemic sequence with the phonemic sequences of other utterances.", "This study describes a method for developing an empirically based, computer derived classification system. 618 psychological abstracts were coded in machine language for computer processing. The total text consisted of approximately 50,000 words of which nearly 6,800 were unique words. The computer program arranged these words in order of frequency of occurrence. From the list of words which occurred 20 or more times, excluding syntactical terms, such as, and, but, of, etc., the investigator selected 90 words for use as index terms. These were arranged in a data matrix with the terms on the horizontal and the document number on the vertical axis. The cells contained the number of times the term was used in the document. Based on these data, a correlation matrix, 90x90 in size, was computed which showed the relationship of each term to every other term. The matrix was factor analyzed and the first 10 eigenvectors were selected as factors. These were rotated for meaning and interpreted as major categories in a classification system. These factors were compared with, and shown to be compatible but not identical to, the classification system used by the American Psychological Association. The results demonstrate the feasibility of an empirically derived classification system and establish the value of factor analysis as a technique in language data processing." ], "authors": [ { "name": [ "G. A. 
Miller" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "K. Harper", "D. G. Hays", "B. Scott" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Z. Harris" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. Borko" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "32915653", "60517049", "203462101", "6483337" ], "intents": [ [], [], [], [] ], "isInfluential": [ false, false, false, false ] }
null
751
0
null
null
null
null
null
null
null
null
2378d5f0faa16c3cecc4e113e7328f073d0f3781
39628235
null
Un Système morphologique, compromis entre les Facilités de la Compilation, les Recherches Syntaxiques et l'Adaptation à de futurs Programmes de T.A.
An examination of certain general problems relating to the organization of a dictionary for machine translation. An elementary formal definition. Searching for a form in a dictionary. The structure of the linguistic information recorded in the dictionary: morphological and semantic structures. Principles of a system of morphological analysis for an inflectional language. Applications to the Russian language.
{ "name": [ "Dupuis, L." ], "affiliation": [ null ] }
null
null
Automatic Translation of Languages NATO Summer School
1962-07-01
4
0
null
Given the importance and the difficulty of the syntactic and semantic problems in machine translation (MT), the title of this talk may seem trivial. It has been well known, since the first research done around 1954, that the automatic processing of the morphology of a natural language presents no insurmountable difficulties. It is enough to devote to it some knowledge of the language in question, and some ingenuity in the realization of the programs peculiar to this kind of problem, to obtain concrete and relatively satisfactory results. In fact, our intention is to place the subject of this talk within the more general framework of the studies aimed at the realization of a dictionary for MT use, and to consider, in this connection, some of the problems involved in compiling the linguistic information of interest to MT. The research done in recent years on the construction of grammars shows that the quality of the results obtained by a syntactic analysis program using a grammar of a given type depends closely on the quantity of linguistic information incorporated in that grammar. A weak grammar combined with a structure-recognition program yields results that are not without interest for linguistic research, but it is very awkward to enrich and improve such a grammar afterwards (cf. [8]). It seems better to state very early the semantic constraints capable of eliminating the many manifestly false structures given by a weak grammar, while recognizing that the statement of these constraints is one of the most difficult problems of MT. Because of the arbitrariness, and the merely relative motivation, of the linguistic sign, this problem probably admits no general solution. These results put the place of the dictionary in MT in better perspective. Technical performance (compactness of the recorded information, speed of consultation) is not an essential factor, at least in the immediate future; this does not mean that these are easy problems, or that any improvement in this direction is unwelcome. On the other hand, ease of compilation of the linguistic information, convenience of consulting that information, and the identification and use of structures over that information are factors capable of remedying the theoretical difficulty of the semantic problems (in particular the construction of semantic categories) and the fact noted above about structure-recognition programs. The dictionary is not a mere repertory consulted for reasons of convenience; it is an integral part of the translation programs, and it must facilitate as much as possible the supply of the many pieces of linguistic information needed for the quality of translations, automatic or not. A dictionary is a finite sequence of pairs (F₁S₁) … (FᵢSᵢ) … (FₙSₙ). Each Fᵢ is a graphic form, a linear assembly of elementary symbols belonging to a given alphabet; this assembly of symbols is constructed according to determinate formal rules. Each Sᵢ is a set of information associated with the form Fᵢ. To consult the dictionary for a form Fᵢ is to search for the form Fᵢ in the sequence F₁S₁, …, FₙSₙ and to read the set of information associated with the form Fᵢ. For a natural language, the pair (F, S) is a word (or linguistic form): F is the signifier of the word (the elementary symbols are the letters of the alphabet of the language); S is its signified (represented in practice by the set of morphological, syntactic, and semantic information associated with the word, together with a definition in the case of a monolingual dictionary, or a list of target-language equivalents in the case of a bilingual dictionary). To build an automatic dictionary is to record the sequence F₁S₁, …, FₙSₙ in the memory of an electronic computer and to establish a machine program (the consultation program) that carries out the operation of searching for a form Fᵢ and the operation of reading the information Sᵢ. An article of such a dictionary is the pair (F, S); the graphic form is its heading, and the set of information S is its content. The consultation time T is the mean time needed to find an arbitrary form Fᵢ of a written text in the dictionary and to read the corresponding set of information Sᵢ (which may be empty if Fᵢ is not in the dictionary). Performance figures for consultation speed are significant only relative to the volume V of the linguistic information recorded in the dictionary and represented by the sequence F₁S₁ … FₙSₙ. In principle, the form F is a sequence of letters of the alphabet of the language in question, bounded by two spaces or blanks, and S is the set of all the linguistic information that can be stated about this form F. It is well known that this definition calls for certain precautions: (1) in enumerating the letters of the alphabet, account must be taken of certain diacritical signs, for example, in French, the accents, the cedilla, and the diaeresis (thus the French alphabet has 39 letters, not 26); certain punctuation marks may have to be treated as elements of the form F (for example, the hyphen and the apostrophe); (2) homographs are distinct words that have a common graphic form F but different signifieds S₁, S₂, …; the requirements of automation generally impose stating these various signifieds together under the form F, but it is important to separate the corresponding linguistic information clearly, even when the signifieds are very close; (3) the unit of meaning that corresponds naturally to the formal symbol S above is not necessarily associated with a graphic form F bounded by two blanks; it may be associated either with a smaller form (compound words formed of two or more units of meaning, derived words obtained by adding prefixes or suffixes that carry a certain meaning, inflected words formed by adding an ending) or with a larger one. The latter is the case not only for idioms but also for many technical word groups which, without being true idioms, constitute genuine units of meaning within which there is no reason to distinguish component words a priori, except in a well-delimited lexicon and for reasons of simplification.
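A minimal sketch of this formal definition follows. The entries are invented examples, and a Python hash table stands in for whatever search organization the consultation program actually uses.

# A dictionary as a finite sequence of pairs (F_i, S_i), with a
# consultation program that searches a form and reads the associated
# information S (None when F is not in the dictionary).
entries = [
    ("rationnel", {"category": "adjective", "equivalents": ["rational"]}),
    ("racine",    {"category": "noun", "equivalents": ["root"]}),
]
dictionary = dict(entries)

def consult(form):
    """Search for the graphic form F; return S, or None when F is absent."""
    return dictionary.get(form)

print(consult("racine"))   # {'category': 'noun', 'equivalents': ['root']}
print(consult("base"))     # None: the information read is empty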
Certain doublets or triplets do not exist in the language … a letter-by-letter comparison of the entire form is not economical. The procedure is therefore used only for the first 4 or 5 letters, which constitutes a preliminary location of the zone in which the sought form lies; the definitive search for the form is made by comparison (cf. [3]) … If such linguistic singularities are not very frequent in a given morphological category, it suffices to add to the common information S the linguistic information peculiar to one form, at the price of some complication of the content of the article and of its consultation. Otherwise, it is preferable to give up, provisionally, the use of linguistic paradigms for the morphological category in question (for example, in the present CETAP dictionary, as many articles have been set up for a verb as it has participle bases). (2) From the point of … This principle gives satisfactory results for the Russian language. By introducing suitable artificial endings, one may hope to reduce the number of cases of false splits to a small value, less than 10% of the number of distinct forms of the language. This choice of artificial endings is made conveniently with the help of the system of paradigms described above.
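The base-plus-ending principle just described can be sketched as follows. The Cyrillic base, the list of admissible endings, and the paradigm membership test are invented placeholders, not the CETAP tables.

# A text form is split into a candidate base and an admissible ending;
# a split counts as valid (not a 'false split') only when the base is a
# dictionary entry whose paradigm accepts that ending.
ENDINGS = ["ами", "ой", "а", "у", "и", ""]   # admissible endings, longest first

bases = {
    "книг": {"endings": {"", "а", "у", "и", "ами"}, "gloss": "book"},
}

def analyses(form):
    """Return every (base, ending, gloss) split validated by the dictionary."""
    result = []
    for ending in ENDINGS:
        if not form.endswith(ending):
            continue
        base = form[: len(form) - len(ending)]
        entry = bases.get(base)
        if entry and ending in entry["endings"]:
            result.append((base, ending, entry["gloss"]))
    return result

print(analyses("книгами"))    # [('книг', 'ами', 'book')]
print(analyses("работами"))   # [] : no listed base, so the split is rejected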
A sequence of k words of the language, U = (F₁S₁)(F₂S₂) … (FₖSₖ), constitutes a unit of meaning if the overall meaning S that must be associated with the formal sequence F = F₁F₂ … Fₖ does not derive from the usual meanings S₁, …, Sₖ associated respectively with each of the forms F₁, … and from the application of very general grammatical rules, but results instead from a particular convention adopted by the users of the sequence F. Usually this notion is reserved for the very special category of idioms. In fact it is much more general. For example, the meanings of the two formal sequences 'nombre rationnel' (rational number) and 'extraire une racine' (to extract a root) do not derive from the usual meanings of the isolated words 'nombre', 'rationnel', 'extraire', 'racine' and from knowledge of their usual grammatical categories; they result, more precisely, from the definitions given by mathematicians. Consequently, such a unit of meaning must constitute a dictionary article whose heading is the sequence of forms F = F₁ … Fₖ and whose content is the linguistic information to be associated with the overall meaning S. In an ordinary dictionary one does indeed find such articles, but in general each of them is incorporated into an article for an isolated form, for example F₁. If, for reasons of convenience, one restricts oneself to simple dictionary articles of the type (FᵢSᵢ), the signifieds of these isolated words are the meanings S₁, …, Sₖ that must be associated respectively with the forms F₁, …, Fₖ, account being taken of the overall meaning. For example, if a correspondence can be established between these forms and the forms F'₁, …, F'ₖ of the translation of the unit of meaning, the signifieds S₁, …, Sₖ will be represented by the respective target-language equivalents F'₁, …, F'ₖ of the source-language forms F₁, …, Fₖ. In general, a given form F₁ can take part in several units of meaning and hence admit several signifieds S₁, S₂, S₃, … (polysemy). Because of the arbitrariness of the linguistic sign in natural languages, and of the small number of forms actually used in a language relative to the number of units of meaning, the existence of such polysemies is a very general fact. Contrary to an often accepted hypothesis, an isolated word is a priori polysemous; if it is not, this is a special case to be taken advantage of, but it cannot be set up as a general rule. The resolution of such polysemies can be contemplated only insofar as relations among the signifieds of the language are considered, that is, if semantic structures are defined over the set of words of the language. Consider a set of n units of meaning U₁, …, Uₙ, each unit Uᵢ being characterized by a formal sequence Fᵢ¹ … Fᵢᵏ, an overall meaning Sᵢ, and a grammatical structure Gᵢ (in the usual sense). An elementary semantic structure is defined on this set if, account being taken of the various overall meanings S₁, …, Sₙ, each of the isolated forms Fᵢʲ can be assigned a unique signified Sᵢʲ, valid in all the units of meaning in which Fᵢʲ appears. Such a structure defines a semantic category C₁ that can be associated with each of the isolated words of the set considered. A polysemous word is a word capable of belonging to several distinct semantic categories C₁, C₂, …. Such a semantic category, defined by relations among the signifieds of the isolated words of a set of units of meaning, is close to the one obtained by considering the signifieds of the words of a particular technical domain (a microglossary). It can be narrower, since certain words of a technical domain may be polysemous: for example, the word 'entier' does not have the same meaning in 'nombre entier' (whole number) and 'fonction entière' (entire function). It can be wider, since a technical word may keep the same meaning outside its domain of definition: the word 'nombre' generally keeps the same meaning outside mathematical texts. The semantic paradigm of a word M, for a given grammatical relation G, is the set of the words belonging to the same semantic category C as the word M and capable of being linked to the word M by the grammatical relation G. This notion of semantic paradigm makes precise the semantic information that can be found in an ordinary dictionary. For example, in the expression 'donner quelque chose à quelqu'un' (to give something to someone), 'quelque chose' denotes the set of objects that can be linked to the word 'donner' (with the meaning: to make a gift of) by the grammatical relation verb-direct object; this set is therefore the semantic paradigm of the word 'donner' for that grammatical relation. This notion makes it possible to classify units of meaning according to the length of the semantic paradigm of each of their component words, that is, according to the number of words composing that paradigm; the classification is given after the sketch below.
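A sketch of how a semantic paradigm could resolve a polysemy, using the paper's own example 'extraire une racine'. The sense inventory, the category names, and the relation label are invented encodings, not the CETAP system.

# Each sense of a word carries a semantic category; a governing word
# lists, per grammatical relation, the categories its dependents may
# take. Picking the sense whose category fits resolves the polysemy.
senses = {
    "racine": [{"category": "MATH",   "equivalent": "root (of a number)"},
               {"category": "BOTANY", "equivalent": "root (of a plant)"}],
}

# Semantic paradigm of 'extraire' for the verb-direct-object relation.
paradigms = {("extraire", "direct-object"): {"MATH"}}

def resolve(governor, relation, dependent):
    """Keep only the senses of the dependent compatible with the governor."""
    allowed = paradigms.get((governor, relation), set())
    return [s["equivalent"] for s in senses[dependent] if s["category"] in allowed]

print(resolve("extraire", "direct-object", "racine"))   # ['root (of a number)']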
(1) Idioms. The semantic paradigm of each of the component words is reduced to a single word (or to a very restricted number of words). This is the case for expressions such as 'parce que' (because) and 'avaler la pilule' (in the sense of resigning oneself to something painful), and for a few technical expressions. (2) Technical expressions. The semantic paradigm of each of the component words may contain a relatively large number of words, but these words belong to a well-determined technical domain and can be enumerated: for example, nombre entier, naturel, rationnel, irrationnel, complexe, imaginaire …; ensemble borné, ouvert, fermé, dense, dénombrable …. (3) Expressions of the ordinary language, that is, expressions common to all the individuals using the language in question. These are expressions such as 'construire une maison' (to build a house), 'écrire une lettre' (to write a letter), 'donner quelque chose à quelqu'un'. In such expressions the component words have a very general meaning, and one cannot claim to enumerate all the elements of their paradigm. In this case a paradigm can be constructed only by extrapolation, from a sequence of elements already recorded, by trying to bring out the general semantic properties of its elements (animate or inanimate being, action performed for a determinate purpose, …). This structure is intended to bring the declensions and conjugations studied in traditional grammars into the general framework of the structures defined above. It concerns elementary graphic forms: each of the forms considered is a sequence of letters of the alphabet bounded by two spaces or blanks. The signified associated with each of these forms is assumed to belong to a determinate grammatical category G (one of the traditional categories: noun, adjective, verb, …). If it is capable of belonging to several categories (external homography), it is treated as several distinct words. The morphological units considered here relate to the modalities of meaning: case, gender, number, …. A modality is treated as a grammatical variable Vⱼ capable of taking a finite number of values. The union of the values of the grammatical variables particular to a word will be written W = v¹v² …, one value for each of the variables V = V₁V₂V₃V₄ …. The forms F₁, …, Fₖ associated with such sequences of values of the grammatical variables are the various forms of a given linguistic word (in the usual sense of that term), that is, the forms obtained in the course of the declension or the conjugation of a word. To a first approximation, this set of forms does constitute a morphological unit in the sense defined above: (1) in principle, the linguistic information associated with these forms (other than the information associated with the W) is all identical; (2) in general, these forms admit a common element, which is the base (or stem) of the word considered. In certain cases the morphological unit may admit several bases (for example, in French, the word œil, yeux) … (1) A list of admissible endings A₁, A₂, … is given. These endings are chosen so as to optimize the mean consultation time, account being taken of false splits (see above); δ₁, δ₂, … are the machine codes of these endings. (2) For each morphological unit of a morphological category, the base or bases B are determined, account being taken of the endings of the preceding list, together with the corresponding linguistic paradigm Π = Δ₁W₁, Δ₂W₂, …. The identical endings of such a paradigm are grouped together; in the reduced paradigm thus determined, the Δ and the W are replaced by their machine codes. The paradigm obtained is the machine paradigm for the base B. The correspondence between the linguistic paradigm and the machine paradigm is established by means of a simple serial number (the paradigm number), easy to note down and to punch. The linguistic information common to the various forms of one morphological unit (in particular the grammatical category) is represented by the common code (C.C.). A morphological unit is thus represented by B, Π, (C.C.). The paradigms are inventoried systematically by examining whether all the possible combinations of the general criteria governing the usual declensions and conjugations correspond to paradigms capable of existing; for example, for the Russian noun: defectiveness in the singular or the plural, gender, hard or soft declension, animacy, etc. The criteria are too numerous, and sometimes insufficient, to be used systematically in the machine, but they are useful for drawing up a first list of the paradigms. The irregular paradigms are in general well inventoried by the grammars, and it suffices to add them to the paradigms determined by the general criteria. Note in particular that these irregular paradigms are handled in the machine like ordinary paradigms, and that it is always possible to add paradigms to the existing tables without modifying the program itself. It is sometimes necessary to consider defective paradigms (for example, in the case of a mobile vowel in a Russian noun). In general, the union of two defective paradigms is itself a complete paradigm. This makes it possible to reduce the number of paradigm numbers by assigning to a complete paradigm two paradigm numbers: those of the defective paradigms of which it is the union. The search for a paradigm number by the linguist responsible for coding the words of a dictionary is facilitated by the use of a determination tree that displays the various general linguistic criteria mentioned above. The preceding system has been used at the Paris section of CETAP for the realization of a dictionary of bases of the Russian language, actually running on a 650 computer with 355 disk storage. Russian words are classified into 5 morphological categories:
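A sketch of this B, Π, (C.C.) organization follows. Paradigm number 7, the ending codes, and all grammatical values are invented for illustration; they are not the CETAP tables.

# A base record carries a paradigm number and a common code; the machine
# paradigm maps ending codes to the values W of the grammatical variables.
PARADIGMS = {
    7: {"а":   {"case": "genitive",     "number": "singular"},
        "ами": {"case": "instrumental", "number": "plural"}},
}

bases = {"книг": {"paradigm": 7, "common": {"category": "noun", "gender": "feminine"}}}

def analyse(base, ending):
    """Join the common code of the base with the W values of the ending."""
    entry = bases[base]
    values = PARADIGMS[entry["paradigm"]].get(ending)
    if values is None:
        return None            # the ending does not belong to this paradigm
    return {**entry["common"], **values}

print(analyse("книг", "ами"))
# {'category': 'noun', 'gender': 'feminine', 'case': 'instrumental', 'number': 'plural'}

Because irregular paradigms are just additional table entries, new paradigms can be added without touching the program, exactly as the text remarks.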
null
null
null
null
null
null
null
null
{ "paperhash": [ "kelly|glossary_lookup_made_easy", "lamb|a_high-speed_large-capacity_dictionary_system" ], "title": [ "Glossary Lookup Made Easy", "A high-speed large-capacity dictionary system" ], "abstract": [ "Abstract : Most of the work on the dictionary problem for machine translation has consisted of attempts to reduce the amount of information involved, thus bringing the problem within the capabilities of presently available or soon-to-be-available computing equipment. This paper presents a technique for handling the problem with currently available computing equipment and without the complexities of information compression. In essence, the approach is to compile a glossary of forms from the current text and then to retrieve information about each from the dictionary as the information is needed in the translation process. (Author)", "This paper describes a method of adapting dictionaries for use by a computer in such a way that comprehensiveness of vocabulary coverage can be maximized while look-up time is minimized. Although the programming of the system has not yet been completed, it is estimated at the time of writing that it will allow for a dictionary of 20,000 entries or more, with a total look-up time of about 8 milliseconds (.008 seconds) per word, when used on an IBM 704 computer with 32,000 words of core storage. With a proper system of segmentation, a dictionary of 20,000 entries can handle several hundred thousand different words, thus providing ample coverage for a single fairly broad field of science. Although the system has been designed specifically for purposes of machine translation of Russian, it is applicable to other areas of linguistic data processing in which dictionaries are needed." ], "authors": [ { "name": [ "H. Kelly", "Theodore W. Ziehe" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Lamb", "W. Jacobsen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "61076748", "26800633" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
Problem: The paper addresses the general issues related to organizing a dictionary for automatic translation, focusing on the formal definition, search for forms in a dictionary, and the structure of linguistic information recorded in the dictionary, including morphological and semantic structures. Solution: The hypothesis of the paper is that by incorporating linguistic information efficiently into a dictionary for machine translation, particularly focusing on morphological analysis and semantic constraints, it is possible to enhance the quality of translation results and address the challenges posed by syntax and semantics in machine translation systems.
751
0
null
null
null
null
null
null
null
null
0fa11ffbdc32460d48c4101c2ecff8e7ce3a735b
207974610
null
Four Lectures on Algebraic Linguistics and Machine Translation
THE ROLE OF GRAMMATICAL MODELS IN MACHINE TRANSLATION. Linguistics, as every other empirical science, is a complex mixture of theory and observation. The precise nature of this mixture is still not too well understood, and in this respect the difference between linguistics and, say, physics is probably at most one of degree. This lack of methodological insight has often led to futile disputes between linguists and other scientists dealing with language, such as psychologists, logicians, or communication theoreticians, as well as among linguists themselves. Recently, however, considerable progress has been made in the understanding of the function of theory in linguistics, as a result of which theoretical linguistics has come into full-fledged existence. Interestingly enough, the present customary name for this new subdiscipline is rather mathematical linguistics. This is slightly unfortunate: though the adjective 'mathematical' is quite all right if 'mathematics' is understood in the sense of 'theory of formal systems', which is indeed one of its many legitimate senses, it is misleading inasmuch as it is still associated, at least among the non-specialists, including the bulk of the linguists themselves, with numbers and quantitative treatment. That subdiscipline of linguistics, however, which deals with numbers and statistics should better be called statistical linguistics and rather carefully be kept apart from mathematical linguistics qua theoretical linguistics. Should one prefer to regard 'mathematical linguistics' as a term for a genus of which statistical linguistics is a species, then the other species should perhaps be named algebraic linguistics. After this terminological aside which, I think, was not superfluous, let us briefly sketch the background and development of algebraic linguistics. In the hands of such authors as Harris [1] and Hockett [2] in the United States, Hjelmslev [3] and Uldall [4] in Europe, structural linguistics became more and more conscious of the chasm between theory and observation, and linguistic theory deliberately got an algebraic look. At the same time, Carnap [5] and the Polish logicians, especially Ajdukiewicz [6], developed the logical syntax of language which was, however, too much preoccupied with rules of deduction, and too little with rules of formation, to exert a great influence on current linguistics. Finally, Post [7] succeeded in formally assimilating rules of formation to rules of deduction, thereby paving the way for the application of the recently developed powerful theory of recursive functions, a branch of mathematical logic, to all ordinary languages viewed as combinatorial systems
{ "name": [ "Bar-Hillel, Yehoshua" ], "affiliation": [ null ] }
null
null
Automatic Translation of Languages NATO Summer School
1962-07-01
26
9
null
, while Curry [9] became more and more aware of the implications of combinatory logic for theoretical linguistics. It is, though, perhaps not too surprising that the ideas of Post and Curry should be no better known to professional linguists than those of Carnap and Ajdukiewicz. It seems that a major change in the peaceful but uninspiring co-existence of structural linguists and syntax-oriented logicians came along when the idea of mechanizing the determination of syntactic structure began to take hold of the imagination of various authors. Though this idea was originally but a natural outcome of the professional preoccupation of a handful of linguists and logicians, it made an almost sensational breakthrough in the early fifties when it became connected with, and a cornerstone of, automatic translation between natural languages. At one stroke, structural linguistics had become useful. Just as mathematical logic, regarded for years as the most abstract and abstruse scientific discipline, became overnight an essential tool for the designer and programmer of electronic digital computers, so structural linguistics, regarded for years as the most abstract and speculative branch of linguistics, is now considered by many a must for the designer of automatic translation routines. The impact of this development was at times revolutionary and dramatic. In Soviet Russia, for instance, structural linguistics had, before 1954, unfailingly been condemned as idealistic, bourgeois and formalistic. However, when the Russian government awakened from its dogmatic slumber to the tune of the Georgetown University demonstration of machine translation in January 1954, structural linguistics became within a few weeks a discipline of high prestige and priority. And just as mathematical logic has its special offspring to deal with digital computers, i.e. the theory of automata, so structural linguistics has its special offspring to deal with mechanical structure determination, i.e. algebraic linguistics, also called, when this application is particularly stressed, computational linguistics or mechano-linguistics. As a final surprise, it has recently turned out that these two disciplines, automata theory and algebraic linguistics, exhibit extremely close relationships which at times amount to practical identity. To complete this historical sketch: around 1954, Chomsky, influenced by, and in constant exchange of ideas with, Harris, started his investigations into a new typology of linguistic structures. In a series of publications, of which the booklet Syntactic Structures [10] is the best known, but also the least technical, he defined and constantly refined a complex hierarchy of such structures, meant to serve as models for natural languages with varying degrees of adequacy. Though models for the treatment of linguistic structures were also developed by many other authors, Chomsky's publications exhibited a degree of rigor and testability which was unheard of before that in the linguistic literature and therefore quickly became for many a standard of comparison for other contributions. I shall now turn to a presentation of the work of the Jerusalem group in linguistic model theory before I continue with the description and evaluation of some other contributions to this field. In 1937, while working on a master's thesis on the logical antinomies, I came across Ajdukiewicz's work [6].
Fourteen years later, having become acquainted in the meantime with structural linguistics, and especially with the work of Harris [1], and instigated by my work at that time on machine translation, I realized the importance of Ajdukiewicz's approach for the mechanization of the determination of syntactic structure, and published an adaptation of Ajdukiewicz's ideas [11]. The basic heuristic concept behind the type of grammar proposed in this paper, and later further developed by Lambek [12], [13], [14], myself [15] and others, is the following: the grammar was meant to be a recognition (identification or operational) grammar, i.e. a device by which the syntactic structure, and in particular the sentencehood, of a given string of elements of a given language could be determined. This determination had to be formal, i.e. dependent exclusively on the shape and order of the elements, and preferably effective, i.e. leading after a finite number of steps to the decision as to the structure, or structures, of the given string. This aim was to be achieved by assuming that each of the finitely many elements of the given natural language had finitely many syntactic functions, by developing a suitable notation for these syntactical functions (or categories, as we became used to calling them, in the tradition of Aristotle, Husserl, and Leśniewski), and by designing an algorithm operating on this notation. More specifically, the assumption was investigated that natural languages have what is known to linguists as a contiguous immediate-constituent structure, i.e. that every sentence can be parsed, according to finitely many rules, into two or more contiguous constituents, each of which either is already a final constituent or is itself parsable into two or more immediate constituents, etc. This parsing was not supposed to be necessarily unique. Syntactically ambiguous sentences allowed for two or more different parsings. Examples should not be necessary here. The variation introduced by Ajdukiewicz into this conception of linguistic structure, well known in a crude form already to elementary school students, was to regard the combination of constituents into constitutes (or syntagmata) not as a concatenation inter pares but rather as the result of the operation of one of the constituents (the governor, in some terminologies) upon the others (the governed or dependent units). The specific form which the approach took with Ajdukiewicz was to assign to each word (or other appropriate element) of a given natural language a finite number of fundamental and/or operator categories and to employ an extremely simple set of rules operating upon these categories, so-called 'cancellation' rules. Just for the sake of illustration, let me give here the definition of bidirectional categorial grammar, in a slight variation of the one presented in a recent publication of our group [16]. We define it as an ordered quintuple ⟨V, C, σ, R, f⟩, where V is a finite set of elements (the vocabulary), C is the closure of a finite set of fundamental categories, say τ₁, …, τₙ, under the operations of right and left diagonalization (i.e.
whenever α and β are categories, [α/β] and [α\β] are categories), σ is a distinguished category of C (the category of sentences), R is the set of the two cancellation rules [τᵢ/τⱼ], τⱼ → τᵢ and τᵢ, [τᵢ\τⱼ] → τⱼ, and f is a function from V to finite sets of C (the assignment function). We say that a category sequence α directly cancels to β if β results from α by one application of one of the cancellation rules, and that α cancels to β if β results from α by finitely many applications of these rules (more exactly, if there exist category sequences γ₁, γ₂, …, γₙ such that α = γ₁, β = γₙ, and γᵢ directly cancels to γᵢ₊₁, for i = 1, …, n−1). A string x = A₁ … Aₖ over V is defined to be a sentence if, and only if, at least one of the category sequences assigned to x by f cancels to σ. The set of all sentences is then the language determined (or represented) by the given categorial grammar. A language representable by such a grammar is a categorial language. In addition to bidirectional categorial grammars, we also dealt with unidirectional categorial grammars, employing either right or left diagonalization only for the formation of categories, and more specifically with what we called restricted categorial grammars, whose set of categories consists only of the (finitely many) fundamental categories τᵢ and the operator categories [τᵢ\τⱼ] and [τᵢ\[τⱼ\τₖ]] (or, alternatively, [τᵢ/τⱼ] and [τᵢ/[τⱼ/τₖ]]). A heuristically (though not essentially) different approach to the formalization of immediate-constituent grammars was taken by Chomsky, within the framework of his general typology. He looked upon a grammar as a device, or a system of rules, for generating (or recursively enumerating) the class of all sentences. In particular, a context-free phrase structure grammar, a CF grammar for short, may be defined, again in slight variation from Chomsky's original definition, as an ordered quadruple ⟨V, T, S, P⟩, where V is the (total) vocabulary, T (the terminal vocabulary) is a subset of V, S (the initial symbol) is a distinguished element of V−T (the auxiliary vocabulary), and P is a finite set of production rules of the form X → x, where X ∈ V−T and x is a string over V. We say that a string x directly generates y if y results from x by one application of one of the production rules, and that x generates y if y results from x by finitely many applications of these rules (more exactly, if there exist sequences of strings z₁, z₂, …, zₙ such that x = z₁, y = zₙ, and zᵢ directly generates zᵢ₊₁, for i = 1, …, n−1). A string over T is defined to be a sentence if it is generated by S. The set of all sentences is the language determined (or represented) by the given CF grammar. My conjecture that the classes of CF languages and bidirectional categorial languages are identical (in other words, that for each CF language there exists a weakly equivalent bidirectional categorial grammar and vice versa) was proved in 1959 by Gaifman [16], by a method that is too complex to be described here. He proved, as a matter of fact, slightly more, namely that for each CF grammar there exists a weakly equivalent restricted categorial grammar and vice versa.
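The cancellation rules just defined lend themselves to a direct, if naive, implementation. The following sketch is not from [16]; the lexicon and the choice of fundamental categories n and s are invented, and exhaustive search stands in for an efficient recognition algorithm.

from itertools import product

# A category is a string (fundamental) or a tuple:
# ('/', a, b) stands for [a/b] and ('\\', a, b) for [a\b].
f = {
    "John":   ["n"],
    "Mary":   ["n"],
    "sleeps": [('\\', 'n', 's')],                 # n, [n\s] -> s
    "sees":   [('/', ('\\', 'n', 's'), 'n')],     # [[n\s]/n], n -> [n\s]
}

def step(seq):
    """All sequences obtained by one application of a cancellation rule."""
    out = []
    for i in range(len(seq) - 1):
        x, y = seq[i], seq[i + 1]
        if isinstance(x, tuple) and x[0] == '/' and x[2] == y:   # [a/b], b -> a
            out.append(seq[:i] + (x[1],) + seq[i + 2:])
        if isinstance(y, tuple) and y[0] == '\\' and y[1] == x:  # a, [a\b] -> b
            out.append(seq[:i] + (y[2],) + seq[i + 2:])
    return out

def cancels_to(seq, target):
    return seq == (target,) or any(cancels_to(s, target) for s in step(seq))

def is_sentence(words):
    """Sentencehood: some assigned category sequence cancels to 's'."""
    return any(cancels_to(seq, 's') for seq in product(*(f[w] for w in words)))

print(is_sentence(["John", "sleeps"]))        # True
print(is_sentence(["John", "sees", "Mary"]))  # True
print(is_sentence(["sees", "John"]))          # False

Since every rule application shortens the sequence by one category, the search always terminates, which illustrates the effectiveness requirement stated above.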
This equivalence proof was preceded by another in which it was shown that the notion of a finite state grammar, FS grammar for short, occupying the lowest position in Chomsky's hierarchy of generation grammars, was equivalent to that of a finite automaton, in the sense of Rabin and Scott [17], which can be viewed as another kind of recognition grammar. The proof itself was rather straightforward and almost trivial, relying mainly on the equivalence of deterministic and non-deterministic finite automata, shown by Rabin and Scott. It has been adequately described in a recently published paper [18].

Chomsky had already shown that the FS languages formed a proper subclass of the CF languages. We have recently been able to prove [19] that the problem whether a CF language is also representable by an FS grammar (a problem which has considerable linguistic importance) is recursively unsolvable. The method used was reduction to Post's correspondence problem, a famous problem in mathematical logic which was shown by Post [20] to be recursively unsolvable.

Among other results recently obtained, let me only mention the following: whereas FS languages are, in view of the equivalence of FS grammars to finite automata and well-known results of Kleene [21] and others, closed under various Boolean and other operations, CF languages whose vocabulary contains at least two symbols are not closed under complementation and intersection, though closed under various other operations. The union of two CF languages is again a CF language, and a representation can be effectively constructed from the given representations. The intersection of a CF language and an FS language is a CF language.

Undecidable are such problems as the equivalence problem between two CF grammars, the inclusion problem of languages represented by CF grammars, the problem of disjointedness of such languages, etc. In this connection, interesting relationships have been shown to exist between CF grammars and two-tape finite automata, as defined and treated by Rabin and Scott, for which the disjointedness problem of the sets of acceptable tapes is similarly unsolvable.
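The last of these closure results can be made palpable by a sketch of the classical construction behind it: from a CF grammar (here assumed, for convenience, to have rules of the two forms A → B C and A → a only) and a finite automaton, one builds a CF grammar for the intersection whose auxiliary symbols are triples (state, auxiliary symbol, state). The toy grammar and automaton below are invented illustrations:

    CFG = {                                   # a CF grammar for { a^n b^n }
        'S': [('A', 'B'), ('A', 'X')],
        'X': [('S', 'B')],
        'A': [('a',)],
        'B': [('b',)],
    }
    STATES = ['q0', 'q1']                     # a finite automaton for a*b+
    DELTA = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1', ('q1', 'b'): 'q1'}
    INITIAL, FINAL = 'q0', ['q1']

    def intersect(cfg, start):
        """CF grammar for L(cfg) intersected with the automaton's language."""
        new = {}
        for lhs, alternatives in cfg.items():
            for rhs in alternatives:
                if len(rhs) == 1:    # A -> a yields (q,A,r) -> a iff delta(q,a) = r
                    for q in STATES:
                        r = DELTA.get((q, rhs[0]))
                        if r is not None:
                            new.setdefault((q, lhs, r), []).append(rhs)
                else:                # A -> B C yields (q,A,r) -> (q,B,s)(s,C,r)
                    for q in STATES:
                        for s in STATES:
                            for r in STATES:
                                new.setdefault((q, lhs, r), []).append(
                                    ((q, rhs[0], s), (s, rhs[1], r)))
        return new, [(INITIAL, start, f) for f in FINAL]

    grammar, start_symbols = intersect(CFG, 'S')
    print(len(grammar), 'triple non-terminals; start symbols:', start_symbols)

(Useless triples are not pruned away; the sketch is meant only to exhibit why the result is again a CF grammar.)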
A particular proper subset of the CF languages, apparently of greater importance for the treatment of programming languages, such as ALGOL, than for natural languages, is the set of so-called sequential languages, studied in particular by Ginsburg [22], [23] and Shamir [24]. I have no time for more than just this remark.

In a somewhat different approach, closely related to the classical notions of government and syntagmata, the notions of dependency grammars and projective grammars have been developed by Hays [25], Lecerf [26], and others, including some Russian authors, utilizing ideas most fully presented in Tesnière's posthumous book [27], and are thought to be of particular importance for machine translation. However, it has not been too difficult to guess, and has indeed been rigorously proven by Gaifman [28], that these grammars, which are being discussed in other lectures presented in this Institute, are equivalent to CF grammars in a certain sense, which is somewhat stronger than the one used above, but that this is not necessarily so with regard to what might be called natural strong equivalence. More precisely, whereas for every dependency grammar there exists, and can be effectively constructed, a CF grammar naturally and strongly equivalent to it, this is not necessarily the case in the opposite direction, not if the CF grammar is of infinite degree.

Let me add that the dependency grammars are very closely related to a type of categorial grammars which I discussed in earlier publications [11] but later on replaced by grammars of a seemingly simpler structure. In the original categorial grammars, I did consider categories of the form βm...β2β1\α/γ1γ2...γn, with α, the βi, and the γj being either fundamental or operator categories themselves, with a corresponding cancellation rule. It should be rather obvious how to transform a dependency grammar into a categorial grammar of this particular type. These grammars are equivalent to grammars in which all categories have the form β\α/γ, where α, β, and γ are fundamental categories and where β and γ may be empty (in which case the corresponding diagonal will be omitted, too, from the symbol). Finally, in view of Gaifman's theorem mentioned above, these grammars in their turn are equivalent to grammars all of whose categories are of the form α/β (or β\α), with the same conditions. I think that these remarks (strongly connected with considerations of combinatory logic [9]) should definitely settle the question of the exact formal status of the dependency grammars and their like. One side result is that dependency grammars are weakly reducible to binary dependency grammars, i.e. grammars in which each unit governs at most two other units. This result, I presume, is not particularly surprising, especially if we remember that the equivalence proven will in general not be a natural one.

Still another class of grammars, sometimes [29] called push-down store grammars and originating, though not in a very precise form, with Yngve [30], [31], has recently been shown by Chomsky to be once more equivalent to CF grammars, again to nobody's particular surprise. Since push-down stores are regarded by many workers in the fields of MT and programming languages as particularly useful devices for the mechanical determination of the syntactic structure of sentences belonging to natural and programming languages, respectively, this result should be helpful in clarifying the exact scope of those schemes of syntactic analysis which are based on these devices.

Of theoretically greater importance is the fact that push-down store grammars form a proper subset of linear bounded automata, one of the many classes of automata lying between Turing machines and finite automata which have recently been investigated by many authors, due to the fact that Turing machines are too idealized to be of much direct applicability, whereas finite automata are too restricted for this purpose. The investigation of these automata, initiated by Myhill [32], is, however, still in its infancy, similar to that of many other classes of automata reported by McNaughton in his excellent review [33]. Still more in the dark is the linguistic relevance of all these models, though, judging from admittedly limited experience, almost every single one of them will sooner or later be shown to have such relevance.

To wind up this discussion, let me only mention that during the last few years various classes of grammars whose potency is intermediate between FS and CF grammars have been investigated.
These intermediate grammars will probably turn out to be of greater importance for the study of grammars of programming and other artificial formalized languages than for natural languages. In addition to the sequential grammars mentioned before, let me now mention the linear and metalinear grammars studied by Chomsky.

It might be useful to present, at this stage, a picture of the various grammars discussed in the present section, together with the two important classes of transformational and context-sensitive phrase structure grammars (which I could not discuss, for lack of time), in the form of a directed graph based on the (partial) ordering relation Determines-a-more-extensive-class-of-languages-than, with staggered lines indicating that the exact relationship has not yet been fully determined. (The diagram itself is not reproduced here.)

The last two questions I would now like to discuss are the following:
(1) In view of the fact that so many models of linguistic structure have turned out to be (weakly) equivalent, how do they compare from the point of view of pedagogy and MT-directed application?
(2) What is the degree of adequacy with which natural languages can be described by CF grammars and their equivalents?

As to the first question, I am afraid that not much can be said at this stage. I am not aware of any experiments made as yet to determine the pedagogical status of the various equivalent grammars. Some programmatic statements have been made on occasion, but I would not want to attribute much weight to them. I myself, for instance, have a feeling that the governor-dependent terminology of the dependency and projective grammars has an unfortunate, and intrinsically, of course, unwarranted, side-effect of strengthening dogmatic approaches to the decision of what governs what. The operator-operand terminology of the categorial grammars seems to be emotionally less loaded, but again, these are surely minor issues. Altogether, I would advocate the performance of pedagogical experiments in which the same miniature language would be taught with the help of various equivalent grammars. I do not foresee any particular complications for such projects.

Turning now to the second question, which has been much discussed during the last few years, often with great fervor, the situation should be reasonably clear. FS grammars are definitely inadequate for describing any natural language, unless this last term is mutilated for what must be regarded as arbitrary and ad hoc reasons. I am sorry that Yngve's otherwise extremely useful recent contributions did becloud this issue. As to CF grammars, the situation is more complex and more interesting. It is almost, but not quite, certain that such grammars, too, are inadequate in principle, for reasons which I shall not repeat here, since they have been stated many times in the recent literature and been authoritatively restated by Chomsky [28]. But of even greater importance, particularly for applications such as MT, is the fact that such grammars seem definitely to be inadequate in practice, in the sense that the number and complexity of grammatical rules of this type, in order to achieve a tolerable, if not perfect, degree of adequacy, will have to be so immense as to defeat the practical purpose of establishing these rules. Transformational grammars seem to have a much better chance of being both adequate and practical, though this point is still far from being settled.
In view of this fact, which does not appear to have been seriously challenged by most workers on MT, it is surprising to see that most, if not all, current programs of automatic syntactic analysis are based on impractical grammars. In some groups, where the impracticability and/or inadequacy has received serious attention, attempts are being made at present to classify the 'recalcitrant' phenomena and to find ad hoc remedies for them. You will not be surprised if I say that I take a rather dim view of these attempts. But this already leads to issues which I intend to discuss in subsequent sections.

Extremely little is known about syntactic complexity, though this notion has come up in many discussions of style, readability and, more recently, of the mechanization of syntactic analysis. Its explication has been universally regarded as a matter of great difficulty, this probably being the reason why it has also been, to my knowledge, universally shunned. When such authors as Flesch [34] developed their readability measures, they could not help facing the problem but, unable to cope with it, replaced syntactic complexity in their formulae by length, whose measure poses incomparably fewer problems, while still standing in some high statistical correlation with the elusive syntactic complexity.

Very often one hears, or reads, of an author, a professional group, or even a whole linguistic community being accused of expressing themselves with greater syntactic complexity than necessary. Such slogans as 'What can be said at all, can be said simply and clearly in any civilized language, or in a suitable system of symbols', formulated by the British philosopher C. D. Broad in elaboration of a well-known dictum by Wittgenstein, were used by philosophers of certain schools to criticize philosophers of other schools, and have gained particular respectability in this context. On a less exalted level, most people interested in information processing and, in particular, in the condensation of information, preferably by machine, seem to be convinced that most, if not all, of what is ordinarily said could be said not only in syntactically simpler sentences but in syntactically simple sentences, the analysis of which would be a pleasure for a machine. Often, information-lossless transformation into syntactically simple sentences is regarded as a helpful, perhaps even necessary, step prior to further processing. In the context of machine translation, Harris, e.g., once expressed the hunch that mechanical translation of kernel sentences, which would presumably rank lowest on any scale of syntactic complexity, should be a simpler affair than translation of any old sentences.

It is my conviction that the topic of syntactic complexity is, beyond certain very narrow limits of a vaguely felt consensus, ridden with bias, prejudice and fallacies to such a degree as to make almost everything that has been said on it completely worthless. In particular, I think that the 'Wittgensteinian' slogan mentioned above is misleading in the extreme.
I tend to believe that its attractiveness is due to its being understood not as a statement of fact but rather as a kind of general and vague advice to say whatever one wants to say as simply and clearly 'as possible', something to which one could hardly object, though, as we shall see, even in this interpretation it is not unequivocally good advice, when simplicity is understood as syntactic simplicity, since the price to be paid for reducing syntactic complexity, even when it is 'possible', may well turn out to be too high.

So far, I have been using 'syntactic complexity' in its pretheoretical and unanalysed vague sense. It is time to become more systematic. One should not be surprised that the explication of syntactic complexity to which we shall presently turn will reveal that the pretheoretical term is highly equivocal, though one might well be surprised to learn how equivocal it is.

When I said in the opening phrase that 'extremely little is known about syntactic complexity', I intended the modifier 'extremely little' to be understood literally and not as a polite version of 'nothing'. Such terms as 'nesting', 'discontinuous constituents', 'self-embedding' and 'syntactic depth' are being used with increasing frequency by linguists in general and, perhaps unfortunately so, by applied linguists in particular, especially when programming for machine analysis is discussed. But not until very recently have these notions been provided with a reasonably rigid formal definition, which alone makes possible their responsible discussion. The most recent and most elaborate discussion that has come to my attention is that by Chomsky and Miller [35]. They discuss there various explicata for 'syntactic complexity', with varying degrees of tentativeness, as befits such a first attempt, and I shall make much use of this treatment in what follows.

Let me first discard one notion which, as already mentioned, has a certain prima facie appeal to serve as a possible explicatum for syntactic complexity, namely length, measured, say, by the number of words in the sentence (or in whatever other construction is under investigation). Though, as said before, it is obvious that there should exist a fairly high statistical correlation between syntactic complexity and length, it should be equally obvious that length is entirely inadequate to serve as an explicatum for syntactic complexity. Take as many sentences as you wish of the form '... is ...' (such as 'John is hungry', 'Paul is thirsty', etc.), whose intuitive degree of syntactic complexity is close, if not equal, to the lowest one possible, join them by repeated occurrences of 'and' (a procedure resulting in something like 'John is hungry and Paul is thirsty and Mary is sleepy and ...'), and you will get sentences of any length you wish whose intuitive degree of syntactic complexity should still be close to the minimum. True enough, a sentence of this form containing fifty clauses of the type mentioned, always with different proper names in the first position and different adjectives in the third position, would be difficult to remember exactly. Therefore such a sentence will be 'complex', in one of the many senses of this word, but surely not syntactically so. No normal English-speaking person will have the slightest difficulty in telling the exact syntactic form, up to a parameter, of the resulting sentence, and there will be no increase in this difficulty even if the number of clauses is 100, 1,000, or any number you wish.
In one very important sense of 'understanding', the increased length of sentences of this type will not increase the difficulty of understanding them. And the sense in question is, of course, precisely that of grasping the syntactic structure.

The next remark, prior to presenting some of the more interesting explicata, refers to a fact which I want very much to call to your careful attention. I hope it will not be as surprising to you as it was to me the first time I hit upon it. For a time, I thought that the only relativization needed for explicating syntactic complexity would be the trivial one to a given language. (Logicians, and some linguists, know plenty of examples where the 'same' sentence may belong to entirely different languages; in that case, nobody would be surprised to learn that it also has, or rather that they also have, different degrees of syntactic complexity, relative to their respective languages.) What did shock me, however, though only for a moment until I realized that it could not be otherwise, was that degree of complexity must also be explicated as being relative to a grammar: that the same sentence of the same language may have one degree of complexity when analysed from the point of view of one grammar and a different one when analysed from the point of view of another grammar, and that, of two different sentences, one may have a higher degree of complexity than the other relative to one grammar, but a lower degree relative to another grammar.

This doubtless being the case, may I be allowed a certain amount of speculation for a minute? It is a simple and well-known fact that the same sentence will sometimes be better understood by person A than by person B, though they have about the same IQ, about the same background knowledge, and though they read or hear it with about equal attention, as far as one can make out. Could it be that they are (subconsciously, of course) analysing this same sentence according to different grammars, relative to which this sentence has different degrees of syntactic complexity? Could it be that part of the improvement in understanding obtained through training and familiarization is due to the trainee's learning to employ another grammar (whose difference from the one he was accustomed to employ before might be only minimal, so that the acquisition of this new grammar might not have been too difficult, perhaps)? Could it be that many, if not all, of us work with more than one grammar simultaneously, switching from the one to the other when the employment of the one runs us into trouble, e.g. when according to one grammar the degree of complexity of a given sentence is greater than one can stand? More about this later. Attractive as these speculations are, let me stress that at this moment I don't know of any way of putting them to a direct empirical test. But I wish someone would think up such a way. Let me also add that he who does not like this picture of different grammars for the same language lying peacefully side by side somewhere in our brain may look upon the situation as one system of grammatical rules (the set-theoretical union of the two sets discussed so far) being stored in the brain, and allowing the same sentence to be analysed and understood in two different ways with two different degrees of complexity, with a control element deciding which rules to apply in a given case and allowing the switch to other rules when trouble strikes.
That there are syntactically ambiguous sentences has, of course, always been well known, but I am speaking at the moment about a particular kind of syntactic ambiguity, one that has no semantic ambiguities in its wake, but where the difference in the analysis still creates a difference in comprehensibility. At this point it is probably worthwhile to present an extremely simple example. The English sentence 'John loves Mary' can be analysed (and has been analysed) in two different ways, each of which can be expressed in two different but equivalent notations, simplified for our present purposes: as a labelled bracketing (below) and as a tree diagram (not reproduced here).

    (S (NP John) (VP (Vt loves) (NP Mary)))
    (S (NP John) (Vt loves) (NP Mary))

These analyses correspond to the following two 'grammars', G1 and G2:

    G1: S → NP + VP          G2: S → NP + Vt + NP
        VP → Vt + NP             NP → John, Mary
        NP → John, Mary          Vt → loves
        Vt → loves

or, if you prefer, they both correspond to the grammar G3, which is the set-theoretical union of G1 and G2, and consists therefore of just the rules of G1 plus the first rule of G2. (Both G1 and G2 are, of course, CF grammars; G1 is binary, but G2, and therefore also G3, is not.)

Though the difference in structure assigned to this sentence by the two analyses is palpable, it is less clear whether this difference implies a difference in the intuitive degree of syntactic complexity, and if so, according to which analysis the sentence is more complex. As a matter of fact, good reasons can be given for both views: in the first analysis, more rules are applied, but each rule has a particularly simple form; in the second analysis, fewer rules are applied, but one of them has a more complicated form. This situation seems to indicate that we have more than one explicandum before us, more than one notion which, in the pretheoretical stage, is entitled to be called 'syntactic complexity'.

There are still more aspects to the intuitive uses of 'syntactic complexity', but perhaps it is time to turn directly to the explicata which, hopefully, will take care of at least some of these aspects.

To follow Chomsky once again [35] rather closely, we might introduce the terms 'depth of postponed symbols' and 'node/terminal-node ratio' to denote the following two relevant measures: the first for Yngve's well-known depth measure, which, I trust, will again be explained in his lectures at this Institute, the second for a new concept which has not yet been discussed in the literature. Both measures refer to the tree representing the sentence and are therefore applicable only to such grammars as assign a tree structure to each sentence generated by them.

If we assign, in the Yngve fashion, numbers to the nodes and branches (with the branches leading to the terminal symbols left out), we see that the greatest number assigned to any of the nodes of the first tree is 1, so that its depth of postponed symbols is also 1, whereas the corresponding number for the second tree is 2. On the other hand, the total number of nodes of the first tree is 5 and the number of its terminal nodes is 3, so that its node/terminal-node ratio is 5/3, whereas the corresponding numbers for the second tree are 4, 3, and 4/3, respectively. Each node number (given in parentheses in the omitted diagrams) is equal to the sum of the number assigned to the branch leading to this node and the number of the node from which the branch comes.
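Both measures are easily mechanized. A minimal sketch (Python; the tuple encoding of trees is an assumption made for illustration) reproduces the numbers just quoted for the two analyses:

    # Trees are nested tuples (label, child, ...); leaves are strings.
    TREE1 = ('S', ('NP', 'John'), ('VP', ('Vt', 'loves'), ('NP', 'Mary')))
    TREE2 = ('S', ('NP', 'John'), ('Vt', 'loves'), ('NP', 'Mary'))

    def depth(tree, inherited=0):
        """Depth of postponed symbols: a branch to the i-th of n children
        (counted from the left) adds n - 1 - i to the number inherited
        from above; the depth is the greatest number any node receives.
        Branches leading to terminal symbols are left out."""
        if isinstance(tree, str):
            return 0
        children = tree[1:]
        return max(inherited,
                   *(depth(child, inherited + len(children) - 1 - i)
                     for i, child in enumerate(children)))

    def nodes(tree):
        """Number of non-terminal nodes of the tree."""
        return 0 if isinstance(tree, str) else 1 + sum(nodes(c) for c in tree[1:])

    def terminals(tree):
        return 1 if isinstance(tree, str) else sum(terminals(c) for c in tree[1:])

    for tree in (TREE1, TREE2):
        print(depth(tree), '%d/%d' % (nodes(tree), terminals(tree)))
    # 1 5/3   (first analysis)
    # 2 4/3   (second analysis)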
There are at least three more notions that are entitled to be considered as explicata for other aspects of syntactic complexity. The one that has been most studied is the degree of nesting. The reasons for the attention given to it are that it has been known for a long time that a highly nested sentence causes difficulties in comprehension and, more recently, that it creates troubles for mechanical syntactic analysis. One rough explication of this notion (there are others) might run as follows, again relative to tree grammars: the degree of nesting of a labelled tree is the largest integer m such that there exists in this tree a path through m+1 nodes N0, N1, ..., Nm, with the same or different labels, where each Ni (i ≥ 1) is an inner node in the subtree rooted in Ni−1. The same degree of nesting is also assigned to the terminal expression as analysed by this tree.

A special case of nesting is self-embedding, to whose importance Chomsky has called attention. In order to define the degree of self-embedding of a labelled tree, one has only to replace, in the above definition of the degree of nesting, the phrase 'with the same or different labels' by the phrase 'each with the same label'. (Other definitions are again possible.)
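For what it is worth, these two definitions can also be mechanized. In the sketch below (Python), trees are again nested tuples, and 'inner node' is read, by assumption, as a node whose terminal yield is properly surrounded on both sides by other terminal material of the subtree in question; as just noted, other readings are possible. The example tree is one for 'John whom Ann hates loves Mary', a sentence of the kind discussed immediately below.

    def spans(tree, start=0, acc=None):
        """Collect (label, first, last) yield spans of all non-terminal nodes."""
        if acc is None:
            acc = []
        if isinstance(tree, str):
            return start + 1, acc
        pos = start
        for child in tree[1:]:
            pos, _ = spans(child, pos, acc)
        acc.append((tree[0], start, pos))
        return pos, acc

    def degree(tree, self_embedding=False):
        """Largest m admitting a chain N0, ..., Nm, each node inner in the
        subtree rooted in its predecessor (same label only, if the degree
        of self-embedding is wanted)."""
        _, all_nodes = spans(tree)

        def inner(child, parent):            # strictly inside on both sides
            return parent[1] < child[1] and child[2] < parent[2]

        def longest(node):
            steps = [c for c in all_nodes
                     if inner(c, node) and (not self_embedding or c[0] == node[0])]
            return 1 + max(map(longest, steps)) if steps else 0

        return max((longest(n) for n in all_nodes), default=0)

    NESTED = ('S', ('NP', ('NP', 'John'), ('Ra', 'whom'),
                    ('NP', 'Ann'), ('Vt', 'hates')),
              ('Vt', 'loves'), ('NP', 'Mary'))
    print(degree(NESTED), degree(NESTED, self_embedding=True))   # 1 1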
To present one more stock example, consider a tree with a degree of nesting (equal, in this particular case, to its degree of self-embedding) of 4; its depth, incidentally, is 7 and its node/terminal-node ratio is 21/15 = 7/5. (The tree itself is not reproduced here.) Though this tree could have been derived from a grammar G4 differing from G3 only by containing the additional rules

    NP → NP + Ra + NP + Vt
    Ra → whom

there are very good reasons why sentences of the type 'John whom Ann hates loves Mary' and their ramifications should, in the framework of the whole English language, not be regarded as being produced by a CF grammar containing G4 as a proper part, but rather by a transformational grammar built upon a CF grammar of English containing, in addition, a transformation rule, which I shall not specify here, allowing the derivation of NP1 + Ra + NP3 + Vt + Vt + NP2 from NP1 + Vt + NP2 and NP3 + Vt + NP1. (There is no need to stress that all this is only a very rough approximation to the incomparably more refined treatment which a full-fledged transformational grammar of English would require. The transformational rule, for instance, should refer to the trees representing the strings under discussion rather than to the strings themselves.) It is worthwhile noticing that the node/terminal-node ratio (7/5) of the resulting tree is smaller than the ratios (5/3) of the underlying trees.

The fifth aspect of syntactic complexity is, then, transformational history. I am, of course, not using the term 'measure' now, because it is very doubtful whether measures can be usefully assigned to this concept. So far, no attempt in this direction has been made. I shall, therefore, say no more about this notion here.

It is not particularly difficult to develop these five notions, and many more could be thought of. The decisive questions are twofold: what are the exact formal properties of the various notions and, perhaps even more important, what is their psychological reality, to use a term of Sapir's? In general, one would tend to require that if one sentence is syntactically more complex than another, then, ceteris paribus, it should, perhaps only on the average, create more difficulties in its comprehension. What can we say on this point? Well, very little, and nothing so far under controlled experimental conditions.

Highly nested constructions just don't occur at all in normal speech and very rarely in writing, with the notable exception of logical or mathematical formulae. Their syntactic structure can be grasped only by using extraordinary means, such as going over them more than once and using special marks for pairing off expressions that belong together but between which other expressions have been nested. A formula such as

    [[p ⊃ [q ⊃ [[r ⊃ [s ⊃ t]] ⊃ u]]] ⊃ v]

is certainly not a very complex one among the formulae of the propositional calculus, as they go, but testing its well-formedness would either require some artificial aids, such as the use of a pencil for marking off paired brackets, or the acquisition of a special algorithm based upon a particular counting procedure, or else just an extraordinary (and unanalysed) effort and concentration. It is doubtful whether any effort, without external aids, would suffice to determine that the 'literal' English rendition of the formula as

    If if p then if q then if if r then if s then t then u then v

is well-formed, when one listens to such a sentence without prior warning.
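The 'special algorithm' alluded to is easily specified. A minimal sketch (Python; the space-separated tokenization, and '>' standing in for the horseshoe, are assumptions made for convenience) checks well-formedness directly by the two rules of formation given below:

    def parse(tokens, i=0):
        """Position just after one wf formula starting at i, or None."""
        if i >= len(tokens):
            return None
        if tokens[i].startswith('p'):        # F1: a proper symbol is wf
            return i + 1
        if tokens[i] == '[':                 # F2: [ A > B ] with A, B wf
            j = parse(tokens, i + 1)
            if j is None or j >= len(tokens) or tokens[j] != '>':
                return None
            k = parse(tokens, j + 1)
            if k is None or k >= len(tokens) or tokens[k] != ']':
                return None
            return k + 1
        return None

    def well_formed(formula):
        tokens = formula.split()
        return parse(tokens) == len(tokens)

    # the formula from the text, with '>' for the horseshoe:
    F = '[ [ p > [ q > [ [ r > [ s > t ] ] > u ] ] ] > v ]'
    print(well_formed(F))                    # True
    print(well_formed('[ p > q'))            # False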
It is interesting that in order to explain our difficulties in either uttering or grasping the structure of such sentences we need assume nothing more than that we are finite automata with a finite number of internal states. For Chomsky [36], in effect, has shown that when the number of these states is some number n, then, relative to a given grammar G, there exists a number m (depending on n) such that this device will not be able to correctly analyse the syntactic structure of all sentences whose degree of nesting is greater than or equal to m. (As a matter of fact, Chomsky showed this for the degree of self-embedding rather than for nesting, but the proof can be trivially extended to this case.)

On the incomparably stronger assumptions that natural languages (such as English) can be adequately determined by tree grammars, that human speakers of such a language have at least one such tree grammar stored in their permanent memory, that they utter the sentences of these languages by going through (one of) their tree(s) 'from top to bottom and from left to right', and that all storage required for this process is done in an immediate memory of the push-down store form containing, say, n cells, we arrive at the conclusion that only sentences whose depth of postponed symbols is no higher than n can be uttered by such speakers.

Now, though Yngve continues to believe that there exists good evidence for the soundness of these assumptions, Chomsky has on various occasions [37], [38] expressed his doubts as to this evaluation of the evidence. He believes that most of the positive evidence invoked by Yngve can already be explained on the basis of the weaker assumption mentioned above, whereas he mentions the existence of other evidence which tends to refute Yngve's stronger assumptions though not his own weak one. I have no time to go further into this controversy. Let me only state that Chomsky's arguments seem to me to be the more conclusive ones. This, of course, by no means diminishes the credit due to Yngve for having been the first to have raised certain types of questions that were never asked before, and to have ventured to provide for them interesting answers, though they may well turn out to be the wrong ones.

It is time now to say at least a few words on the 'Wittgensteinian Thesis'. In one sense, this thesis is, of course, perfectly true: after all, all of us do manage to say most of what we have to say in sentences of a low degree of nesting and, if really necessary, could rephrase even those things for the expression of which we do use highly nested strings, such as occur in many mathematical formulae, in syntactically less complex ways, which will presently be investigated. But in this sense, the thesis is no more than a rather uninteresting truism. What Wittgenstein, Broad and the innumerably many other people who invoked this slogan doubtless had in mind was that most, if not all, of the things that are expressed (usually, by such and such an author, by such and such a cultural group, etc.) by sentences with high syntactic complexity could have been expressed by sentences of lower syntactic complexity, without any compensation. In this interesting interpretation, Wittgenstein's Thesis seems to me wrong, almost demonstrably so. I would, on the contrary, want to express and justify, if not really demonstrate, the following 'Anti-Wittgensteinian Thesis': for most languages, and for all interesting (sufficiently rich) ones, there are things worth saying which cannot be expressed in sentences with a low degree of syntactic complexity without a loss being incurred in other communicationally important respects.

Though a fuller justification will have to be postponed for another occasion, let me make here the following remarks. Consider one of the simplest calculi ever invented by logicians, the so-called implicational propositional calculus [39, p. 140]. We are here interested only in its rules of formation, not in its axioms or theorems. The rules of formation of one of the many formulations of this calculus are as follows. Its primitive symbols are the three improper symbols [, ⊃, ] and the infinitely many proper symbols p1, p2, p3, .... Its rules of formation are just the following two:

    F1. Each proper symbol is well-formed (wf).
    F2. Whenever α and β are wf, so is [α ⊃ β]

(with the understanding that nothing is wf unless it is so by virtue of F1 and F2). There exists no bound to the degree of nesting of the wf formulae of this calculus, as is obvious from the series of wf formulae

    p1, [p1 ⊃ p2], [p1 ⊃ [p2 ⊃ p3]], [p1 ⊃ [p2 ⊃ [p3 ⊃ p4]]], ...

It is less obvious, but can at any rate be rigorously proved, that for none of these formulae does there exist in the calculus another formula which is logically equivalent to it but has a lesser degree of nesting. (The term 'logically equivalent' needs explanation in our context, but I shall nevertheless not provide it. For logicians the required explanation would be rather obvious; for non-logicians it would take too much time.) Wittgenstein's Thesis does not hold in this calculus.

Consider now the (logically uninteresting) conjunctional propositional calculus, whose rules of formation are analogous to those of the implicational calculus, except that '⊃' is to be replaced by '∧' in both the list of improper symbols and F2. Here, too, it can be shown, by a somewhat more complicated argument, that for each n there exist wf formulae whose degree of nesting is higher than n such that they are not logically equivalent to any wf formula with a lesser degree of nesting. But there exists the following interesting difference between the two calculi: the conjunctional calculus, as presented here, looks unduly complex.
Since conjunction is 'associative', i.e. since [p1 ∧ [p2 ∧ p3]] and [[p1 ∧ p2] ∧ p3] are equivalent, the brackets fulfil no semantically important function within the calculus and could as well have been omitted from the list of improper symbols, with a corresponding simplification in rule F2. In this version, all wf formulae would have had a degree of nesting of 0, as can easily be verified! True enough, all formulae with at least two conjunction signs would have become syntactically ambiguous, but, in this particular calculus, syntactic ambiguity would not have entailed semantic ambiguity. Syntactic simplification could have been achieved, and in the most extreme fashion, without any semantic loss whatsoever!

This is by no means the case for the implicational calculus. Implication is not associative, so that the syntactic ambiguity introduced by the omission of brackets would have entailed semantic ambiguity, a price no logician could possibly be ready to pay in this connection, though again all resulting formulae would have had a degree of nesting of 0. (As for the conjunctional calculus, as soon as it is combined with some other calculus, say the disjunctional calculus, omission of brackets would again entail semantic ambiguity, since, say, [p1 ∧ [p2 ∨ p3]] and [[p1 ∧ p2] ∨ p3] are not equivalent.)

For those of you who have heard of the so-called Polish bracket-free notation, let me add the following remark. One might have thought that the nesting (which in this particular case is also self-embedding) is due to the use of brackets for scoping purposes, in accordance with standard mathematical usage, since it seems that the brackets 'cause' the branchings to be 'inner' ones, and might therefore have cherished the hope that a bracket-free notation would eliminate, or at least reduce, nesting. But this hope is illusory. Inner branching, thrown out through the front door, would re-enter through the back door. With 'C' as the only improper symbol and F2 changed to 'Whenever α and β are wf, so is Cαβ', expansion of α (though not of β) causes inner branching. Notice further that in Polish notation calculi you cannot introduce syntactic ambiguity, harmless or harmful, even if you want to, by omitting symbols, since there are no special scoping symbols to omit.
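For the bracket-free notation, the corresponding mechanical test is the familiar counting procedure: scanning from the left with a counter that starts at 1, each 'C' adds 1 and each variable subtracts 1, and the string is wf just in case the counter stays positive until the very end and reaches 0 exactly there. A minimal sketch (Python; tokenization again assumed):

    def wf_polish(tokens):
        count = 1
        for i, tok in enumerate(tokens):
            count += 1 if tok == 'C' else -1   # anything else counts as a variable
            if count <= 0 and i < len(tokens) - 1:
                return False                   # a proper prefix is already complete
        return count == 0

    print(wf_polish('C p1 C p2 p3'.split()))   # Cp1Cp2p3, i.e. [p1 ⊃ [p2 ⊃ p3]]: True
    print(wf_polish('C C p1 p2'.split()))      # a dangling 'C': False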
As far as natural languages are concerned, the situation is much more confused. In speech, it seems that we can express distinctions of scope up to a degree of nesting of 3, anything beyond that becoming blurred, whereas in writing things are still worse, punctuation marks not being consistently used for scoping purposes and anyhow not being adequate for this task, with the result that syntactic ambiguities abound, which may or may not be reduced through context or background knowledge. Sometimes, when the resulting semantic ambiguity becomes intolerable, extraordinary measures are taken, such as using scoping symbols like parentheses in ways ordinarily reserved for mathematical formulae only, indentation at various depths, ad hoc abbreviations, etc.

Natural languages have many, so to speak, built-in devices for syntactic simplification. These devices, and their effectiveness, are badly in need of further study, after the extremely interesting beginnings by Yngve [30]. Certain 'simplifications', beloved by editors who are out to split up involved sentences, may well turn out to be spurious and perhaps even downright harmful, in spite of appearances. An editor who rewrites an author's 'Since p and q and r, therefore s' (where you have to imagine the letters p, q, r, and s replaced by sentences which on occasion will themselves have considerable syntactic complexity) as 'p. q. r. Therefore s.' is probably under the illusion that he has simplified something and therefore improved something. Now, he has doubtless replaced one long sentence with a degree of syntactic complexity of, say, n by four shorter sentences, each with a degree of syntactic complexity of at most n−1, and has even used three words less for this purpose. But there is a price connected with this procedure, even a twofold one. First, the word 'therefore' has become semantically much more indefinite. What for? 's, for r.', or 's, for q and r.', or 's, for p and q and r.'? (And this might not be all: p will be preceded by other sentences, so that, at least from a purely syntactic point of view, it is totally indefinite how far back one has to go in the list of possible antecedents to s.) Secondly, even if the exact antecedent is settled, in order to understand the full content of the argument and to judge its validity, the reader (or listener) will have to recall, or re-read, the antecedent (which, so let us speculate, might have been removed into some larger, more permanent and less easily accessible storage than the immediate memory it was occupying during the syntactic processing), with the result that the overall economy of the 'improvement' is, to say the least, very doubtful. There is at least a good chance that the total effort required of the receiver of the message will be higher in the case of the split-up sentence than with regard to the original sentence, though it might well be easier on the sender, had he wanted to express himself originally in this less definite way. (I used to teach geometry in high school and still remember the type of student who, when required to demonstrate a certain theorem, would start rattling off a list of congruences or inequalities, as the case might be, and finish with a triumphant 'Therefore ...' or 'From this it follows that ...'. And he was not even wrong, because from his list, and in accordance with certain theorems already proved, his conclusion did indeed follow. Except that he left the task of finding out how, in detail, the conclusion followed from the premises to the listeners, including myself in that case, and provided no indication of the fact that he himself knew the details.)

An investigation, recently begun in Jerusalem, seems to lead to interesting results as to the mutual relationships between (semantic) equivalence among the sentences of a given formal system, the (syntactic) simplicity of these sentences, and the existence of a recursive simplification function for this system. The results will be published in a forthcoming Technical Report. Let me only mention here one of the more significant results (to nobody's particular surprise, I hope): the existence of a syntactic simplification algorithm is rather the exception, and the proof of such existence, where it is possible at all, will in general require that the system fulfil fairly tough conditions. The details, unfortunately, require a good knowledge of recursive function theory and shall therefore not be given here.

As already mentioned in the opening sentence of Section 1, many of us believe that during the last few years we have gained valuable insights into the relationship between theory and observation in science.
I myself have already tried on a few occasions to apply these insights to certain controversial issues of modern linguistics [40], [41]. I would now like to do the same with regard to the central term of linguistics, namely 'language' itself. As you will soon realize, this methodological point is of vital importance for the so-called 'research methodology' of MT, and insufficient understanding of it has already caused superfluous controversies.

The term 'language' has, of course, been 'defined' innumerably many times, but the fact that these definitions are usually mutually inconsistent, at least at first sight, has equally often been forgotten and neglected, so that seemingly contradictory statements about 'language' were usually interpreted as inconsistent statements about the same explicatum (in Carnap's terminology) rather than as consistent statements about different explicata.

You will, for instance, find in the literature that language has often been treated as a set of sentences (or utterances, which two terms will not be distinguished for the moment). This, of course, is an abstraction from ordinary usage, and has been recognized as such. Leaving aside for our present purposes the discussion of how good and useful this abstraction is, let me point out that the characterization can be understood (and has been understood) in at least the following five senses:

(1) A given set of utterances, such as recorded on a certain tape by so-and-so on such-and-such an occasion, or of inscriptions, found on such-and-such a tablet. Such sets are, of course, finite, and most of them contain relatively few members. They can be, and sometimes are, represented as lists, under certain transcriptions. As a matter of fact, such sets are only exceptionally called 'languages', the more usual term being 'corpus'.

(2) The set of all utterances (spoken and/or written) made until July 1962, say, by the members of such-and-such a community during their lifetime until then. This set is certainly finite, too, but cannot, in general, be presented in list form, and is rather indefinite, due to the indefiniteness of the term 'community' and for dozens of other obvious reasons, such as those centring around idiolects, dialects, and bilingualness, not to forget the vagueness of 'utterance' itself.

(3) The set of all utterances, past, present, and future, made by members of such a community. This set differs from that treated under (2) only in having a still greater degree of indeterminacy.

(4) The set of all 'possible' utterances of a certain kind. The notion 'possible' occurring in this characterization is notorious for its complexities and philosophical perplexities, and I trust I shall be forgiven if I don't go any deeper into this hornets' nest here. Under most conceptions, this set will turn out to be infinite.

(5) The set of all 'sentences' (well-formed expressions, grammatical expressions, etc.). (For recent discussions of this and related hierarchies see, e.g., Quine [42] and Ziff [43].)

It is true, of course, that (1) is a subset of (2), which again is a subset of (3), but this is not the crucial point. Much more important is that the term 'utterance' occurring in their characterization changes its meaning in the transition from (3) to (4), becoming less observational and more theoretical. At the same time, there is a change from a concrete, physical, three- or four-dimensional entity, a 'token', in Peirce's terminology, to an abstract entity, a 'type'.
[When Paul and John say 'I am hungry.', we have two members of the set (1), since they uttered two different utterance-tokens, but only one member of the set (4), since these tokens are replicas of the same utterance-type.] The elements of set (5), finally, are so overtly theoretical that the term 'utterance' seemed definitely inappropriate for them, and I had to shift to the term 'sentence'. Though these two terms, in ordinary usage as well as in the usage of most linguists, are almost synonymous, I have already suggested once before [41] that we distinguish artificially between them qua technical terms and use 'utterance' for observational entities and 'sentence' for theoretical ones (with the adjective 'possible' performing as a category-shifting modifier, an extremely important and not fully analysed semantical fact). That 'sentence' is ordinarily used in both these senses, as are 'word' and many other terms of this area, is, of course, one of the major sources of confusion and futile controversies.

Sets (2) and (3) have little linguistic importance. Because of their indefiniteness it is difficult to make interesting statements about them. Sets (4) and (5) (in all rigor I should have spoken about the classes of sets (4) and (5)) are by and large identical, at least under certain plausible interpretations of 'possible', the characterization of (4) being what Carnap [44] called 'quasi-psychologistic', while (5) is presumably characterized in an overtly and purely syntactical fashion.

In many linguistic circles, it has been standard procedure to make believe that linguists, in their professional capacity, are dealing with sets of type (1) [or of types (2) or (3)]. This fiction gave their endeavour, so they believed, a closeness-to-earth, an operational solidity, which they were anxious not to lose. In fact, they all, with hardly an exception, dealt with sets of types (4) or (5). All the talk about 'corpora' was only lip-service. Today we know that no science worth its salt could possibly stick to observation exclusively. Whoever is out to describe and nothing else will not describe well. Theorizare necesse est. Though I don't think that it is necessary, or even helpful, to say that every description already contains theoretical elements, as some recent methodologists are fond of stressing, it must be said that theorophobia is a disease, fashionable as it might be. All scientific statements must surely be connected with observations, but this connection can, and must, be much more oblique than many methodological simplicists believe.

Returning from these generalities to our present problem of the relation between language and speech, with MT hovering in the back as a kind of proving ground, it should be superfluous to insist that the proper business of the theoretical linguist is to describe not the actual linguistic performance of some individual (or of so many individuals), this 'natural history' stage being of limited interest only, but his linguistic competence (or that of a certain community of individuals), to use a dichotomy that has recently been much stressed by Miller and Chomsky [35]. Now competence is a disposition, perhaps even a higher-order disposition.
To be a competent native speaker of English means not just to have performed in the past in a certain way, not even that one will (in all likelihood) perform in a certain way when presented with certain stimuli, but rather that one would perform, or would have performed (in all likelihood), in a certain way, were one to be presented (or had one been presented) with certain stimuli, in addition to many other things. I know perfectly well that no competent English speaker will ever in his life be presented with a certain utterance consisting of a few billion words, say of the form 'Kennedy is hungry, and Khrushchev is thirsty, and De Gaulle is tired, ..., and Adenauer is old.', going over the whole present population of the world, but I know, and everybody else knows perfectly well, that were such a speaker, contrary to fact, to be presented with such an utterance, he would understand it as a perfect specimen of an English sentence.

There is no mechanical procedure to move from someone's performance to his competence, just as there is no mechanical procedure to move from any number of physical observations to a physical theory. But just as this fact does not free the physicist from his professional obligation to develop theories, so there is nothing to absolve the linguists from presenting theories of linguistic competence. Testing the validity of these theories will, again as in the other theoretical sciences, in general proceed not in any straightforward way but by standard indirect methods. That John is competent to understand a certain ten-billion-word sentence will not be tested by presenting John with a token of this sentence but, as we all know, by entirely different, oblique methods. For the above sentence, for instance, it would suffice to find out that John understands such sentences as 'Paul is hungry.' and 'David is thirsty.', as well as that he has mastered the rule that whenever α and β are sentences, α followed by 'and' followed by β is a sentence. This latter finding might not be a very simple one or a very secure one, but we do often claim to have found out just such things.

One often hears, in certain philosophical circles as well as among people interested in applied linguistics, statements to the effect that natural languages have no grammar. These people are aware of the paradoxical character of such statements but nevertheless insist that they are true, and even trivially so. Every grammar, so they say, determines a certain fixed, 'static', set of sentences. But a natural language is a living affair, 'dynamic', constantly in change, and it is utterly impossible that the set of sentences should coincide with the set of utterances, as it should for an adequate grammar. It should now be obvious where the fallacy lies in this argument: in the unthinking identification of sentences and utterances, and in the complete misunderstanding of the relation between theory and observation. It is as if one wanted to argue that natural gases obey no physical laws, since these laws apply only to the fictitious 'ideal gases'. (Incidentally, such statements have indeed been made by obscurantists at all times.) To understand the exact relationship between the laws of gases of theoretical physics and the behaviour of real gases requires a lot of methodological sophistication, and no less should be expected for the understanding of the exact relationship between the grammatical rules of an artificial language and the utterances made by the members of the community speaking this language.
Any naive identification will quickly result in paradox, futile discussions, and irrational distrust of theory.

That the question of the adequacy of a given grammar is much more complex than ordinarily assumed does not mean that this question is a pointless one. On the contrary, since there exists no simple criterion for deciding which of two proposed grammars is 'better', more adequate, than the other, the problem of finding any criterion, however partial and indirect, becomes of overwhelming importance. The fact is, of course, that extremely little is known here beyond programmatic declarations. We know that 'grammatical' should not be identified with 'comprehensible', nor is one of these concepts subsumed under the other, but neither are these two concepts incommensurable. In that connection we have the large complex of questions arising around degrees of grammaticalness, deviancy, oddness, and anomaly, all of vital importance to linguists and philosophers alike. Some of you know the valiant beginnings made toward an investigation of this problem by Chomsky, Ziff [43] and others, but it will, I hope, not deter you from following in their footsteps if I state, rather dogmatically, that these attempts are woefully inadequate, while admitting that I have nothing better to offer for the moment.

As soon as it is understood that competence and performance are to be kept clearly apart, one will no longer be tempted to feel oneself obliged to impose upon, say, the English language a grammar which will not allow the generation of sentences of a higher degree of syntactic complexity than some small number, say 4, according to one or the other of the measures discussed in the previous lecture. True enough, 'corresponding' utterances are not normally found in speech or writing, and if artificially produced will not be grasped unless certain artificial auxiliary means are invoked. These limitations of human performance are doubtless of vital importance; they have to be clearly stated and investigated, and should, sooner or later, be backed up by some neurophysiological theory. They are of equal importance for the programming of machines which are charged with determining the syntactic structure of all sentences of any given text of a given language. That sentences of a high degree of complexity can be disregarded for this purpose, because of their extreme rarity or just plain non-occurrence, may allow an organization of the computer's working space that could make all the difference between the economically feasible and the economically Utopian. But in order to do all this, it is by no means necessary to impose these restrictions on the grammar of English as such. Nothing is gained, and much is lost. Not only will certain arbitrary-looking restrictions on the recursive generation rules have to be imposed, thereby increasing the complexity of the grammar to a degree that can hardly be estimated at present, but this procedure is self-defeating. It is done in the name of 'sticking to the brute facts', but doing so in such a crude way will force the adherents of this approach to disregard other brute facts, such as that, with the aid of certain auxiliary means, the syntactic structure of English word sequences of a degree of syntactic complexity of 5, or of 100 for that matter, will be perfectly grasped.
Since these word sequences are not English sentences, according to the grammarians of performance, how come they are understood, and what is the language they belong to?

This does not mean, of course, that restrictions of performance will not reflect themselves in the grammar. I am convinced, e.g., that Professor Yngve made a remark full of insight when he noticed and stressed the fact that by changing its voice from the active to the passive, the syntactic complexity of a given sentence can be reduced. And I have no objection to formulating this insight in the form that there exists a passive in English (and the same or other devices in other languages) in order to allow, among other things, the formulation of certain thoughts in sentences of a lower degree of complexity than would otherwise have been possible. But trying to obliterate the distinction between competence and performance, to say it for the last time, is only a sign of confusion and will breed further confusion. The sooner we get rid of these last traces of extreme operationalism, the better for all of us, including MT research workers.

In order to describe and explain the facts of speech exhaustively and revealingly, a full-fledged, formal theory of language is needed, among many other things. Philosophical prejudice aside, there is no particular merit in keeping this theory 'close to the facts', in assuming that the rules of correspondence which connect the theory (in the narrower sense of the word) with observation will have a particularly simple form. Experience from other sciences should have taught us that such an assumption is baseless. Physics, e.g., has reached its present heights only because the free flight of fancy, 'the free play of ideas', has not been fettered by a narrow conception of scientific methodology. True enough, the particular logical status of these rules of correspondence has still not been deeply enough investigated, and I fully understand the attitude of those who, for this reason, regard this whole business with suspicion and are afraid that the free flight of fancy will reintroduce uncontrollable metaphysics into science in general and linguistics in particular. But I hope that the necessary controls will be developed and better understood in the future and that in the meantime one will manage somehow. Occasional metaphysical aberrations are probably less damaging in the long run than the curtailment of creative scientific imagination.

Let me stress, in this connection, that the extensive use of symbolism in the formulation of generative grammars has induced many linguists to accuse the authors of these formulations of having lost all connection with empirical science and of having indulged instead in some mathematical surrogate. I hope that it is now perfectly clear that this accusation is baseless. A formal grammar of English is an empirical theory of the English language, and its symbolic formulation, while it increases its precision and therefore its testability, by no means turns it into a mathematical theory. When according to a certain grammar 'Sincerity admires John.' turns out not to be a (formal) sentence, whereas this very sequence is considered by someone to be an (intuitive) sentence, then this grammar is to that degree inadequate to his intuitions. It should only be kept in mind that the determination of the intuitive sentencehood of 'Sincerity admires John.' is by no means such a straightforward affair of observation, experimentation and statistics as some people believe.
The notion of 'intuitive sentence' is highly theoretical itself (though without the benefit of a complete theory being formulated to back it up, which fact is, of course, the whole crux of this peculiar modifier 'intuitive'), and observations on utterances of people, or on their reactions to utterances, alone will never settle in any clear-cut way the question of the sentencehood of a particular word sequence. This is as it should be, and only wishful thinking and naive methodology make people believe otherwise. Confirmation and refutation of linguistic theories, as of theories in any other science, is not such a simple operation as one is taught to believe in high school. But the complexity of refutation does not make a linguistic theory empirically irrefutable and therefore does not turn it into a mathematical theory.

My arguments against the feasibility of high-quality fully-automatic translation can be assumed to be well known in this audience. I have gone through them often enough in lectures and publications. I also have the impression that, after occasionally rather strong initial negative reactions, a good number of people who have been active in the field of MT for some years tend more and more to agree with these arguments, though they might prefer a more restrained formulation. On the other hand, the number of research groups which have taken up MT as their major field of activity is still on the increase, and by now there is hardly a country left in Europe and North America which does not feature at least one such group, with Japan, China, India and a couple of South American countries joining them, for good measure. Though a certain amount of involvement in MT, and in particular in its theoretical aspects, is certainly helpful and apt to yield fresh insights into the workings of language, most of the work that is at present going on under the auspices of MT seems to me to be a wanton expenditure of research money that could be put to better use in other fields and, still worse, a deplorable waste of research potential.

The continued interest in MT is sometimes defended on the grounds that though it is indeed extremely unlikely that computers working according to rigid algorithms will ever produce high-quality translations, there still exists a possibility that computers with considerable learning ('self-organizing') abilities will be able, through training and experience, to improve their initial algorithms and thereby constantly improve their output until adequate quality is achieved. I myself mentioned this possibility in some prior publications but refrained from evaluating it, regarding such an evaluation as premature at the time [15], [45].

During the last two years, however, while going through the pertinent literature once more and pondering over the whole issue of artificial intelligence, I came to more radical conclusions, which I would like to set out and defend here. Today, I am convinced that even machines with learning abilities, as we know them today or foresee them according to known principles, will not be able to improve by much the quality of the translation output.

For this purpose, let us notice once more the obvious prerequisites for high-quality human translation. There are at least the following five of them, though deeper analysis would doubtless reveal more:

(1) competent mastery of the source language,
(2) competent mastery of the target language,
(3) good general background knowledge,
(4) expertness in the field, and
(5) intelligence (know-how).
(I admit, of course, that the last of these prerequisites, intelligence, is not too well defined or understood, and I shall therefore have to use it with a good amount of caution.)

All this was surely common knowledge at all times, and certainly known to all of us 'machine translation pioneers' a dozen years ago. I knew then that nothing corresponding to items (3) and (4) could be expected of electronic computers, but thought that (1) and (2) should be within their reach, and entertained some hopes that by exploiting the redundancy of natural language texts better than human readers usually do, we should perhaps be in a position to enable the computers to overcome, at least partly, their lack of knowledge and understanding. True enough, scientists (and almost everyone else) write their articles with a reader in mind who, in addition to having a good command of the language, has a general background knowledge of, say, college level, has so many years of study behind him in the respective field, and is intelligent enough to know how to apply these three factors when called upon to do so. But it could have been, couldn't it, that, perhaps inadvertently, they do introduce sufficient formal clues in their publications to enable a very ingenious team of linguists and programmers to write a translation program whose output, though produced by the machine without understanding, would be indistinguishable from a translation done out of understanding? After all, cases are known of human translations that were done under similar conditions and were not always recognized as such.

Well, it could have been so, but it just didn't turn out this way. For any given source language, there are countless sentences to which a competent human translator will provide in a given target language many, sometimes very many, distinct renderings, which will sometimes differ from each other only by minor idiosyncrasies but will at other times be toto coelo different. The original sentence will very often be, as the standard expression goes, multiply ambiguous by itself, morphologically, syntactically, and semantically, but the competent human translator will render it, in its particular context, uniquely, to the general satisfaction of the human reader. The translator will resolve these ambiguities out of the last three factors mentioned. Though it is undoubtedly the case that some reduction of ambiguity can be obtained through better attention to certain formal clues, and though it has turned out many times that what superficial thinking regarded as definitely requiring understanding could be handled through certain refinements of purely formal methods, it should by now be perfectly clear that there are limits to what these refinements can achieve, limits that definitely block the way to autonomous, high-quality machine translation.

Could not perhaps computers with learning capacity do the job? Let me say rather dogmatically that a close study of one of the most publicized schemes for the mechanization of problem solving, and a somewhat less detailed study of the whole field of artificial intelligence, has revealed an amount of careless and irresponsible talk which is nothing short of appalling and sometimes close to lunatic. There is absolutely nothing in all this talk which shows any promise of being of real help in mechanizing translation.
There is nothing to indicate how computers could acquire what the famous Swiss linguist de Saussure called, at the beginning of this century, the faculté de langage, an ability which is today innate in every human being but which took evolution hundreds of millions of years to develop. Let nobody be deceived by the term 'machine language', which may be suggestive for other purposes but which has turned out to be detrimental in the present context. Surely computers can manipulate symbols if given the proper instructions, and they do it splendidly, many times quicker and more safely than humans, but the distance from symbol manipulation to linguistic understanding is enormous, and loose talk will not diminish it.

Though certain electronic devices (such as perceptrons) have been built which can be 'trained' to perform certain tasks (such as pattern recognition) and indeed perform better after training than before, and though computers have been programmed to do certain things (such as playing checkers) and do these things better after a period of learning than before, it would be disastrous to extrapolate from these primitive exhibitions of artificial intelligence to something like translation. There just is no serious basis for such extrapolation. As to checkers, the definition of 'legal move' is extremely simple and is, of course, given to the computer in full. After a few years of work, the inventor of the checker-playing program [46] succeeded in formalizing a good set of strategies, so that the training had nothing more to achieve than to introduce certain changes in the rank-ordering of these strategies. There never was any question of training the computer to discover the rules of checkers, or to expand an incomplete set of rules into a complete one, or to add new strategies to those given it beforehand. But some people do talk about letting computers discover rules of grammar, or expand an incomplete set of such rules fed into them, by going over large texts and using 'induction'. Let me repeat: this talk is quite irresponsible, and 'induction' is nothing but a magic word in this connection. All attempts at formalizing what they believe to be inductive inference have completely failed, and inductive inference machines are pipe dreams even more than autonomous translation machines.

Now children do learn, as we all know, their native language up to an almost complete mastery of its grammar by the time they are four or five years old. But by the time they reach this age, they have heard (and spoken) surely no more than a few hundred thousand utterances in their native language (only a part of which are good textbook specimens of grammatical sentences). If they succeeded in mastering the grammar, apparently 'by induction' from these utterances, why shouldn't a computer be able to do so? Even if we add the fact that these children were also told that so many word sequences were not grammatical sentences - whatever the form was by which they were given these pieces of instruction - could not the same procedure be mirrored for computers? Well, the answer to these two questions can be nothing but an uncompromising No.
The children are able to perform as splendidly as they do because, in addition to the training and learning, their brain is not a tabula rasa general-purpose computer but a computer which, after all those hundreds of millions of years of evolution mentioned before, is also special-purpose, structured in such a way that it possesses the unique faculté de langage which makes it so different from the brain of mice, monkeys, and machines. The fact that we know close to nothing about this structure does not turn the previous statement into a scholastic truism. Years of most patient and skilful attempts at teaching monkeys to use language intelligently succeeded in nothing better than making them use four single words with understanding, and monkeys' brains are in many respects vastly superior to those of computers. True enough, computers can do many things better than monkeys or humans, computing for instance, but then we know the corresponding algorithms and know how to feed them into the computer. In some cases we know algorithms which, when fed into the computer, will enable it to construct for itself computing algorithms out of other data and instructions that can be fed into it. But nothing of the kind is known with respect to linguistic abilities. So long as we are unable to wire or program computers so that their initial state will be similar to that of a newborn human infant, physically or at least functionally, let's forget about teaching computers to construct grammars.

Let me now turn to the first two items. What is the outlook for computers to master a natural language to approximately the same degree as does a native speaker of such a language? And by 'mastering a language' I now mean, of course, only a mastery of its grammar, i.e. vocabulary, morphology, and syntax, to the exclusion of its semantics and pragmatics. Until recently, I think, most of us who dealt with MT at one time or another believed not only that this aim was attainable but that it would not be so very difficult to attain, for the practical purpose at hand. One realized that the mechanization of syntactic analysis, based on this mastery, would lead on occasion to multiple analyses whose final reduction to a unique analysis would then be relegated to the limbo of semantics, but one did not tend to take this drawback very seriously. It seems that here, too, a more sober appraisal of the situation is indicated, and is already gaining ground, if I am not mistaken.
More and more people have become convinced that the inadequacies of present methods of mechanical determination of syntactic structure, in comparison with what competent and linguistically trained native speakers are able to do, are due not only to the fact that we don't yet know enough about the semantics of our language - though this is surely true enough - but also to the perhaps not too surprising fact that the grammars which were in the back of the minds of almost all MT people were of too simple a type, namely of the so-called immediate constituent type, though it is quite amazing to see how many variants of this type came up in this connection.

Leaving aside the question of the theoretical inadequacy of immediate constituent grammars for natural languages, the following fact has come to the fore during the last few years: if one wants to increase the degree of approximate practical adequacy of such grammars, one has to pay an enormous price for this, namely a proliferation of rules (partly, but not wholly, caused by a proliferation of syntactic categories) of truly astronomic nature. The dialectic of the situation is distressing: the better the understanding of linguistic structure, and the greater our mastery of the language, the larger the set of grammatical rules we need to describe the language, the heavier the preparatory work of writing the grammar, and the costlier the machine operations of storing and working with such a grammar.

It is very often said that our present computers are already good enough for the task of MT and will be more than sufficient in their next generation, but that the bottleneck lies mostly in our insufficient understanding of the workings of language. As soon as we know all of it, the problem will be licked. I shall not discuss here the extremely dubious character of this 'knowing all of it', but only point out that the more we know about linguistic structure, the more complex the description of this structure will become, so long as we stick to immediate constituent grammars. It is known that in some cases transformational grammars are able to reduce the complexity of the description by orders of magnitude. Whether this holds in general remains to be seen, but the time has come for those interested in the mechanical determination of syntactic structure, whether for its own sake, for MT, or for other applications, to get out of the self-imposed straitjacket of immediate constituent grammars and start working with more powerful models, such as transformational grammars.

Let me illustrate by just one example: one of the best programs in existence, on one of the best computers in existence, recently needed twelve minutes (and something like $100 on a commercial basis) to provide an exhaustive syntactic analysis of a 35-word sentence [47]. I understand that the program has been improved in the meantime and that the time required for such an analysis is now closer to one minute. However, the output of this analysis is multiple, leaving the selection of the single analysis which is correct in accordance with context and background to other parts of the program or to the human post-editor. But there are other troubles with using immediate constituent grammars only for MT purposes. In a lecture to this Institute, Mr. Gross gave an example of a French sentence in the passive voice which could be translated into English only by ad hoc procedures so long as its syntactic analysis was made on an immediate constituent basis only.
The translation into English is straightforward as soon as the French sentence is first detransformed into the active voice. A grammar which is unable to provide this conversion, besides being scientifically unsatisfactory, will increase the difficulties of MT.

I would like to return to what is perhaps the most widespread fallacy connected with MT, the fallacy I call, in variation of a well-known term of Whitehead's, the Fallacy of Misplaced Economy. I refer to the idea that indirect machine translation through an intermediate language will result in considerable to vast economies over direct translation from source to target language, on the obvious condition that, should MT turn out to be feasible at all, in some sense or other, many opportunities for simultaneous translation from one source language into many target languages (and vice versa) will arise. I have already once before discussed both the attractiveness of this idea and the fallaciousness of the reasoning behind it. Let me therefore discuss here at some length only what I regard as the kernel of the fallacy.

The following argument has great prima facie appeal: Assume that we deal with ten languages, and that we are interested in translating from each language into every other, i.e. altogether ninety translation pairs. Assume, for simplicity's sake, that each translation algorithm - never mind the quality of the output - requires 100 man-years. Then the preparation of all the algorithms will require 9000 man-years. If one now designates one of these languages as the pivot language, then only eighteen translation pairs will be needed, requiring 1800 man-years of preparation, an enormous saving. True enough, translation time for any of the remaining seventy-two language pairs will be approximately doubled, and the quality of the output will be somewhat reduced, but this would be a price worth paying. (In general, the argument is presented with some artificial language serving as the pivot. Though this move changes the appeal of the argument for the better - since this artificial pivot language is supposed to be equipped with certain magical qualities - as well as for the worse - since the number of translation algorithms now increases to twenty - I don't think that the substance of the following counterargument is thereby weakened.) However, in order to counteract even this deterioration, let us double our effort and spend, say, 200 man-years on the preparation of each of the algorithms for translating to and from the pivot language. We would still wind up with no more than 3600 man-years of work vs. the 9000 originally needed. Well?

The fallacy, so it seems to me, lies in the following: the argument would hold if the preparation of the ninety algorithms were to be done independently and simultaneously by different people, with nobody learning from the experience of his co-workers. This is surely a highly unrealistic assumption. If preparing the Russian-to-English and German-to-English algorithms were to take 100 man-years each when done this way, there can be no doubt that preparing the German-to-English algorithm after completion (or even partial completion) of a successful Russian-to-English algorithm will take much less time, perhaps half as much. The next pair, say Japanese-to-English, will take still less time, and so on. All these figures being utterly arbitrary, I don't think we should go on bothering about the convergence of this series.
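Purely for illustration, the naive estimate and the learning-curve counterargument can be put side by side. The sketch below is mine, not part of the original argument, and the halving factor and the 10 man-year floor are assumptions every bit as arbitrary as the figures above:

    # Hypothetical man-year arithmetic for N languages; all figures are
    # illustrative assumptions, not measurements.
    N = 10
    direct_pairs = N * (N - 1)    # 90 ordered source-target pairs
    pivot_pairs = 2 * (N - 1)     # 18 pairs to and from the pivot language

    # Naive estimate: every algorithm is prepared independently.
    print(direct_pairs * 100)     # 9000 man-years, direct approach
    print(pivot_pairs * 200)      # 3600 man-years, 'double precision' pivot

    # Counterargument: each completed pair halves the cost of the next,
    # down to an assumed floor of 10 man-years.
    def total_with_learning(pairs, first_cost, floor=10.0):
        total, cost = 0.0, float(first_cost)
        for _ in range(pairs):
            total += cost
            cost = max(cost / 2, floor)
        return total

    print(total_with_learning(direct_pairs, 100))   # 1047.5 man-years
    print(total_with_learning(pivot_pairs, 200))    # 517.5 man-years

Under such assumptions the ninety direct algorithms no longer cost anything like 9000 man-years, and the residual saving of the pivot approach has to be weighed against its doubled translation time and reduced output quality.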
Though we might still wind up with a larger time needed for the preparation of the ninety than of the eighteen 'double precision' algorithms, it is doubtful, to say the least, whether the overall quality/preparation-time/translation-time balance would be in favour of the pivot-language approach.

Add to this the fact that 100 man-years would be enough, by assumption, to start a working MT outfit along the direct approach, whereas 400 man-years will be needed even to start translating the first pair along the indirect approach, and the initial appeal of the intermediate-language idea should completely vanish, when judged from a practical point of view. As to its speculative impact, enough has been said on other occasions.

Autonomous, high-quality machine translation between natural languages according to rigid algorithms may safely be considered as dead. Such translation on the basis of learning abilities is still-born. Though machines could doubtless provide a great variety of aids to human translation, so far in no case has the economic feasibility of any such aid been proven, though the outlook for the future is not all dark. So much for the debit side. On the credit side of the past MT efforts stands the enormous increase of interest which has already begun to pay off, not only in an increased understanding of language as such, but also in such applications as the mechanical translation between programming languages. But this could already be a topic for another Institute.
Syntactic Complexity

Extremely little is known about syntactic complexity, though this notion has come up in many discussions of style, readability, and, more recently, of mechanization of syntactic analysis. Its explication has been universally regarded as a matter of great difficulty, this probably being the reason why it has also been, to my knowledge, universally shunned. When such authors as Flesch [34] developed their readability measures, they could not help facing the problem but, unable to cope with it, replaced syntactic complexity in their formulae by length, whose measure poses incomparably fewer problems, while still standing in some high statistical correlation with the elusive syntactic complexity.

Very often one hears, or reads, of an author, a professional group, or even a whole linguistic community being accused of expressing themselves with greater syntactic complexity than necessary. Such slogans as 'What can be said at all, can be said simply and clearly in any civilized language, or in a suitable system of symbols', formulated by the British philosopher C. D. Broad in elaboration of a well-known dictum by Wittgenstein, were used by philosophers of certain schools to criticize philosophers of other schools, and have gained particular respectability in this context. On a less exalted level, most people interested in information processing and, in particular, in the condensation of information, preferably by machine, seem to be convinced that most, if not all, of what is ordinarily said could be said not only in syntactically simpler sentences but in syntactically simple sentences, the analysis of which would be a pleasure for a machine. Often, information-lossless transformation into syntactically simple sentences is regarded as a helpful, perhaps even necessary, step prior to further processing. In the context of machine translation, Harris, e.g., once expressed the hunch that mechanical translation of kernel sentences, which would presumably rank lowest on any scale of syntactic complexity, should be a simpler affair than translation of any old sentences.

It is my conviction that the topic of syntactic complexity is, beyond certain very narrow limits of a vaguely felt consensus, ridden with bias, prejudice and fallacies to such a degree as to make almost everything that has been said on it completely worthless. In particular, I think that the 'Wittgensteinian' slogan mentioned above is misleading in the extreme. I tend to believe that its attractiveness is due to its being understood not as a statement of fact but rather as a kind of general and vague advice to say whatever one wants to say as simply and clearly 'as possible', something to which one could hardly object, though, as we shall see, even in this interpretation it is not unequivocally good advice: when simplicity is understood as syntactic simplicity, the price to be paid for reducing syntactic complexity, even when such a reduction is 'possible', may well turn out to be too high.

So far, I have been using 'syntactic complexity' in its pretheoretical and unanalysed vague sense.
It is time to become more systematic. One should not be surprised that the explication of syntactic complexity to which we shall presently turn will reveal that the pretheoretical term is highly equivocal, though one might well be surprised to learn how equivocal it is.

When I said in the opening phrase that 'extremely little is known about syntactic complexity', I intended the modifier 'extremely little' to be understood literally and not as a polite version of 'nothing'. Such terms as 'nesting', 'discontinuous constituents', 'self-embedding' and 'syntactic depth' are being used with increasing frequency by linguists in general and - perhaps unfortunately so - by applied linguists in particular, especially when programming for machine analysis is discussed. But not until very recently have these notions been provided with the reasonably rigid formal definitions which alone make their responsible discussion possible. The most recent and most elaborate discussion that has come to my attention is that by Chomsky and Miller [35]. They discuss there various explicata for 'syntactic complexity', with varying degrees of tentativeness, as befits such a first attempt, and I shall make much use of this treatment in what follows.

Let me first discard one notion which, as already mentioned, has a certain prima facie appeal as a possible explicatum for syntactic complexity, namely length, measured, say, by the number of words in the sentence (or in whatever other construction is under investigation). Though, as said before, it is obvious that there should exist a fairly high statistical correlation between syntactic complexity and length, it should be equally obvious that length is entirely inadequate to serve as an explicatum for syntactic complexity. Take as many sentences as you wish of the form '... is ...' (such as 'John is hungry', 'Paul is thirsty', etc.), whose intuitive degree of syntactic complexity is close, if not equal, to the lowest one possible, join them by repeated occurrences of 'and' (a procedure resulting in something like 'John is hungry and Paul is thirsty and Mary is sleepy and ...'), and you will get sentences of any length you wish whose intuitive degree of syntactic complexity should still be close to the minimum. True enough, a sentence of this form containing fifty clauses of the type mentioned, always with different proper names in the first position and different adjectives in the third position, would be difficult to remember exactly. Therefore such a sentence will be 'complex', in one of the many senses of this word, but surely not syntactically so. No normal English-speaking person will have the slightest difficulty in telling the exact syntactic form, up to a parameter, of the resulting sentence, and there will be no increase in this difficulty even if the number of clauses is 100, 1000, or any number you wish. In one very important sense of 'understanding', the increased length of sentences of this type will not increase the difficulty of understanding them. And the sense in question is, of course, precisely that of grasping the syntactic structure.

The next remark, prior to presenting some of the more interesting explicata, refers to a fact which I want very much to call to your careful attention. I hope it will not be as surprising to you as it was to me the first time I hit upon it. For a time, I thought that the only relativization needed for explicating syntactic complexity would be the trivial one to a given language.
(Logicians, and some linguists, know plenty of examples where the 'same' sentence may belong to entirely different languages; in that case, nobody would be surprised to learn that it also has - or rather that they also have - different degrees of syntactic complexity, relative to their respective languages.) What did shock me, however, though only for a moment, until I realized that it could not be otherwise, was that degree of complexity must also be explicated as being relative to a grammar: the same sentence of the same language may have one degree of complexity when analysed from the point of view of one grammar and a different one when analysed from the point of view of another grammar, and, of two different sentences, one may have a higher degree of complexity than the other relative to one grammar, but a lower degree relative to another grammar.

This doubtless being the case, may I be allowed a certain amount of speculation for a minute? It is a simple and well-known fact that the same sentence will sometimes be better understood by person A than by person B, though they have about the same IQ, about the same background knowledge, and though they read or hear it with about equal attention, as far as one can make out. Could it be that they are (subconsciously, of course) analysing this same sentence according to different grammars, relative to which this sentence has different degrees of syntactic complexity? Could it be that part of the improvement in understanding obtained through training and familiarization is due to the trainee's learning to employ another grammar (whose difference from the one he was accustomed to employ before might be only minimal, so that the acquisition of this new grammar might not have been too difficult, perhaps)? Could it be that many, if not all, of us work with more than one grammar simultaneously, switching from the one to the other when the employment of the one runs us into trouble, e.g. when according to one grammar the degree of complexity of a given sentence is greater than one can stand? More about this later. Attractive as these speculations are, let me stress that at this moment I don't know of any way of putting them to a direct empirical test. But I wish someone would think up such a way. Let me also add that he who does not like this picture of different grammars for the same language lying peacefully side by side somewhere in our brain may look upon the situation as one system of grammatical rules (the set-theoretical union of the two sets discussed so far) being stored in the brain, allowing the same sentence to be analysed and understood in two different ways with two different degrees of complexity, with a control element deciding which rules to apply in a given case and allowing the switch to other rules when trouble strikes. That there are syntactically ambiguous sentences has, of course, always been well known, but I am speaking at the moment about a particular kind of syntactic ambiguity, one that has no semantic ambiguities in its wake, but where the difference in the analysis still creates a difference in comprehensibility. At this point it is probably worthwhile to present an extremely simple example.
The English sentence 'John loves Mary' can be analysed (and has been analysed) in two different ways, each of which will be expressed here in a labelled bracketing which has been simplified for our present purposes:

(S (NP John) (VP (Vt loves) (NP Mary)))
(S (NP John) (Vt loves) (NP Mary))

These analyses correspond to the following two 'grammars', G1 and G2:

G1: S -> NP + VP
    VP -> Vt + NP
    NP -> John, Mary
    Vt -> loves

G2: S -> NP + Vt + NP
    NP -> John, Mary
    Vt -> loves

or, if you prefer, they both correspond to the grammar G3, which is the set-theoretical union of G1 and G2 and consists therefore of just the rules of G1 plus the first rule of G2. (Both G1 and G2 are, of course, CF grammars; G1 is binary, but G2, and therefore also G3, is not.) Though the difference in structure assigned to this sentence by the two analyses is palpable, it is less clear whether this difference implies a difference in the intuitive degree of syntactic complexity, and if so, according to which analysis the sentence is more complex. As a matter of fact, good reasons can be given for both views: in the first analysis, more rules are applied, but each rule has a particularly simple form; in the second analysis, fewer rules are applied, but one of them has a more complicated form. This situation seems to indicate that we have more than one explicandum before us, more than one notion which, in the pretheoretical stage, is entitled to be called 'syntactic complexity'.

There are still more aspects to the intuitive uses of 'syntactic complexity', but perhaps it is time to turn directly to the explicata which, hopefully, will take care of at least some of these aspects.

To follow Chomsky once again [35] rather closely, we might introduce the terms 'depth of postponed symbols' and 'node/terminal-node ratio' to denote the following two relevant measures: the first for Yngve's well-known depth measure, which, I trust, will again be explained in his lectures at this Institute, the second for a new concept which has not yet been discussed in the literature. Both measures refer to the tree representing the sentence and are therefore applicable only to such grammars as assign a tree structure to each sentence generated by them.

If we assign, in the Yngve fashion, numbers to the nodes and branches (with the branches leading to the terminal symbols left out), we see that the greatest number assigned to any of the nodes of the first tree is 1, so that its depth of postponed symbols is also 1, whereas the corresponding number for the second tree is 2. (Each node number is equal to the sum of the number assigned to the branch leading to this node and the number of the node from which the branch comes.) On the other hand, the total number of nodes of the first tree is 5 and the number of its terminal nodes is 3, so that its node/terminal-node ratio is 5/3, whereas the corresponding numbers for the second tree are 4, 3, and 4/3 respectively.

There are at least three more notions that are entitled to be considered as explicata for other aspects of syntactic complexity. The one that has been most studied is the degree of nesting. The reasons for the attention given to it are that it has been known for a long time that a highly nested sentence causes difficulties in comprehension and, more recently, that it creates troubles for mechanical syntactic analysis.
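Before turning to the explication of nesting, the two measures just introduced can be made concrete in a few lines. The sketch below is my own restatement, not Chomsky and Miller's: trees are written as a label together with a list of subtrees, and the two analyses of 'John loves Mary' reproduce the figures given above.

    # A tree is (label, [subtrees]); a word is a leaf with no subtrees.
    def leaf(w): return (w, [])

    t1 = ('S', [('NP', [leaf('John')]),
                ('VP', [('Vt', [leaf('loves')]), ('NP', [leaf('Mary')])])])
    t2 = ('S', [('NP', [leaf('John')]), ('Vt', [leaf('loves')]),
                ('NP', [leaf('Mary')])])

    def depth(tree, carried=0):
        # Yngve depth: the branches of each node are numbered right to left
        # starting from 0, and a node's number is the sum along its path.
        label, kids = tree
        if not kids:
            return carried
        return max(depth(kid, carried + len(kids) - 1 - i)
                   for i, kid in enumerate(kids))

    def nodes_and_terminals(tree):
        # Counts labelled nodes and terminal (word) nodes separately.
        label, kids = tree
        if not kids:
            return 0, 1
        counts = [nodes_and_terminals(kid) for kid in kids]
        return 1 + sum(n for n, t in counts), sum(t for n, t in counts)

    for t in (t1, t2):
        print(depth(t), nodes_and_terminals(t))   # t1: 1 (5, 3); t2: 2 (4, 3)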
One rough explication of this notion (there are others) might run as follows, again relative to tree grammars: the degree of nesting of a labelled tree is the largest integer m such that there exists in this tree a path through m+1 nodes N_0, N_1, ..., N_m, with the same or different labels, where each N_i (i ≥ 1) is an inner node in the subtree rooted in N_{i-1}. The same degree of nesting is also assigned to the terminal expression as analysed by this tree.

A special case of nesting is self-embedding, to whose importance Chomsky has called attention. In order to define the degree of self-embedding of a labelled tree, one has only to replace, in the above definition of degree of nesting, the phrase 'with the same or different labels' by the phrase 'each with the same label'. (Other definitions are again possible.)

To present one more stock example: the tree of a sentence of the type 'John whom Ann hates loves Mary', with the relative-clause rule below applied four times over, has a degree of nesting (equal, in this particular case, to its degree of self-embedding) of 4. (Its depth, incidentally, is 7 and its node/terminal-node ratio is 21/15 = 7/5.) Though this tree could have been derived from a grammar G4 differing from G3 only by containing the additional rules

NP -> NP + Ra + NP + Vt
Ra -> whom

there are very good reasons why sentences of this type and their ramifications should, in the framework of the whole English language, not be regarded as being produced by a CF grammar containing G4 as a proper part, but rather by a transformational grammar built upon a CF grammar of English containing, in addition, a transformation rule, which I shall not specify here, allowing the derivation of NP1 + Ra + NP3 + Vt + Vt + NP2 from NP1 + Vt + NP2 and NP3 + Vt + NP1. (There is no need to stress that all this is only a very rough approximation to the incomparably more refined treatment which a full-fledged transformational grammar of English would require. The transformational rule, for instance, should refer to the trees representing the strings under discussion rather than to the strings themselves.) It is worth noticing that the node/terminal-node ratio (7/5) of the resulting tree is smaller than the ratios (5/3) of the underlying trees.

The fifth aspect of syntactic complexity is, then, transformational history. I am, of course, not using the term 'measure' now, because it is very doubtful whether measures can be usefully assigned to this concept. So far, no attempt in this direction has been made. I shall, therefore, say no more about this notion here.

It is not particularly difficult to develop these five notions, and many more could be thought of. The decisive questions are twofold: what are the exact formal properties of the various notions, and, perhaps even more important, what is their psychological reality, to use a term of Sapir's? In general, one would tend to require that if one sentence is syntactically more complex than another, then, ceteris paribus, it should, perhaps only on the average, create more difficulties in its comprehension. What can we say on this point?

Well, very little, and nothing so far under controlled experimental conditions. Highly nested constructions just don't occur at all in normal speech and very rarely in writing, with the notable exception of logical or mathematical formulae. Their syntactic structure can be grasped only by using extraordinary means, such as going over them more than once and using special marks for pairing off expressions that belong together but between which other expressions have been nested.
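Before looking at a concrete formula of that kind, the rough definition of nesting given above can also be put into code, in the same tree representation as before. This again is only my own sketch: an 'inner' node is taken to be one with leaf material both to its left and to its right within the subtree in question, and the label-matching refinement that turns nesting into self-embedding is omitted.

    def leaf(w): return (w, [])   # as in the previous sketch

    def leaves(tree):
        label, kids = tree
        return [label] if not kids else [w for k in kids for w in leaves(k)]

    def nesting(tree):
        # Longest chain N_0, N_1, ..., N_m in which each node is an inner node
        # of the subtree rooted in the previous one; returns m, with N_0 = tree.
        total, best = len(leaves(tree)), 0

        def walk(node, offset):
            nonlocal best
            width = len(leaves(node))
            if node is not tree and 0 < offset and offset + width < total:
                best = max(best, 1 + nesting(node))   # node is inner in tree
            for kid in node[1]:
                walk(kid, offset)
                offset += len(leaves(kid))

        walk(tree, 0)
        return best

    def rel(np1, np2, vt):
        # The G4 rule NP -> NP + Ra + NP + Vt, applied once.
        return ('NP', [np1, ('Ra', [leaf('whom')]), np2, ('Vt', [leaf(vt)])])

    # 'John whom Ann hates loves Mary': nesting 1; one more relative clause
    # inside the first ('John whom Ann whom Paul loves hates loves Mary')
    # raises it to 2, and so on.
    s1 = ('S', [rel(('NP', [leaf('John')]), ('NP', [leaf('Ann')]), 'hates'),
                ('Vt', [leaf('loves')]), ('NP', [leaf('Mary')])])
    s2 = ('S', [rel(('NP', [leaf('John')]),
                    rel(('NP', [leaf('Ann')]), ('NP', [leaf('Paul')]), 'loves'),
                    'hates'),
                ('Vt', [leaf('loves')]), ('NP', [leaf('Mary')])])
    print(nesting(s1), nesting(s2))   # 1 2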
A formula such as

[[p ⊃ [q ⊃ [[r ⊃ [s ⊃ t]] ⊃ u]]] ⊃ v]

is certainly not a very complex one among the formulae of the propositional calculus, as they go, but testing its well-formedness would either require some artificial aids, such as the use of a pencil for marking off paired brackets, or the acquisition of a special algorithm based upon a particular counting procedure, or else just an extraordinary (and unanalysed) effort and concentration. It is doubtful whether any effort, without external aids, would suffice to determine that the 'literal' English rendition of the formula,

'If if p then if q then if if r then if s then t then u then v',

is well-formed, when one listens to such a sentence without prior warning.

It is interesting that in order to explain our difficulties in either uttering or grasping the structure of such sentences we need assume nothing more than that we are finite automata with a finite number of internal states. For Chomsky [36], in effect, has shown that when the number of these states is some number n, then, relative to a given grammar G, there exists a number m (depending on n) such that this device will not be able to correctly analyse the syntactic structure of all sentences whose degree of nesting is greater than or equal to m. (As a matter of fact, Chomsky showed this for degree of self-embedding rather than for nesting, but the proof can be trivially extended to this case.)

On the incomparably stronger assumptions that natural languages (such as English) can be adequately determined by tree grammars, that human speakers of such a language have at least one such tree grammar stored in their permanent memory, that they utter the sentences of these languages by going through (one of) their tree(s) 'from top to bottom and from left to right', and that all storage required for this process is done in an immediate memory of the push-down store form containing, say, n cells, we arrive at the conclusion that only sentences whose depth of postponed symbols is no higher than n can be uttered by such speakers.

Now, though Yngve continues to believe that there exists good evidence for the soundness of these assumptions, Chomsky has on various occasions [37], [38] expressed his doubts as to this evaluation of the evidence. He believes that most of the positive evidence invoked by Yngve can already be explained on the basis of the weaker assumption mentioned above, whereas he mentions the existence of other evidence which tends to refute Yngve's stronger assumptions though not his own weak one. I have no time to go further into this controversy. Let me only state that Chomsky's arguments seem to me to be the more conclusive ones. This, of course, by no means diminishes the credit due to Yngve for having been the first to raise certain types of questions that were never asked before, and for having ventured to provide for them interesting answers, though they may well turn out to be the wrong ones.

It is time now to say at least a few words on the 'Wittgensteinian Thesis'. In one sense, this thesis is, of course, perfectly true: after all, all of us do manage to say most of what we have to say in sentences of a low degree of nesting and, if really necessary, could rephrase even those things for whose expression we do use highly nested strings, such as occur in many mathematical formulae, in syntactically less complex ways, which will presently be investigated. But in this sense, the thesis is no more than a rather uninteresting truism.
What Wittgenstein, Broad and the innumerably many other people who invoked this slogan doubtless had in mind was that most, if not all, of the things that are expressed (usually, by such and such an author, by such and such a cultural group, etc.) by sentences with high syntactic complexity could have been expressed by sentences of lower syntactic complexity, without any compensation. In this interesting interpretation, Wittgenstein's Thesis seems to me wrong, almost demonstrably so. I would, on the contrary, want to express and justify, if not really demonstrate, the following 'Anti-Wittgensteinian Thesis': for most languages, and for all interesting (sufficiently rich) ones, there are things worth saying which cannot be expressed in sentences with a low degree of syntactic complexity without a loss being incurred in other communicationally important respects.

Though a fuller justification will have to be postponed for another occasion, let me make here the following remarks. Consider one of the simplest calculi ever invented by logicians, the so-called implicational propositional calculus [39, p. 140]. We are here interested only in its rules of formation, not in its axioms or theorems.

The rules of formation of one of the many formulations of this calculus are as follows: its primitive symbols are the three improper symbols '[', '⊃', ']' and the infinitely many proper symbols p1, p2, p3, .... Its rules of formation are just the following two:

F1. Each proper symbol is well-formed (wf).
F2. Whenever α and β are wf, so is [α ⊃ β]

(with the understanding that nothing is wf unless it is so by virtue of F1 and F2). There exists no bound to the degree of nesting of the wf formulae of this calculus, as is obvious from the series of wf formulae

p1, [p1 ⊃ p2], [p1 ⊃ [p2 ⊃ p3]], [p1 ⊃ [p2 ⊃ [p3 ⊃ p4]]], ...

It is less obvious, but can at any rate be rigorously proved, that for none of these formulae does there exist in the calculus another formula which is logically equivalent to it but has a lesser degree of nesting. (The term 'logically equivalent' needs explanation in our context, but I shall nevertheless not provide it. For logicians the required explanation would be rather obvious; for non-logicians it would take too much time.) Wittgenstein's Thesis does not hold in this calculus.

Consider now the (logically uninteresting) conjunctional propositional calculus, whose rules of formation are analogous to those of the implicational calculus, except that '⊃' is to be replaced by '∧' in both the list of improper symbols and F2. Here, too, it can be shown, by a somewhat more complicated argument, that for each n there exist wf formulae whose degree of nesting is higher than n such that they are not logically equivalent to any wf formula with a lesser degree of nesting.

But there exists the following interesting difference between the two calculi: the conjunctional calculus, as presented here, looks unduly complex. Since conjunction is 'associative', i.e. since [p1 ∧ [p2 ∧ p3]] and [[p1 ∧ p2] ∧ p3] are equivalent, the brackets fulfil no semantically important function within the calculus and could as well have been omitted from the list of improper symbols, with a corresponding simplification in rule F2. In this version, all wf formulae would have had a degree of nesting of 0, as can easily be verified!
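The verification, and the well-formedness test itself, are mechanical. The following sketch is mine, with the ASCII '>' standing in for '⊃': it finds the main connective of a candidate formula by bracket counting - the pairing-off procedure mentioned earlier - and recurses according to F1 and F2.

    # Well-formedness per F1/F2: a proper symbol p1, p2, ... is wf;
    # if a and b are wf, so is '[' + a + '>' + b + ']'.
    def is_wf(s):
        if s.startswith('p') and s[1:].isdigit():
            return True                       # F1
        if not (s.startswith('[') and s.endswith(']')):
            return False
        depth = 0
        for i, c in enumerate(s):
            depth += (c == '[') - (c == ']')
            if c == '>' and depth == 1:       # main connective, found by counting
                return is_wf(s[1:i]) and is_wf(s[i+1:-1])   # F2
        return False

    print(is_wf('[[p1>[p2>p3]]>p4]'))   # True
    print(is_wf('[p1>p2>p3]'))          # False: the brackets are obligatory

In the bracket-free conjunctional version there is no main connective to look for: a formula is simply a flat string of conjuncts, which is the degree-of-nesting-0 parse spoken of above.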
True enough, all formulae with at least two conjunction signs would have become syntactically ambiguous, but, in this particular calculus, syntactic ambiguity would not have entailed semantic ambiguity. Syntactic simplification could have been achieved, and in the most extreme fashion, without any semantic loss whatsoever! This is by no means the case for the implicational calculus. Implication is not associative, so that the syntactic ambiguity introduced by omission of brackets would have entailed semantic ambiguity, a price no logician could possibly be ready to pay in this connection, though again all resulting formulae would have got a degree of nesting of 0. (As for the conjunctional calculus, as soon as it is combined with some other calculus, say the disjunctional calculus, omission of brackets would again entail semantic ambiguity, since, say, [p1 ∧ [p2 ∨ p3]] and [[p1 ∧ p2] ∨ p3] are not equivalent.)

For those of you who have heard of the so-called Polish bracket-free notation, let me add the following remark. One might have thought that the nesting (which in this particular case is also self-embedding) is due to the use of brackets for scoping purposes, in accordance with standard mathematical usage, since it seems that the brackets 'cause' the branchings to be 'inner' ones, and might therefore have cherished the hope that a bracket-free notation would eliminate, or at least reduce, nesting. But this hope is illusory. Inner branching, thrown out through the front door, would re-enter through the back door. With 'C' as the only improper symbol and F2 changed to 'Whenever α and β are wf, so is Cαβ', expansion of α (though not of β) causes inner branching. Notice further that in Polish-notation calculi you cannot introduce syntactic ambiguity, harmless or harmful, even if you want to, by omitting symbols, since there are no special scoping symbols to omit.

As far as natural languages are concerned, the situation is much more confused. In speech, it seems that we can express distinctions of scope up to a degree of nesting of 3, anything beyond that becoming blurred, whereas in writing things are still worse, punctuation marks not being consistently used for scoping purposes and anyhow not being adequate for this task, with the result that syntactic ambiguities abound, which may or may not be reduced through context or background knowledge. Sometimes, when the resulting semantic ambiguity becomes intolerable, extraordinary measures are taken, such as using scoping symbols like parentheses in ways ordinarily reserved for mathematical formulae only, indentation at various depths, ad hoc abbreviations, etc.

Natural languages have many, so to speak, built-in devices for syntactic simplification. These devices, and their effectiveness, are badly in need of further study, after the extremely interesting beginnings by Yngve [30].

Certain 'simplifications', beloved by editors who are out to split up involved sentences, may well turn out to be spurious and perhaps even downright harmful, in spite of appearances. An editor who rewrites an author's 'Since p and q and r, therefore s' (where you have to imagine the letters p, q, r, and s replaced by sentences which on occasion will themselves have considerable syntactic complexity) as 'p. q. r. Therefore s.' is probably under the illusion that he has simplified something and therefore improved something.
Now, he has doubtless replaced one long sentence with a degree of syntactic complexity of, say, n, by four shorter sentences, each with a degree of syntactic complexity of at most n-1, and has even used three words less for this purpose. But there is a price connected with this procedure, even a twofold one. First, the word 'therefore' has become semantically much more indefinite. What does s follow from? 's, for r.', or 's, for q and r.', or 's, for p and q and r.'? (And this might not be all. p will be preceded by other sentences, so that, at least from a purely syntactic point of view, it is totally indefinite how far back one has to go in the list of possible antecedents to s.) Secondly, even if the exact antecedent is settled, in order to understand the full content of the argument and to judge its validity, the reader (or listener) will have to recall, or re-read, the antecedent (which, so let us speculate, might have been removed into some larger, more permanent and less easily accessible storage than the immediate memory it was occupying during the syntactic processing), with the result that the overall economy of the 'improvement' is, to say the least, very doubtful. There is at least a good chance that the total effort required of the receiver of the message will be higher in the case of the split-up sentences than with regard to the original sentence, though it might well be easier on the sender, had he wanted to express himself originally in this less definite way. (I used to teach geometry in high school and still remember the type of student who, when required to demonstrate a certain theorem, would start rattling off a list of congruences or inequalities, as the case might be, and finish with a triumphant 'Therefore ...' or 'From this it follows that ...'. And he was not even wrong, because from his list, and in accordance with certain theorems already proved, his conclusion did indeed follow. Except that he left the task of finding out how, in detail, the conclusion followed from the premises to the listeners, including myself in that case, and provided no indication of the fact that he himself knew the details.)

An investigation, recently begun in Jerusalem, seems to lead to interesting results as to the mutual relationships between (semantic) equivalence among the sentences of a given formal system, the (syntactic) simplicity of these sentences, and the existence of a recursive simplification function for this system. The results will be published in a forthcoming Technical Report. Let me only mention here one of the more significant results (I hope to nobody's particular surprise): the existence of a syntactic simplification algorithm is rather the exception, and the proof of such existence, where possible at all, will in general require that the system fulfil fairly tough conditions. The details, unfortunately, require a good knowledge of recursive function theory and shall therefore not be given here.

Language and Speech; Theory vs. Observation in Linguistics

As already mentioned in the opening sentence of Section 1, many of us believe that during the last few years we have gained valuable insights into the relationship between theory and observation in science. I myself have already tried on a few occasions to apply these insights to certain controversial issues of modern linguistics [40], [41]. I would now like to do the same with regard to the central term of linguistics, namely 'language' itself.
As you will soon realize, this methodological point is of vital importance for the so-called 'research methodology' in MT, and insufficient understanding of it has already caused superfluous controversies.

The term 'language' has, of course, been 'defined' innumerably many times, but the fact that these definitions are usually mutually inconsistent, at least at first sight, has equally often been forgotten and neglected, so that seemingly contradictory statements about 'language' were usually interpreted as inconsistent statements about the same explicatum (in Carnap's terminology) rather than as consistent statements about different explicata.

You will, for instance, find in the literature that language has often been treated as a set of sentences (or utterances, which two terms will not be distinguished for the moment). This, of course, is an abstraction from ordinary usage, and has been recognized as such. Leaving aside for our present purposes the discussion of how good and useful this abstraction is, let me point out that the characterization can be understood (and has been understood) in at least the following five senses:

(1) A given set of utterances, such as those recorded on a certain tape by so-and-so on such-and-such an occasion, or of inscriptions, found on such-and-such a tablet. Such sets are, of course, finite, and most of them contain relatively few members. They can be, and sometimes are, represented as lists, under certain transcriptions. As a matter of fact, such sets are only exceptionally called 'languages', the more usual term being 'corpus'.

(2) The set of all utterances (spoken and/or written) made until July 1962, say, by the members of such-and-such a community during their lifetime until then. This set is certainly finite, too, but cannot, in general, be presented in list form and is rather indefinite, due to the indefiniteness of the term 'community' and for dozens of other obvious reasons, such as those centring around idiolects, dialects, bilingualness, not to forget the vagueness of 'utterance' itself.

(3) The set of all utterances, past, present, and future, made by members of such a community. This set differs from that treated under (2) only in having a still greater degree of indeterminacy.

(4) The set of all 'possible' utterances of a certain kind. The notion 'possible' occurring in this characterization is notorious for its complexities and philosophical perplexities, and I trust I shall be forgiven if I don't go any deeper into this hornet's nest here. Under most conceptions, this set will turn out to be infinite.

(5) The set of all 'sentences' (well-formed expressions, grammatical expressions, etc.). (For recent discussions of this and related hierarchies see, e.g., Quine [42] and Ziff [43].)

It is true, of course, that (1) is a subset of (2), which again is a subset of (3), but this is not the crucial point. Much more important is that the term 'utterance' occurring in their characterization changes its meaning in the transition from (3) to (4), becoming less observational and more theoretical. At the same time, there is a change from a concrete, physical, three- or four-dimensional entity, a 'token', in Peirce's terminology, to an abstract entity, a 'type'. [When Paul and John say 'I am hungry.', we have two members of the set (1), since they uttered two different 'utterance-tokens', but only one member of the set (4), since these tokens are replicas of the same utterance-type.]
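(In programming terms, the token/type distinction is just the distinction between occurrences and equal values; a trivial sketch of the bracketed example:

    # Two utterance-tokens (distinct events), one utterance-type (one string).
    tokens = [('Paul', 'I am hungry.'), ('John', 'I am hungry.')]
    types = {text for speaker, text in tokens}
    print(len(tokens), len(types))   # 2 tokens, 1 type

The set of type (1) collects the occurrences; the sets of types (4) and (5) collect the values.)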
The elements of set (5), finally, are so overtly theoretical that the term 'utterance' seemed definitely inappropriate for them, and I had to shift to the term 'sentence'. Though these two terms, in ordinary usage as well as in the usage of most linguists, are almost synonymous, I have already suggested once before [41] that we distinguish artificially between them qua technical terms and use 'utterance' for observational entities and 'sentence' for theoretical ones (with the adjective 'possible' performing as a category-shifting modifier, an extremely important and not fully analysed semantical fact). That 'sentence' is ordinarily used in both these senses, as is 'word' and many other terms of this area, is, of course, one of the major sources of confusion and futile controversies.

Sets (2) and (3) have little linguistic importance. Because of their indefiniteness it is difficult to make interesting statements about them. Sets (4) and (5) - in all rigor I should have spoken about the classes of sets (4) and (5) - are by and large identical, at least under certain plausible interpretations of 'possible', the characterization of (4) being what Carnap [44] called 'quasi-psychologistic', while (5) is presumably characterized in an overtly and purely syntactical fashion.

In many linguistic circles, it has been standard procedure to make believe that linguists, in their professional capacity, are dealing with sets of type (1) [or of types (2) or (3)]. This fiction gave their endeavour, so they believed, a closeness-to-earth, an operational solidity which they were anxious not to lose. In fact, they all, with hardly an exception, dealt with sets of types (4) or (5). All the talk about 'corpora' was only lip-service. Today we know that no science worth its salt could possibly stick to observation exclusively. Whoever is out to describe and nothing else will not describe well. Theorizare necesse est. Though I don't think that it is necessary, or even helpful, to say that every description already contains theoretical elements - as some recent methodologists are fond of stressing - it must be said that theorophobia is a disease, fashionable as it might be. All scientific statements must surely be connected with observations, but this connection can, and must, be much more oblique than many methodological simplicists believe.

Returning from these generalities to our present problem of the relation between language and speech - with MT hovering in the back as a kind of proving ground - it should be superfluous to insist that the proper business of the theoretical linguist is to describe not the actual linguistic performance of some individual (or of so many individuals) - this 'natural history' stage being of limited interest only - but his linguistic competence (or that of a certain community of individuals), to use a dichotomy that has recently been much stressed by Miller and Chomsky [35]. Now competence is a disposition, perhaps even a higher-order disposition. To be a competent native speaker of English means not just that one has performed in the past in a certain way, nor even that one will (in all likelihood) perform in a certain way when presented with certain stimuli, but rather that one would perform, or would have performed (in all likelihood), in a certain way, were one to be presented (or had one been presented) with certain stimuli - in addition to many other things.
I know perfectly well that no competent English speaker will ever in his life be presented with a certain utterance consisting of a few billion words, say of the form 'Kennedy is hungry, and Khrushchev is thirsty, and De Gaulle is tired, ..., and Adenauer is old.', going over the whole present population of the world, but I know, and everybody else knows perfectly well, that were such a speaker, contrary to fact, to be presented with such an utterance, he would understand it as a perfect specimen of an English sentence.

There is no mechanical procedure to move from someone's performance to his competence, just as there is no mechanical procedure to move from any number of physical observations to a physical theory. But just as this fact does not free the physicist from his professional obligation to develop theories, so there is nothing to absolve the linguists from presenting theories of linguistic competence. Testing the validity of these theories will, again as in the other theoretical sciences, in general proceed not in any straightforward way but by standard indirect methods. That John is competent to understand a certain ten-billion-word sentence will not be tested by presenting John with a token of this sentence, but, as we all know, by entirely different, oblique methods. For the above sentence, for instance, it would suffice to find out that John understands such sentences as 'Paul is hungry.' and 'David is thirsty.' as well as that he has mastered the rule that whenever φ and ψ are sentences, φ followed by 'and' followed by ψ is a sentence. This latter finding might not be a very simple one or a very secure one, but we do often claim to have found out just such things.
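The recursive character of this 'and' rule is easy to make concrete. The following sketch is mine, not the author's - the two atomic sentences and the function names are invented for illustration - and it shows why the rule licenses sentences of unbounded length (competence) even though any actual speaker only ever produces finitely many, finitely long utterances (performance):

```python
# A minimal sketch of the conjunction rule discussed above:
# whenever p and q are sentences, p + ' and ' + q is a sentence.
# The atomic sentences below are illustrative, not from any real grammar.

atomic = ["Paul is hungry", "David is thirsty"]

def conjoin(p: str, q: str) -> str:
    """Apply the rule once: two sentences yield a new sentence."""
    return f"{p} and {q}"

def sentence_with_conjuncts(n: int) -> str:
    """Build a grammatical sentence with n conjuncts. n is unbounded in
    principle (competence), though no speaker ever utters a large n
    (performance)."""
    s = atomic[0]
    for i in range(1, n):
        s = conjoin(s, atomic[i % 2])
    return s

print(sentence_with_conjuncts(3))
# Paul is hungry and David is thirsty and Paul is hungry
```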
One often hears, in certain philosophical circles as well as among people interested in applied linguistics, statements to the effect that natural languages have no grammar. These people are aware of the paradoxical character of such statements, but nevertheless insist that they are true, and even trivially so. Every grammar, so they say, determines a certain fixed, 'static', set of sentences. But a natural language is a living affair, 'dynamic', constantly in change, and it is utterly impossible that the set of sentences should coincide with the set of utterances, as it should for an adequate grammar. It should now be obvious where the fallacy lies in this argument: in the unthinking identification of sentences and utterances, and in the complete misunderstanding of the relation between theory and observation. It is as if one wanted to argue that natural gases obey no physical laws, since these laws apply only to the fictitious 'ideal gases'. (Incidentally, such statements have indeed been made by obscurantists at all times.) To understand the exact relationship between the laws of gases of theoretical physics and the behaviour of real gases requires a lot of methodological sophistication, and no less should be expected for the understanding of the exact relationship between the grammatical rules of an artificial language and the utterances made by the members of the community speaking this language. Any naive identification will quickly result in paradox, futile discussions, and irrational distrust of theory.

That the question of the adequacy of a given grammar is much more complex than ordinarily assumed does not mean that this question is a pointless one. On the contrary, since there exists no simple criterion for deciding which of two proposed grammars is 'better', more adequate than the other, the problem of finding any criterion, however partial and indirect, becomes of overwhelming importance. The fact is, of course, that extremely little is known here beyond programmatic declarations. We know that 'grammatical' should not be identified with 'comprehensible', nor is one of these concepts subsumed under the other, but neither are these two concepts incommensurable. In this connection we have the large complex of questions arising around degrees of grammaticalness, deviancy, oddness, and anomaly, all of vital importance to linguists and philosophers alike. Some of you know the valiant beginnings made toward an investigation of this problem by Chomsky, Ziff [43] and others, and it will, I hope, not deter you from following in their footsteps if I state, rather dogmatically, that these attempts are woefully inadequate, while admitting that I have nothing better to offer for the moment.

As soon as it is understood that competence and performance are to be kept clearly apart, one will no longer feel obliged to impose upon, say, the English language a grammar which will not allow the generation of sentences of a higher degree of syntactic complexity than some small number, say 4, according to one or another of the measures discussed in the previous lecture. True enough, 'corresponding' utterances are not normally found in speech or writing, and if artificially produced will not be grasped unless certain artificial auxiliary means are invoked. These limitations of human performance are doubtless of vital importance; they have to be clearly stated and investigated, and should, sooner or later, be backed up by some neurophysiological theory. They are of equal importance for the programming of machines which are charged with determining the syntactic structure of all sentences of any given text of a given language. That sentences of a high degree of complexity can be disregarded for this purpose, because of their extreme rarity or just plain non-occurrence, may allow an organization of the computer's working space that could make all the difference between the economically feasible and the economically Utopian. But in order to do all this, it is by no means necessary to impose these restrictions on the grammar of English as such. Nothing is gained, and much is lost. Not only will certain arbitrary-looking restrictions on the recursive generation rules have to be imposed, thereby increasing the complexity of the grammar to a degree that can hardly be estimated at present, but the procedure is self-defeating. It is done in the name of 'sticking to the brute facts', but doing so in such a crude way forces the adherents of this approach to disregard other brute facts, such as that, with the aid of certain auxiliary means, the syntactic structure of English word sequences of a degree of syntactic complexity of 5, or of 100 for that matter, will be perfectly grasped. Since these word sequences are not English sentences, according to the grammarians of performance, how is it that they are understood, and what is the language they belong to?

This does not mean, of course, that restrictions of performance will not reflect themselves in the grammar.
I am convinced, for example, that Professor Yngve made a remark full of insight when he noticed and stressed the fact that by changing its mood from the active to the passive, the syntactic complexity of a given sentence can be reduced. And I have no objection to formulating this insight in the form that there exists a passive in English (and the same or other devices in other languages) in order to allow, among other things, the formulation of certain thoughts in sentences of a lower degree of complexity than would otherwise have been possible. But trying to obliterate the distinction between competence and performance, to say it for the last time, is only a sign of confusion and will breed further confusion. The sooner we get rid of these last traces of extreme operationalism, the better for all of us, including MT research workers.

In order to describe and explain the facts of speech exhaustively and revealingly, a full-fledged, formal theory of language is needed, among many other things. Philosophical prejudice aside, there is no particular merit in keeping this theory 'close to the facts', in assuming that the rules of correspondence which connect the theory (in the narrower sense of the word) with observation will have a particularly simple form. Experience from other sciences should have taught us that such an assumption is baseless. Physics, for example, has reached its present heights only because the free flight of fancy, 'the free play of ideas', has not been fettered by a narrow conception of scientific methodology. True enough, the particular logical status of these rules of correspondence has still not been deeply enough investigated, and I fully understand the attitude of those who, for this reason, regard this whole business with suspicion and are afraid that the free flight of fancy will reintroduce uncontrollable metaphysics into science in general and linguistics in particular. But I hope that the necessary controls will be developed and better understood in the future, and that in the meantime one will manage somehow. Occasional metaphysical aberrations are probably less damaging in the long run than the curtailment of creative scientific imagination.

Let me stress, in this connection, that the extensive use of symbolism in the formulation of generative grammars has induced many linguists to accuse the authors of these formulations of having lost all connection with empirical science and of indulging instead in some mathematical surrogate. I hope that it is now perfectly clear that this accusation is baseless. A formal grammar of English is an empirical theory of the English language, and its symbolic formulation, while it increases its precision and therefore its testability, by no means turns it into a mathematical theory. When, according to a certain grammar, 'Sincerity admires John.' turns out not to be a (formal) sentence whereas this very sequence is considered by someone to be an (intuitive) sentence, then this grammar is to that degree inadequate to his intuitions. It should only be kept in mind that the determination of the intuitive sentencehood of 'Sincerity admires John.' is by no means such a straightforward affair of observation, experimentation and statistics as some people believe.
The notion of 'intuitive sentence' is highly theoretical itself (though without the benefit of a complete theory being formulated to back it up, which fact is, of course, the whole crux of this peculiar modifier 'intuitive'), and observations on the utterances of people, or on their reactions to utterances, will never by themselves settle in any clear-cut way the question of the sentencehood of a particular word sequence. This is as it should be, and only wishful thinking and naive methodology make people believe otherwise. Confirmation and refutation of linguistic theories, as of theories in any other science, is not so simple an operation as one is taught to believe in high school. But the complexity of refutation does not make a linguistic theory empirically irrefutable and therefore does not turn it into a mathematical theory.

Why machines won't learn to translate well

My arguments against the feasibility of high-quality fully-automatic translation can be assumed to be well known in this audience. I have gone through them often enough in lectures and publications. I also have the impression that, after occasionally rather strong initial negative reactions, a good number of people who have been active in the field of MT for some years tend more and more to agree with these arguments, though they might prefer a more restrained formulation. On the other hand, the number of research groups which have taken up MT as their major field of activity is still on the increase, and by now there is hardly a country left in Europe and North America which does not feature at least one such group, with Japan, China, India and a couple of South American countries joining them, for good measure. Though a certain amount of involvement in MT, and in particular in its theoretical aspects, is certainly helpful and apt to yield fresh insights into the workings of language, most of the work that is at present going on under the auspices of MT seems to me to be a wanton expenditure of research money that could be put to better use in other fields and, still worse, a deplorable waste of research potential.

The continued interest in MT is sometimes defended on the grounds that though it is indeed extremely unlikely that computers working according to rigid algorithms will ever produce high-quality translations, there still exists a possibility that computers with considerable learning ('self-organizing') abilities will be able, through training and experience, to improve their initial algorithms and thereby constantly improve their output until adequate quality is achieved. I myself mentioned this possibility in some prior publications but refrained from evaluating it, regarding such an evaluation as premature at the time [15], [45].

During the last two years, however, while going through the pertinent literature once more and pondering over the whole issue of artificial intelligence, I came to more radical conclusions, which I would like to set out and defend here. Today, I am convinced that even machines with learning abilities, as we know them today or foresee them according to known principles, will not be able to improve the quality of the translation output by much.

For this purpose, let us notice once more the obvious prerequisites for high-quality human translation.
There are at least the following five of them, though deeper analysis would doubtless reveal more:

(1) competent mastery of the source language,
(2) competent mastery of the target language,
(3) good general background knowledge,
(4) expertness in the field, and
(5) intelligence (know-how).

(I admit, of course, that the last of these prerequisites, intelligence, is not too well defined or understood, and I shall therefore have to use it with a good amount of caution.)

All this was surely common knowledge at all times, and certainly known to all of us 'machine translation pioneers' a dozen years ago. I knew then that nothing corresponding to items (3) and (4) could be expected of electronic computers, but thought that (1) and (2) should be within their reach, and entertained some hopes that by exploiting the redundancy of natural language texts better than human readers usually do, we should perhaps be in a position to enable the computers to overcome, at least partly, their lack of knowledge and understanding. True enough, scientists (and almost everyone else) write their articles with a reader in mind who, in addition to having a good command of the language, has a general background knowledge of, say, college level, has so many years of study behind him in the respective field, and is intelligent enough to know how to apply these three factors when called upon to do so. But it could have been, couldn't it, that, perhaps inadvertently, they introduce sufficient formal clues in their publications to enable a very ingenious team of linguists and programmers to write a translation program whose output, though produced by the machine without understanding, would be indistinguishable from a translation done out of understanding? After all, cases are known of human translations that were done under similar conditions and were not always recognized as such.

Well, it could have been so, but it just didn't turn out this way. For any given source language, there are countless sentences to which a competent human translator will provide in a given target language many, sometimes very many, distinct renderings, which will sometimes differ from each other only by minor idiosyncrasies but will at other times be toto coelo different. The original sentence will very often be, as the standard expression goes, multiply ambiguous by itself, morphologically, syntactically, and semantically, but the competent human translator will render it, in its particular context, uniquely, to the general satisfaction of the human reader. The translator resolves these ambiguities out of the last three factors mentioned. Though it is undoubtedly the case that some reduction of ambiguity can be obtained through better attention to certain formal clues, and though it has turned out many times that what superficial thinking regarded as definitely requiring understanding could be handled through certain refinements of purely formal methods, it should by now be perfectly clear that there are limits to what these refinements can achieve, limits that definitely block the way to autonomous, high-quality machine translation.

Could not perhaps computers with learning capacity do the job? Let me say rather dogmatically that a close study of one of the most publicized schemes for the mechanization of problem solving, and a somewhat less detailed study of the whole field of Artificial Intelligence, has revealed an amount of careless and irresponsible talk which is nothing short of appalling and sometimes close to lunatic.
There is absolutely nothing in all this talk which shows any promise of being of real help in mechanizing translation. There is nothing to indicate how computers could acquire what the famous Swiss linguist de Saussure called, at the beginning of this century, the faculté de langage, an ability which is today innate in every human being, but which took evolution hundreds of millions of years to develop. Let nobody be deceived by the term 'machine language', which may be suggestive for other purposes but which has turned out to be detrimental in the present context. Surely computers can manipulate symbols if given the proper instructions, and they do it splendidly, many times quicker and safer than humans, but the distance from symbol manipulation to linguistic understanding is enormous, and loose talk will not diminish it.

Though certain electronic devices (such as perceptrons) have been built which can be 'trained' to perform certain tasks (such as pattern recognition) and indeed perform better after training than before, and though computers have been programmed to do certain things (such as playing checkers) and do these things better after a period of learning than before, it would be disastrous to extrapolate from these primitive exhibitions of artificial intelligence to something like translation. There just is no serious basis for such extrapolation. As to checkers, the definition of 'legal move' is extremely simple and is, of course, given to the computer in full. After a few years of work, the inventor of the checker-playing program [46] succeeded in formalizing a good set of strategies, so that the training had nothing more to achieve than to introduce certain changes in the rank-ordering of these strategies. There never was any question of training the computer to discover the rules of checkers, or to expand an incomplete set of rules into a complete one, or to add new strategies to those given it beforehand. But some people do talk about letting computers discover rules of grammar, or expand an incomplete set of such rules fed into them, by going over large texts and using 'induction'. Let me repeat: this talk is quite irresponsible, and 'induction' is nothing but a magic word in this connection. All attempts at formalizing what is believed to be inductive inference have completely failed, and inductive inference machines are pipe dreams even more than autonomous translation machines are.

Now children do learn, as we all know, their native language up to an almost complete mastery of its grammar by the time they are four or five years old. But by the time they reach this age, they have heard (and spoken) surely no more than a few hundred thousand utterances in their native language (only a part of which are good textbook specimens of grammatical sentences). If they succeeded in mastering the grammar, apparently 'by induction' from these utterances, why shouldn't a computer be able to do so? Even if we add the fact that these children were also told that so many word sequences were not grammatical sentences - whatever the form was by which they were given these pieces of instruction - could not the same procedure be mirrored for computers? Well, the answer to these two questions can be nothing but an uncompromising No.
The children are able to perform as splendidly as they do because, in addition to the training and learning, their brain is not a tabula rasa general-purpose computer but a computer which, after all those hundreds of millions of years of evolution mentioned before, is special-purpose structured in such a way that it possesses the unique faculté de langage which makes it so different from the brain of mice, monkeys, and machines. The fact that we know close to nothing about this structure does not turn the previous statement into a scholastic truism. Years of most patient and skilful attempts at teaching monkeys to use language intelligently succeeded in nothing better than making them use four single words with understanding, and monkeys' brains are in many respects vastly superior to those of computers. True enough, computers can do many things better than monkeys or humans - computing, for instance - but then we know the corresponding algorithms and know how to feed them into the computer. In some cases we know algorithms which, when fed into the computer, will enable it to construct for itself computing algorithms out of other data and instructions that can be fed into it. But nothing of the kind is known with respect to linguistic abilities. So long as we are unable to wire or program computers so that their initial state will be similar to that of a newborn human infant, physically or at least functionally, let's forget about teaching computers to construct grammars.

Let me now turn to the first two items. What is the outlook for computers to master a natural language to approximately the same degree as does a native speaker of such a language? By 'mastering a language' I now mean, of course, only a mastery of its grammar, i.e. vocabulary, morphology, and syntax, to the exclusion of its semantics and pragmatics. Until recently, I think, most of us who dealt with MT at one time or another believed not only that this aim was attainable, but that it would not be so very difficult to attain it, for the practical purpose at hand. One realized that the mechanization of syntactic analysis, based on this mastery, would lead on occasion to multiple analyses whose final reduction to a unique analysis would then be relegated to the limbo of semantics, but one did not tend to take this drawback very seriously. It seems that here, too, a more sober appraisal of the situation is indicated, and it is already gaining ground, if I am not mistaken.
More and more people have become convinced that the inadequacies of present methods of mechanical determination of syntactic structure, in comparison with what competent and linguistically trained native speakers are able to do, are due not only to the fact that we don't yet know enough about the semantics of our language - though this is surely true enough - but also to the perhaps not too surprising fact that the grammars which were in the back of the minds of almost all MT people were of too simple a type, namely of the so-called immediate constituent type, though it is quite amazing to see how many variants of this type came up in this connection.

Leaving aside the question of the theoretical inadequacy of immediate constituent grammars for natural languages, the following fact has come to the fore during the last few years: if one wants to increase the degree of approximate practical adequacy of such grammars, one has to pay an enormous price for this, namely a proliferation of rules (partly, but not wholly, caused by a proliferation of syntactic categories) of truly astronomic nature. The dialectic of the situation is distressing: the better the understanding of linguistic structure, and the greater our mastery of the language, the larger the set of grammatical rules we need to describe the language, the heavier the preparatory work of writing the grammar, and the costlier the machine operations of storing and working with such a grammar.

It is very often said that our present computers are already good enough for the task of MT and will be more than sufficient in their next generation, but that the bottleneck lies mostly in our insufficient understanding of the workings of language. As soon as we know all of it, the problem will be licked. I shall not discuss here the extremely dubious character of this 'knowing all of it', but only point out that the more we know about linguistic structure, the more complex the description of this structure will become, so long as we stick to immediate constituent grammars. It is known that in some cases transformational grammars are able to reduce the complexity of the description by orders of magnitude. Whether this holds in general remains to be seen, but the time has come for those interested in the mechanical determination of syntactic structure, whether for its own sake, for MT, or for other applications, to get out of the self-imposed straitjacket of immediate constituent grammars and start working with more powerful models, such as transformational grammars.

Let me illustrate with just one example: one of the best programs in existence, on one of the best computers in existence, recently needed twelve minutes (and something like $100 on a commercial basis) to provide an exhaustive syntactic analysis of a 35-word sentence [47]. I understand that the program has been improved in the meantime and that the time required for such an analysis is now closer to one minute. However, the output of this analysis is multiple, leaving the selection of the single analysis which is correct in accordance with context and background to other parts of the program or to the human post-editor. But there are other troubles with using immediate constituent grammars only for MT purposes. In a lecture to this Institute, Mr. Gross gave an example of a French sentence in the passive mood which could be translated into English only by ad hoc procedures so long as its syntactic analysis is made on an immediate constituent basis only.
The translation into English is straightforward as soon as the French sentence is first detransformed into the active mood. A grammar which is unable to provide this conversion, besides being scientifically unsatisfactory, will increase the difficulties of MT.

I would like to return to what is perhaps the most widespread fallacy connected with MT, the fallacy I call, in variation of a well-known term of Whitehead's, the Fallacy of Misplaced Economy. I refer to the idea that indirect machine translation through an intermediate language will result in considerable, even vast, economies over direct translation from source to target language, on the obvious condition that, should MT turn out to be feasible at all, in some sense or other, many opportunities for simultaneous translation from one source language into many target languages (and vice versa) will arise. I have already discussed on another occasion both the attractiveness of this idea and the fallaciousness of the reasoning behind it. Let me therefore discuss here at some length only what I regard to be the kernel of the fallacy.

The following argument has great prima facie appeal. Assume that we deal with ten languages, and that we are interested in translating from each language into every other, i.e. altogether ninety translation pairs. Assume, for simplicity's sake, that each translation algorithm - never mind the quality of the output - requires 100 man-years. Then the preparation of all the algorithms will require 9000 man-years. If one now designates one of these languages as the pivot language, then only eighteen translation pairs will be needed, requiring 1800 man-years of preparation - an enormous saving. True enough, translation time for any of the remaining seventy-two language pairs will be approximately doubled, and the quality of the output will be somewhat reduced, but this would be a price worth paying. (In general, the argument is presented with some artificial language serving as the pivot. Though this move changes the appeal of the argument for the better - since this artificial pivot language is supposed to be equipped with certain magical qualities - as well as for the worse - since the number of translation algorithms now increases to twenty - I don't think that the substance of the following counterargument is thereby weakened.) However, in order to counteract even this deterioration, let us double our effort and spend, say, 200 man-years on the preparation of the algorithms for translating to and from the pivot language. We would still wind up with no more than 3600 man-years of work vs. the 9000 originally needed. Well?

The fallacy, so it seems to me, lies in the following: the argument would hold if the preparation of the ninety algorithms were to be done independently and simultaneously by different people, with nobody learning from the experience of his co-workers. This is surely a highly unrealistic assumption. If preparing the Russian-to-English and German-to-English algorithms were to take 100 man-years each, when done this way, there can be no doubt that preparing the German-to-English algorithm after completion (or even partial completion) of a successful Russian-to-English algorithm would take much less time, perhaps half as much. The next pair, say Japanese-to-English, will take still less time, and so on. All these figures being utterly arbitrary, I don't think we should go on bothering about the convergence of this series.
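For readers who want the arithmetic of the preceding paragraphs in one place, here is a compact restatement; the symbols n and c are my notation, not the author's, with n = 10 languages and c = 100 man-years per algorithm as assumed above:

```latex
\begin{aligned}
\text{direct pairs:} \quad & n(n-1) = 10 \cdot 9 = 90, & 90c &= 9000\ \text{man-years},\\
\text{pivot pairs:} \quad & 2(n-1) = 2 \cdot 9 = 18,   & 18c &= 1800\ \text{man-years},\\
\text{pivot, doubled effort:} \quad & & 18 \cdot 2c &= 3600\ \text{man-years}.
\end{aligned}
```

The counterargument then amounts to the observation that the ninety direct algorithms would not in fact cost c each: with learning carried over from pair to pair, the costs form a decreasing series (100, 50, ... in the author's admittedly arbitrary figures), so the true total may fall well below 90c.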
Though we might still wind up with a larger time needed for the preparation of the ninety than of the eighteen 'double precision' algorithms, it is doubtful, to say the least, whether the overall quality/preparation-time/translation-time balance would be in favour of the pivot-language approach.

Add to this the fact that 100 man-years would be enough, by assumption, to start a working MT outfit along the direct approach, whereas 400 man-years would be needed even to start translating the first pair along the indirect approach, and the initial appeal of the intermediate-language idea should completely vanish, when judged from a practical point of view. As to its speculative impact, enough has been said on other occasions.

Autonomous, high-quality machine translation between natural languages according to rigid algorithms may safely be considered dead. Such translation on the basis of learning abilities is stillborn. Though machines could doubtless provide a great variety of aids to human translation, so far in no case has the economic feasibility of any such aid been proven, though the outlook for the future is not all dark. So much for the debit side. On the credit side of the past MT efforts stands the enormous increase of interest which has already begun to pay off not only in an increased understanding of language as such, but also in such applications as the mechanical translation between programming languages. But this could already be a topic for another Institute.

On the Equivalence of Models of Language used in the Fields of Mechanical Translation and Information Retrieval

... while Curry [9] became more and more aware of the implications of combinatorial logic for theoretical linguistics. It is, though, perhaps not too surprising that the ideas of Post and Curry should be no better known to professional linguists than those of Carnap and Ajdukiewicz.

It seems that a major change in the peaceful but uninspiring co-existence of structural linguists and syntax-oriented logicians came along when the idea of mechanizing the determination of syntactic structure began to take hold of the imagination of various authors. Though this idea was originally but a natural outcome of the professional preoccupation of a handful of linguists and logicians, it made an almost sensational breakthrough in the early fifties when it became connected with, and a cornerstone of, automatic translation between natural languages. At one stroke, structural linguistics had become useful. Just as mathematical logic, regarded for years as the most abstract and abstruse scientific discipline, became overnight an essential tool for the designer and programmer of electronic digital computers, so structural linguistics, regarded for years as the most abstract and speculative branch of linguistics, is now considered by many a must for the designer of automatic translation routines. The impact of this development was at times revolutionary and dramatic. In Soviet Russia, for instance, structural linguistics had, before 1954, unfailingly been condemned as idealistic, bourgeois and formalistic. However, when the Russian government awakened from its dogmatic slumber to the tune of the Georgetown University demonstration of machine translation in January 1954, structural linguistics became within a few weeks a discipline of high prestige and priority. And just as mathematical logic has its special offspring to deal with digital computers, i.e. the theory of automata, so structural linguistics has its special offspring to deal with mechanical structure determination, i.e.
algebraic linguistics, also called, when this application is particularly stressed, computational linguistics or mechano-linguistics. As a final surprise, it has recently turned out that these two disciplines, automata theory and algebraic linguistics, exhibit extremely close relationships which at times amount to practical identity.

To complete this historical sketch: around 1954, Chomsky, influenced by and in constant exchange of ideas with Harris, started his investigations into a new typology of linguistic structures. In a series of publications, of which the booklet Syntactic Structures [10] is the best known, but also the least technical, he defined and constantly refined a complex hierarchy of such structures, meant to serve as models for natural languages with varying degrees of adequacy. Though models for the treatment of linguistic structures were also developed by many other authors, Chomsky's publications exhibited a degree of rigor and testability previously unheard of in the linguistic literature, and they therefore quickly became for many a standard of comparison for other contributions.

I shall now turn to a presentation of the work of the Jerusalem group in linguistic model theory before I continue with the description and evaluation of some other contributions to this field.

In 1937, while working on a master's thesis on the logical antinomies, I came across Ajdukiewicz's work [6]. Fourteen years later, having become acquainted in the meantime with structural linguistics, and especially with the work of Harris [1], and instigated by my work at that time on machine translation, I realized the importance of Ajdukiewicz's approach for the mechanization of the determination of syntactic structure, and published an adaptation of Ajdukiewicz's ideas [11].

The basic heuristic concept behind the type of grammar proposed in this paper, and later further developed by Lambek [12], [13], [14], myself [15] and others, is the following: the grammar was meant to be a recognition (identification or operational) grammar, i.e. a device by which the syntactic structure, and in particular the sentencehood, of a given string of elements of a given language could be determined. This determination had to be formal, i.e. dependent exclusively on the shape and order of the elements, and preferably effective, i.e. leading after a finite number of steps to a decision as to the structure, or structures, of the given string. This aim was to be achieved by assuming that each of the finitely many elements of the given natural language has finitely many syntactic functions, by developing a suitable notation for these syntactic functions (or categories, as we became used to calling them, in the tradition of Aristotle, Husserl, and Leśniewski), and by designing an algorithm operating on this notation.

More specifically, the assumption investigated was that natural languages have what is known to linguists as a contiguous immediate-constituent structure, i.e. that every sentence can be parsed, according to finitely many rules, into two or more contiguous constituents, each of which either is already a final constituent or else is itself parsable into two or more immediate constituents, and so on. This parsing was not supposed to be necessarily unique. Syntactically ambiguous sentences allowed for two or more different parsings.
Examples should not be necessary here.

The variation introduced by Ajdukiewicz into this conception of linguistic structure, well known in a crude form already to elementary-school students, was to regard the combination of constituents into constitutes (or syntagmata) not as a concatenation inter pares but rather as the result of the operation of one of the constituents (the governor, in some terminologies) upon the others (the governed or dependent units). The specific form which the approach took with Ajdukiewicz was to assign to each word (or other appropriate element) of a given natural language a finite number of fundamental and/or operator categories and to employ an extremely simple set of rules operating upon these categories, so-called 'cancellation' rules.

Just for the sake of illustration, let me give here the definition of a bidirectional categorial grammar, in a slight variation of the one presented in a recent publication of our group [16].

We define it as an ordered quintuple ⟨V, C, σ, R, f⟩, where V is a finite set of elements (the vocabulary); C is the closure of a finite set of fundamental categories, say β₁, ..., βₙ, under the operations of right and left diagonalization (i.e. whenever α and β are categories, [α/β] and [α\β] are categories); σ is a distinguished category of C (the category of sentences); R is the set of the two cancellation rules [αᵢ/αⱼ], αⱼ → αᵢ and αᵢ, [αᵢ\αⱼ] → αⱼ; and f is a function from V to finite sets of C (the assignment function).

We say that a category sequence α directly cancels to β if β results from α by one application of one of the cancellation rules, and that α cancels to β if β results from α by finitely many applications of these rules (more exactly, if there exist category sequences γ₁, γ₂, ..., γₙ such that α = γ₁, β = γₙ, and γᵢ directly cancels to γᵢ₊₁, for i = 1, ..., n-1).

A string x = A₁ ... Aₖ over V is defined to be a sentence if, and only if, at least one of the category sequences assigned to x by f cancels to σ. The set of all sentences is then the language determined (or represented) by the given categorial grammar. A language representable by such a grammar is a categorial language.

In addition to bidirectional categorial grammars, we also dealt with unidirectional categorial grammars, employing either right or left diagonalization only for the formation of categories, and more specifically with what we called restricted categorial grammars, whose set of categories consists only of the (finitely many) fundamental categories βᵢ and the operator categories [βᵢ\βⱼ] and [βᵢ\[βⱼ\βₖ]] (or, alternatively, [βᵢ/βⱼ] and [βᵢ/[βⱼ/βₖ]]).
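To make the cancellation mechanism concrete, here is a minimal sketch - my illustration, not part of the text - of a decision procedure for sentencehood under such a grammar. The two-word lexicon is invented; only the two cancellation rules come from the definition above. Since each cancellation shortens the category sequence by one, the search always terminates:

```python
from itertools import product

# Fundamental categories are strings such as "n" and "s" (s plays the
# role of the distinguished category sigma). An operator category [a/b]
# is encoded as (a, "/", b) and [a\b] as (a, "\\", b).

def cancel_step(seq):
    """All sequences obtainable by one application of a cancellation rule:
    [x/y], y -> x    and    x, [x\\y] -> y."""
    out = []
    for i in range(len(seq) - 1):
        a, b = seq[i], seq[i + 1]
        if isinstance(a, tuple) and a[1] == "/" and a[2] == b:
            out.append(seq[:i] + [a[0]] + seq[i + 2:])
        if isinstance(b, tuple) and b[1] == "\\" and b[0] == a:
            out.append(seq[:i] + [b[2]] + seq[i + 2:])
    return out

def cancels_to(seq, sigma="s"):
    """Does some finite series of cancellations reduce seq to [sigma]?"""
    frontier = [list(seq)]
    while frontier:
        if [sigma] in frontier:
            return True
        frontier = [t for s in frontier for t in cancel_step(s)]
    return False

# Assignment function f: each word receives a finite set of categories.
lexicon = {
    "John": ["n"],                 # fundamental category n
    "sleeps": [("n", "\\", "s")],  # [n\s]: an n on its left yields s
}

def is_sentence(words):
    """A string is a sentence iff at least one of its assigned category
    sequences cancels to sigma."""
    return any(cancels_to(seq)
               for seq in product(*(lexicon[w] for w in words)))

print(is_sentence(["John", "sleeps"]))   # True
print(is_sentence(["sleeps", "John"]))   # False
```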
A heuristically (though not essentially) different approach to the formalization of immediate-constituent grammars was taken by Chomsky, within the framework of his general typology. He looked upon a grammar as a device, or a system of rules, for generating (or recursively enumerating) the class of all sentences. In particular, a context-free phrase structure grammar, a CF grammar for short, may be defined, again in slight variation from Chomsky's original definition, as an ordered quadruple ⟨V, T, S, P⟩, where V is the (total) vocabulary; T (the terminal vocabulary) is a subset of V; S (the initial symbol) is a distinguished element of V−T (the auxiliary vocabulary); and P is a finite set of production rules of the form X → x, where X ∈ V−T and x is a string over V.

We say that a string x directly generates y if y results from x by one application of one of the production rules, and that x generates y if y results from x by finitely many applications of these rules (more exactly, if there exist sequences of strings z₁, z₂, ..., zₙ such that x = z₁, y = zₙ, and zᵢ directly generates zᵢ₊₁, for i = 1, ..., n-1).

A string over T is defined to be a sentence if it is generated by S. The set of all sentences is the language determined (or represented) by the given CF grammar.

My conjecture that the classes of CF languages and bidirectional categorial languages are identical - in other words, that for each CF grammar there exists a weakly equivalent bidirectional categorial grammar and vice versa - was proved in 1959 by Gaifman [16], by a method that is too complex to be described here. He proved, as a matter of fact, slightly more, namely that for each CF grammar there exists a weakly equivalent restricted categorial grammar and vice versa. The equivalent representation can in all cases be effectively obtained from the original representation.

This equivalence proof was preceded by another in which it was shown that the notion of a finite state grammar, FS grammar for short, occupying the lowest position in Chomsky's hierarchy of generation grammars, is equivalent to that of a finite automaton, in the sense of Rabin and Scott [17], which can be viewed as another kind of recognition grammar. The proof itself was rather straightforward and almost trivial, relying mainly on the equivalence of deterministic and non-deterministic finite automata, shown by Rabin and Scott. It has been adequately described in a recently published paper [18].

Chomsky had already shown that the FS languages form a proper subclass of the CF languages. We have recently been able to prove [19] that the problem whether a CF language is also representable by an FS grammar - a problem which has considerable linguistic importance - is recursively unsolvable. The method used was reduction to Post's correspondence problem, a famous problem in mathematical logic which was shown by Post [20] to be recursively unsolvable.

Among other results recently obtained, let me mention only the following: whereas FS languages are, in view of the equivalence of FS grammars to finite automata and well-known results of Kleene [21] and others, closed under various Boolean and other operations, CF languages whose vocabulary contains at least two symbols are not closed under complementation and intersection, though they are closed under various other operations. The union of two CF languages is again a CF language, and a representation can be effectively constructed from the given representations. The intersection of a CF language and an FS language is a CF language.
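The non-closure under intersection (and hence under complementation) can be seen from a standard counterexample, which I add here for illustration; it is not in the text:

```latex
L_1 = \{\,a^n b^n c^m : n,m \ge 1\,\},\qquad
L_2 = \{\,a^m b^n c^n : n,m \ge 1\,\},\qquad
L_1 \cap L_2 = \{\,a^n b^n c^n : n \ge 1\,\}
```

Both L₁ and L₂ are CF, but their intersection is well known not to be CF. And since CF languages are closed under union, closure under complementation would make L₁ ∩ L₂, being the complement of the union of the two complements, a CF language as well, a contradiction.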
Undecidable are such problems as the equivalence problem between two CF grammars, the inclusion problem of languages represented by CF grammars, the problem of disjointness of such languages, etc. In this connection, interesting relationships have been shown to exist between CF grammars and two-tape finite automata, as defined and treated by Rabin and Scott, for which the disjointness problem of the sets of acceptable tapes is similarly unsolvable.

A particular proper subset of the CF languages, apparently of greater importance for the treatment of programming languages, such as ALGOL, than for natural languages, is the set of so-called sequential languages, studied in particular by Ginsburg [22], [23] and Shamir [24]. I have no time for more than just this remark.

In a somewhat different approach, closely related to the classical notions of government and syntagmata, the notions of dependency grammars and projective grammars have been developed by Hays [25], Lecerf [26], and others, including some Russian authors, utilizing ideas most fully presented in Tesnière's posthumous book [27]; they are thought to be of particular importance for machine translation. However, it has not been too difficult to guess, and has indeed been rigorously proven by Gaifman [28], that these grammars, which are discussed in other lectures presented in this Institute, are equivalent to CF grammars in a certain sense, which is somewhat stronger than the one used above, but that this is not necessarily so with regard to what might be called natural strong equivalence. More precisely, whereas for every dependency grammar there exists, and can be effectively constructed, a CF grammar naturally and strongly equivalent to it, this is not necessarily the case in the opposite direction, not if the CF grammar is of infinite degree.

Let me add that the dependency grammars are very closely related to a type of categorial grammar which I discussed in earlier publications [11] but later on replaced by grammars of a seemingly simpler structure. In the original categorial grammars, I did consider categories of the form αₘ ... α₂α₁\β/γ₁γ₂ ... γₙ, with β, the αᵢ, and the γⱼ being either fundamental or operator categories themselves, with a corresponding cancellation rule. It should be rather obvious how to transform a dependency grammar into a categorial grammar of this particular type.

These grammars are equivalent to grammars in which all categories have the form α\β/γ, where α, β, and γ are fundamental categories and where α and γ may be empty (in which case the corresponding diagonal will be omitted, too, from the symbol). Finally, in view of Gaifman's theorem mentioned above, these grammars in their turn are equivalent to grammars all of whose categories are of the form β/γ (or α\β), with the same conditions. I think that these remarks (strongly connected with considerations of combinatory logic [9]) should definitely settle the question of the exact formal status of the dependency grammars and their like. One side result is that dependency grammars are weakly reducible to binary dependency grammars, i.e. grammars in which each unit governs at most two other units. This result, I presume, is not particularly surprising, especially if we remember that the equivalence proven will in general not be a natural one.
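As a micro-example of the transformation from dependency grammars into categorial grammars of the form αₘ ... α₁\β/γ₁ ... γₙ (the example is mine, not the author's): in a dependency grammar in which a transitive verb governs one noun to its left (its subject) and one noun to its right (its object), and in which the verb heads a sentence, the verb would receive the category n\s/n, i.e. the case m = n = 1 with α₁ = γ₁ = n and β = s. The two cancellations in n, [n\s/n], n → s then mirror exactly the verb's two government relations.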
Still another class of grammars, sometimes [29] called push-down store grammars and originating, though not in a very precise form, with Yngve [30], [31], has recently been shown by Chomsky to be once more equivalent to CF grammars, again to nobody's particular surprise. Since push-down stores are regarded by many workers in the fields of MT and programming languages as particularly useful devices for the mechanical determination of the syntactic structure of sentences belonging to natural and programming languages, respectively, this result should be helpful in clarifying the exact scope of those schemes of syntactic analysis which are based on these devices.

Of theoretically greater importance is the fact that push-down store grammars form a proper subset of linear bounded automata, one of the many classes of automata lying between Turing machines and finite automata which have recently been investigated by many authors, owing to the fact that Turing machines are too idealized to be of much direct applicability, whereas finite automata are too restricted for this purpose. The investigation of these automata, initiated by Myhill [32], is, however, still in its infancy, as is that of many other classes of automata reported by McNaughton in his excellent review [33]. Still more in the dark is the linguistic relevance of all these models, though, judging from admittedly limited experience, almost every single one of them will sooner or later be shown to have such relevance.

To wind up this discussion, let me only mention that during the last few years various classes of grammars whose potency is intermediate between FS and CF grammars have been investigated. These intermediate grammars will probably turn out to be of greater importance for the study of grammars of programming and other artificial formalized languages than for natural languages. In addition to the sequential grammars mentioned before, let me now mention the linear and metalinear grammars studied by Chomsky.

It might be useful to present, at this stage, a picture of the various grammars discussed in the present section, together with the two important classes of transformational and context-sensitive phrase structure grammars (which I could not discuss, for lack of time), in the form of a directed graph based on the (partial) ordering relation Determines-a-more-extensive-class-of-languages-than, the staggered lines indicating that the exact relationship has not yet been fully determined. [Figure: directed graph of the grammar types just listed; not reproduced in this text.]

The last two questions I would now like to discuss are the following: (1) In view of the fact that so many models of linguistic structure have turned out to be (weakly) equivalent, how do they compare from the point of view of pedagogy and MT-directed application? (2) What is the degree of adequacy with which natural languages can be described by CF grammars and their equivalents?

As to the first question, I am afraid that not much can be said at this stage. I am not aware of any experiments made as yet to determine the pedagogical status of the various equivalent grammars. Some programmatic statements have been made on occasion, but I would not want to attribute much weight to them. I myself, for instance, have a feeling that the governor-dependent terminology of the dependency and projective grammars has an unfortunate, and intrinsically, of course, unwarranted, side-effect of strengthening dogmatic approaches to the decision of what governs what. The operator-operand terminology of the categorial grammars seems to be emotionally less loaded; but these are surely minor issues. Altogether, I would advocate the performance of pedagogical experiments in which the same miniature language would be taught with the help of various equivalent grammars.
I do not foresee any particular complications for such projects.

Turning now to the second question, which has been much discussed during the last few years, often with great fervor, the situation should be reasonably clear. FS grammars are definitely inadequate for describing any natural language, unless this last term is mutilated for what must be regarded as arbitrary and ad hoc reasons. I am sorry that Yngve's otherwise extremely useful recent contributions have beclouded this issue. As to CF grammars, the situation is more complex and more interesting. It is almost, but not quite, certain that such grammars, too, are inadequate in principle, for reasons which I shall not repeat here, since they have been stated many times in the recent literature and have been authoritatively restated by Chomsky [28]. But of even greater importance, particularly for applications such as MT, is the fact that such grammars seem definitely to be inadequate in practice, in the sense that the number and complexity of grammatical rules of this type needed to achieve a tolerable, if not perfect, degree of adequacy would be so immense as to defeat the practical purpose of establishing these rules. Transformational grammars seem to have a much better chance of being both adequate and practical, though this point is still far from settled. In view of this fact, which does not appear to have been seriously challenged by most workers on MT, it is surprising to see that most, if not all, current programs of automatic syntactic analysis are based on impractical grammars. In some groups, where the impracticability and/or inadequacy has received serious attention, attempts are being made at present to classify the 'recalcitrant' phenomena and to find ad hoc remedies for them. You will not be surprised if I say that I take a rather dim view of these attempts. But this already leads to issues which I intend to discuss in subsequent sections.
{ "paperhash": [ "ginsburg|two_families_of_languages_related_to_algol", "rabin|finite_automata_and_their_decision_problems", "martin|from_a_logical_point_of_view", "post|a_variant_of_a_recursively_unsolvable_problem", "carnap|logical_syntax_of_language", "ginsburg|some_recursively_unsolvable_problems_in_algol-like_languages" ], "title": [ "Two Families of Languages Related to ALGOL", "Finite Automata and Their Decision Problems", "From a Logical Point of View", "A variant of a recursively unsolvable problem", "Logical Syntax of Language", "Some Recursively Unsolvable Problems in ALGOL-Like Languages" ], "abstract": [ "A serious drawback in the application of modern data processing systems is the cost and time consumed in programming these complexes. The user's problems and their solutions are described in a natural language such as English. To utilize the services of a data processor, it is necessary to convert this language description into machine language, to wit, program steps. Recently, attempts have arisen to bridge the gap between these two languages. The method has been to construct languages (called problem oriented languages, or POL) that are (i) rich enough to allow a description of a set of problems and their solutions; (ii) reasonably close to the user's ordinary language of description and solution; and (iii) formal enough to permit a mechanical translation into machine language. COBOL and ALGOL are two examples of POL. The purpose of this investigation is to gain some insight into the syntax of POL, in particular ALGOL [1]. Specifically, the method of defining constituent parts of ALGOL 60 is abstracted, this giving rise to a family of sets of strings; and mathematical facts about the resulting family deduced. Now an ALGoL-like definable language (we hesitate to use the inclusive term \"POL\") may be viewed either as one of these sets (the set of sentences) ; or else, as a finite collection of these sets, one of which is the set of sentences, and the remaining, the constituent parts of the language used to construct the sentences. This is in line with one current view of natural languages [4, 5, 6]. The defining scheme for ALOOL turns out to be equivalent to one of the several schemes described by Chomsky [6] in his attempt to analyze the syntax of natural languages. Of course, POL, as special kinds of languages, should fit into a general theory of language. However, it is reasonable to expect that POL, as artificial languages contrived so as to be capable of being mechanically translated into machine language, should have a syntax simpler than that of the natural languages. The technical results achieved in this paper are as follows. Two families of sets (of strings), the family of definable sets and the family of sequentially definable sets, are described. Definable sets are obtained from a system of simultaneous equations, all the equations being of a certain form. This system, essentially parallel in nature, is an abstraction of the ALGOL method of description. Definable sets turn out to be identical to the type 2 languages (with identity) introduced by Chomsky [6]. Sequentially definable sets are obtained from a system", "Finite automata are considered in this paper as instruments for classifying finite tapes. Each one-tape automaton defines a set of tapes, a two-tape automaton defines a set of pairs of tapes, et cetera. The structure of the defined sets is studied. 
Various generalizations of the notion of an automaton are introduced and their relation to the classical automata is determined. Some decision problems concerning automata are shown to be solvable by effective algorithms; others turn out to be unsolvable by algorithms.", "entities from the very beginning rather than only where there is a real purpose in such reference. Hence my wish to keep general terms distinct from abstract singular terms. Even in the theory of validity it happens that the appeal to truth values of statements and extensions of predicates can finally be eliminated. For truth-functional validity can be redefined by the familiar tabular method of computation, and validity in quantification theory can be redefined simply by appeal to the rules of proof (since Godel [1] has proved them complete) . Here is a 1�ood example of the elimination of onto­ logical presuppositions, in one particular domain. In general it is important, I think, to show how the purposes of a certain segment of mathematics can be met with a reduced ontology, just as it is important to show how an erstwhile non­ constructive proof in mathematics can be accomplished by con­ structive means. The interest in progress of this type is no more u See below, p. 128. VI, 4 REIFICATION OF UNIVERSALS 1 1 7 dependent upon an out-and-out intolerance of abstract entities than it is upon an out-a.nd-out intolerance of nonconstructive proof. The important thing is to understand our instrument; to keep tab on the diverse presuppositions of diverse portions of our theory, and reduce them where we can. It is thus that we shall best be prepared to discover, eventually, the over-all dis­ pensability of some assumption that has always rankled as ad hoc and unintuitive.", "By a string on a, 6 we mean a row of a's and 6's such as baabbbab. I t may involve only a, or 6, or be null. If, for example, gi, g2, gz represent strings baby aa, b respectively, string g2gigigzg2 on gi, g2, gz will represent, in obvious fashion, the string aababbabbaa on a, 6. By the correspondence decision problem we mean the problem of determining for an arbitrary finite set (gu g{), (g2, g2), • • • , (gM, gi) of pairs of corresponding non-null strings on a, b whether there is a solution in w, iu ii, • • • , in of equation", "Rudolf Carnap's entire theory of Language structure \"came to me,\" he reports, \"like a vision during a sleepless night in January 1931, when I was ill.\" This theory appeared in The Logical Syntax of Language (1934). Carnap argued that many philosophical controversies really depend upon whether a particular language form should be used. This leads him to his famous \"Principle of tolerance\" by which everyone is free to mix and match the rules of his language and therefore his logic in any way he wishes. In this way, philosophical issues become reduced to a discussion of syntactical properties, plus reasons of practical convenience for preferring one form of language to another. In a tour de force of precise reasoning, Carnap also indicated how two model languages could be constructed. This is one of three books which Open Court is making available in paperback reprint in its Open Court Classics series. The other two are Carnap's The Logical Structure of the World and Schlick's General Theory of Knowledge.", "[ntroduct~ion In [5] the method of generation of the constituent parts of the algorithmic language ALGoI~ was abstracted. 
This gave rise to a family of \"ALGOL-like\" languages and their constituent parts, the latter called \"definable\" sets. A particular subclass of the definable sets, the \"sequentially definable\" sets, was then introduced, and a number of results about the definable and the sequentially definable sets proved. (For example, it was shown that the definable sets are identical to the context free phrase structure languages of Chomsky.) In [6] the effect of a number of operations on definable and sequentially definable sets was studied. Among other facts it was demonstrated that both complete sequential machines and generalized sequential machines transform definable sets to definable sets (the companion result for sequentially definable sets being false). In [1] several questions about definable sets were proved recursively unsolvable, that is, there are no algorithms for deciding the answers to these questions. The purpose of the present paper is to prove that two \"natural\" questions about definable and sequentially definable sets are recursively unsolvable. More precisely, we shall show that each of the following questions is recursively unsolvable. (1) Given a definable set, is it sequentially definable? (2) Given two (sequentially) definable sets L1 and L2, (a) does there exist a complete sequential machine which maps L1 onto L2 (into L2)? (b) does there exist a generalized sequential machine which maps L1 onto L2 (into L2 so that the image of L1 is infinite if L1 is infinite)? The recursive unsolvability of (2b) may be interpreted as saying that if a generalized sequential machine is a faithful model of a translating program and if definable sets are the constituent parts of all possible programming languages, then there is no mechanical procedure for deciding whether, of two given programming languages, there is a translation program converting one language into the other in a nontrivial way. The basic material and notation for definable and sequentially definable sets are now presented. The reader is referred to [5] for additional details as well as motivation." ], "authors": [ { "name": [ "Seymour Ginsburg", "H. G. Rice" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Rabin", "D. Scott" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. M. Martin", "W. Quine" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Emil L. Post" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Carnap" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Ginsburg", "G. F. Rose" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "16718187", "3160330", "51836914", "122948861", "86401581", "16941895" ], "intents": [ [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false ] }
null
751
0.011984
null
null
null
null
null
null
null
null
d49ef005fd6fe2e2684c1552cc894eed459df4fd
27354720
null
On the Equivalence of Models of Language used in the Fields of Mechanical Translation and Information Retrieval
The purpose of this paper is to compare a certain number of well-known models used in the fields of Mechanical Translation (M.T.) and Information Retrieval (I.R.). Different surveys of this type exist (Hays [2], Lecerf [3], Sestier and Dupuis [4]) where models have been compared from the point of view of practical and linguistic adequacy. We wish here to compare certain formal characteristics of these models, in fact to show that they are strictly equivalent to a well-studied model. The notion of equivalence will be defined formally; the model to which the other models are equivalent is Chomsky's model of context-free languages (c.f. languages). The equivalences discussed here have not only an abstract character; several practical problems which arise naturally are clarified.

THE majority of MT and IR projects have been primarily concerned with the construction of grammars and computer programs aiming to produce syntactic analyses for the sentences of a given natural language. In certain cases, the grammar and the recognition routine are completely amalgamated into a single program, and the grammatical information is no longer available for a program that would synthesize sentences. The interest of synthesizing sentences has been shown: synthesis from the output of a transfer grammar, Yngve [5], or synthesis at random, Yngve [6]. This sort of grammar cannot claim to be a model of language, unless one admits that human beings use alternately two disjoint devices, one for reading, the other for writing, and probably two others, one for hearing, the other for speaking. One would have to construct four types of corresponding devices for each language in order to translate and to retrieve information automatically. Rather frequent are the cases where the grammar is neutral between the programs of analysis and synthesis; these grammars, following Chomsky [7], [8], we will call generative grammars. We shall now make more precise the concepts of grammar and language, and examine the requirements a grammar has to meet [9]. We consider the finite set V = {a_i | 0 ≤ i ≤ D}. V is called the vocabulary, the a_i's are words, a_0 is the null word. On V the operation of concatenation defines the set C(V) of strings on V: any finite sequence of words T = a_i1 a_i2 ... a_ik with 0 ≤ i_1, i_2, ..., i_k ≤ D is a string on V, and C(V) is a free monoid whose generators are the a_i's.
{ "name": [ "Gross, Maurice" ], "affiliation": [ null ] }
null
null
Automatic Translation of Languages NATO Summer School
1962-07-01
13
31
null
THE majority of MT and IR projects have been primarily concerned with the construction of grammars and computer programs aiming to produce syntactic analyses for the sentences of a given natural language. In certain cases, the grammar and the recognition routine are completely amalgamated into a single program, and the grammatical information is no longer available for a program that would synthesize sentences. The interest of synthesizing sentences has been shown: synthesis from the output of a transfer grammar, Yngve [5], or synthesis at random, Yngve [6]. This sort of grammar cannot claim to be a model of language, unless one admits that human beings use alternately two disjoint devices, one for reading, the other for writing, and probably two others, one for hearing, the other for speaking. One would have to construct four types of corresponding devices for each language in order to translate and to retrieve information automatically.

Rather frequent are the cases where the grammar is neutral between the programs of analysis and synthesis; these grammars, following Chomsky [7], [8], we will call generative grammars. We shall now make more precise the concepts of grammar and language, and examine the requirements a grammar has to meet [9]. We consider the finite set V = {a_i | 0 ≤ i ≤ D}. V is called the vocabulary, the a_i's are words, a_0 is the null word. On V the operation of concatenation defines the set C(V) of strings on V: any finite sequence of words T = a_i1 a_i2 ... a_ik with 0 ≤ i_1, i_2, ..., i_k ≤ D is a string on V, and C(V) is a free monoid whose generators are the a_i's.

(1) A subset L of C(V) is called a language on V: L ⊆ C(V).
(2) A string S such that S ∈ L is a sentence of the language L.
(3) A finite set of finite rules which characterize all and only the sentences of L is called a grammar of L. (Productions of a combinatorial system: Davis [32].)
(4) Two grammars are equivalent if they characterize the same language L.

This abstract and most general model is justified empirically as follows. The words (or morphemes) of a natural language L form a finite set, i.e. the vocabulary or lexicon V of the language L. Certain strings on V are clearly understood by speakers of L as sentences of L; others are clearly recognized as non-sentences; so L is a proper subset of C(V). Many facts show that natural languages have to be considered as infinite: the linguistic operation of conjunction can be repeated indefinitely, and the same holds for the embedding of relative clauses, as in the sentence {the rat [the cat (the dog chased) killed] ate the malt}. Many other devices of nesting and embedding exist in all natural languages, and there is no linguistic motivation which would allow a limit on the number of possible recursions.

We wish to construct a grammar for a natural language L that, considered abstractly, enumerates (generates) the sentences of the language L and that, associated with a recognition routine, effectively recognizes the sentences of L. The device so far described is a normative device which simply tells whether or not a string on V is a sentence of L; equivalently, it is the characteristic function of L. This minimum requirement of separating sentences from non-sentences is a main step in our construction.
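This normative view can be made concrete in a few lines of present-day code. The sketch below is ours, not the paper's: a toy vocabulary stands in for V, and a hand-written characteristic function stands in for a grammar of a small invented language L ⊆ C(V).

```python
# A minimal sketch of a "normative" grammar: a characteristic function
# that separates sentences from non-sentences. Vocabulary and language
# are invented toys; the null word a_0 is the empty string.
V = ["", "the", "dog", "cat", "chased"]

def is_sentence(string: str) -> bool:
    """Characteristic function of a tiny language L contained in C(V)."""
    w = string.split()
    return (len(w) == 5 and w[0] == w[3] == "the"
            and w[1] in ("dog", "cat") and w[2] == "chased"
            and w[4] in ("dog", "cat"))

assert is_sentence("the dog chased the cat")      # a sentence of L
assert not is_sentence("chased the the cat dog")  # in C(V) but not in L
```

Such a device answers only the membership question; the next paragraphs explain why one should ask for more.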
The construction of this normative grammar requires the use of the total grammatical information inherent in L, and under these conditions it is natural to use this same information in order to give, as a by-product, a description of the organization of a sentence S_1 in L. This can be done simply by keeping track of the grammar rules that have been involved in the analysis of S_1. This particular ordered set of rules is called the derivation of S_1. This by-product is of primary importance; it is a model for the understanding of L by its speakers (Chomsky [9], Tesnière [10]); in parallel, any MT or IR realization requires the machine to have a deep understanding of the texts to be processed. Furthermore, a normative device would not tell whether a sentence is ambiguous or not; the only way to describe the different interpretations assigned to an ambiguous sentence is to give them different descriptions.

The basic requirements for a formalized description of natural languages, almost trivial in the sense that they put practically no restrictions on the forms of grammars and languages, do not seem to have been widely recognized in the fields of MT and IR; nevertheless they are always unconsciously accepted.

Chomsky [11, 8, 12] has studied a large variety of linguistic and formal constraints that one can reasonably put on the structure of grammars. The grammars so constrained range from finite-state grammars to the device described above, which can be viewed as an arbitrary Turing Machine. These grammars in general meet a supplementary requirement: each derivation of a sentence has to provide a particular type of structural description, which takes the form of simple trees or, equivalently, of parenthesized expressions; both subtrees and parentheses are labelled, i.e. carry grammatical information. Certain types of grammar have been proved inadequate because of their inability to provide a structural description for the sentences they characterize (type 1 grammars in Chomsky [8]).

Many authors in the field of MT and IR start from the postulates:
(1) A grammar is to be put, together with a recognition routine, into the memory of an electronic digital computer.
(2) The result of the analysis of a sentence is to be given in the form of a structural description.

In order to minimize computing time and memory space, further constraints were devised aiming to obtain efficient recognition routines. In general these constraints were put on the structural description, and much more attention has been paid to giving a rigorous definition of the structural description than to the definition of a grammar rule. Discussions of this topic can be found in Hays [2], Lecerf [3] and Plath [13]. The nature of grammatical rules is never emphasized, and often the structure of the rules is not even mentioned. Nevertheless the logical priority order of the operations is the following:
- the recognition routine traces a derivation of a sentence S according to the grammar;
- the derivation of S provides a structural description of S.

The structural descriptions generally have a simple form, that of a tree terminating in the linear sequence of the words of S. These trees (or associated parenthesized expressions) are unlabelled in [2], [3] and [13]. Yngve uses a complex tree with labelled nodes corresponding to well-defined rules of grammar. Other structures than trees could be used [10] in order to increase the amount of information displayed in the structural description.
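As an illustration of the derivation as a by-product, here is a sketch, with an invented grammar for the toy language of the previous example, of a recognizer that keeps track of the rules it applies and emits a labelled bracketing as the structural description.

```python
# A sketch of recognition with the derivation kept as a by-product:
# a brute-force top-down parser that returns a labelled bracketing.
# The grammar is a toy, written for illustration only.
RULES = [("S", ["NP", "VP"]), ("NP", ["D", "N"]), ("VP", ["V", "NP"]),
         ("D", ["the"]), ("N", ["dog"]), ("N", ["cat"]), ("V", ["chased"])]
NT = {lhs for lhs, _ in RULES}

def parse(symbol, words, i):
    """Yield (labelled bracketing, next position) for symbol at words[i:]."""
    if symbol not in NT:                      # a terminal must match the input
        if i < len(words) and words[i] == symbol:
            yield symbol, i + 1
        return
    for lhs, rhs in RULES:
        if lhs != symbol:
            continue
        def expand(parts, j, k):              # match rhs[k:] from position j
            if k == len(rhs):
                yield "[" + " ".join([lhs] + parts) + "]", j
                return
            for sub, j2 in parse(rhs[k], words, j):
                yield from expand(parts + [sub], j2, k + 1)
        yield from expand([], i, 0)

for tree, end in parse("S", "the dog chased the cat".split(), 0):
    if end == 5:
        print(tree)
# [S [NP [D the] [N dog]] [VP [V chased] [NP [D the] [N cat]]]]
```

An ambiguous sentence would simply come out of such a routine with two or more distinct bracketings, which is exactly the behaviour required above.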
The question of how much information is necessary in the structural description of a given sentence in order to process it automatically (translation, or storage and matching of information) has never been studied. Many authors look for a minimization of this information, which is quite unreasonable given the present status of the art; our guess is that the total amount of information available in any formalized system for a sentence S will never be sufficient for a completely mechanized processing of S, and that these minimal structural descriptions will have to be considerably enriched. In Section 6 we will show that for many sentences one tree is not sufficient to describe relations between words.

Among the simple models which have been or are still used, we will quote:

Immediate constituent analysis models, as developed in [7], [14], [15] and [16], where two substrings B and C of a sentence S can form a larger unit A of S only if they are contiguous (A = BC).

Categorical grammars, developed by Bar-Hillel, where categories of substrings are defined by means of the basic categories (Noun, Sentence) and of the immediate left or right environment.

Predictive analysis models ([17], [18], [19]), where the grammars are built such that the recognition routine can use a pushdown storage, which is a convenient programming tool.

Dependency and projective grammars, developed respectively by Hays [20] and Lecerf [3], which lead to the simple analysis programs their authors have described.

We will turn to the dependency grammars as defined in Hays [20]. The linguistic conception, originated by Tesnière, differs from that of Immediate Constituent Analysis; here the morphemes are connected in terms of the intuitive notions of governor and dependent. The two basic principles which determine the shape of the dependency model are quoted as follows ([20] pp. 3, 4):

(1) Isolation of word order rules from agreement rules.
(2) Two occurrences (words) can be connected only if every intervening occurrence depends, directly or indirectly, on one or the other of them.

(1) corresponds to the fact that recognition routine and grammar are separated. (2) defines both grammar and language. The grammar consists of a set of binary relations between a governor and a dependent. Two occurrences can be connected only if a certain contiguity holds: IN and GARDEN are connected only if the (direct) dependency GARDEN-THE holds; FLOWERS and ARE can be connected only if the indirect dependency IN-THE and the direct one FLOWERS-IN hold.

The dependency languages are exactly the context-free languages. This theorem has been proven by Gaifman and independently by the author. Gaifman [34] has obtained a stronger result, showing that the set of the dependency trees is a proper subset of the set of the context-free trees. We give below the part of our proof which yields the weaker result that the dependency languages are context-free languages.

We now construct a PDS automaton of the type of Section 2 which accepts the dependency languages. Like the previous machine, it will not give a structural description of the analysed string, but the restriction that the device be normative has no effect on the class of accepted strings. Let V = {a_i}, i ≥ 0, be the vocabulary of the language.
We will take as the input vocabulary of the automaton V_I = V ∪ {e}. Let V_0 = {e} ∪ {A_k} ∪ {α}, k ≥ 0, be the output vocabulary, where the A_k's are syntactic classes. The set {I} of instructions contains:

I_1: (e, S_0, α) → (S_D, e)
I_2: (a_i, S_D, e) → (S_D, A_i)
I_3: (e, S_D, A_j) → (S_j, λ)
I_4: (e, S_j, A_k) → (S_j, λ)
I_5: (e, S_j, A_k) → (S_k, λ)
I_6: (e, S_j, e) → (S_D, A_j)
I_7: (e, S_s, α) → (S_0, λ)

The main operation of the computation is to connect a dependent to its governor; when this is done, the dependent is erased from the storage. I_1 initializes the computation. I_2 is a dictionary look-up instruction: A_i, a syntactic class of a_i, is printed on the storage tape. I_3 guesses that there is a connection to be made with the A_i immediately to the left of the scanned A_j; A_j is erased but remembered, since the automaton switches into a corresponding state S_j. I_4 and I_5 compare the syntactic classes A_j and A_k: the automaton switches into either the state S_j or the state S_k, corresponding to the fact that either A_j or A_k is governor. For a given pair (A_j, A_k) there is generally either an I_4 or an I_5, according as A_j or A_k is governor (in case the agreement is unambiguous). If they cannot be connected at all, the machine stops and the string is not accepted. If the next accessible A_r on the storage tape is to be connected with A_j or A_k, then an instruction (e, S_j, A_r) → (S, λ) or (e, S_k, A_r) → (S, λ) can be applied (either of these instructions can be an I_4 or an I_5). If the next accessible A_r is not to be connected with A_j or A_k, then instructions I_6 and I_2 are applied. Before the automaton uses an instruction I_5, where it forgets A_j, it has to make the guess that no dependent of A_j will come next from the input tape. I_6: the automaton transfers A_j from its internal memory to the storage tape; it switches into state S_D, where it will use an instruction I_2. I_7 is a type of final instruction: from the state S_s, where the automaton remembers the topmost governor A_s, it switches into the state S_0 after the direct dependents of A_s have been connected (i.e. erased); the situation following I_7 will be (#, S_0, #).

This non-deterministic automaton is equivalent to a recognition routine that would look for one 'most probable' solution. As in the case of the predictive analysis discussed above, a monitoring program that would enumerate all possible computations would provide all possible solutions.*

The instructions I_3 and I_4 (or I_5), for a governor A_j (or A_k), remember this governor in the form S_j (or S_k). If this governor were modified by the agreement, then it could be rewritten S_m with m ≠ j (or m ≠ k). This would not change the structure of the automaton, nor would it modify the class of accepted languages.

* The author has written in COMIT a recognition routine of this type.
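To make these instructions concrete, here is a present-day sketch, ours and not the paper's, of such a dependency recognizer: the storage tape becomes a stack of syntactic classes, connecting a dependent to an adjacent governor erases the dependent, and exhaustive search plays the role of the monitoring program in place of the nondeterministic guesses. The word classes and the governor-dependent table are invented for Hays's garden example.

```python
# A sketch of a dependency recognizer: shift word classes onto a stack;
# when the two topmost classes can be connected, erase the dependent.
# Exhaustive search replaces the automaton's guesses, so every accepting
# computation is found. CLASS and GOVERNS are invented for illustration.
CLASS = {"the": "DET", "flowers": "N", "in": "PREP",
         "garden": "N", "are": "V", "red": "ADJ"}
GOVERNS = {("N", "DET"), ("N", "PREP"), ("PREP", "N"),
           ("V", "N"), ("V", "ADJ")}           # (governor, dependent) pairs

def computations(words, stack=()):
    """Count accepting computations: all words read, one governor left.
    Distinct connection orders of one tree are counted separately."""
    if not words and len(stack) == 1:
        return 1
    total = 0
    if words:                                  # like I_2: shift a class
        total += computations(words[1:], stack + (CLASS[words[0]],))
    if len(stack) >= 2:
        left, right = stack[-2], stack[-1]
        if (left, right) in GOVERNS:           # like I_4: erase the dependent
            total += computations(words, stack[:-1])
        if (right, left) in GOVERNS:           # like I_5: keep the governor
            total += computations(words, stack[:-2] + (right,))
    return total

print(computations("the flowers in the garden are red".split()))
```

A real routine would also record the connections made, that is, the dependency tree; the recognizer above, like the automaton, is purely normative.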
The automaton I_1-I_7 is a schematization of a part of the author's COMIT program, which gave, after one scan from left to right, all the solutions compatible with the grammar.

It might be convenient for the diagramming of a sentence to think of null occurrences as having a syntactic class; their position could then be restricted (between consecutive A_j, A_k, for example), and in this case too a modification of the automaton shows that the class of accepted languages is the same.

In the case of I_5 we have the following indeterminacy about the guess the automaton has to make. If the guess is wrong, namely the next A_r to come can be connected to A_j, two cases are possible: (1) A_r can be connected to A_k, the governor of A_j; then the computation may still follow a path which will lead to an acceptance (in this case the sentence was ambiguous); (2) A_r cannot be connected to A_k; the string will not be accepted, since the content of the storage tape will never become blank.

The cases of conjunction, double conjunction and subordinate conjunctions in the 'secondary structure' require the automaton to have states where it remembers two syntactic types. For example, in the string acb a conjunction c depends on the governor b on its right, and the item a, equivalent to b and preceding c, is marked dependent on c. We will have the following instructions: (C_0) (a, S

Four classes of grammars, all based on the concept of dependency, are defined by Fitialov. The author remarks that the languages described by these grammars are all phrase-structure languages in the sense of Chomsky [8]. According to Chomsky [8], [12], phrase-structure grammars include at least context-free and context-sensitive grammars. In his paper Fitialov mentions the use of contexts, and it is not clear to which type of phrase-structure grammar Fitialov's grammars belong. The construction of a P.D.S. automaton very similar to the one above, which is perfectly straightforward, shows that Fitialov's languages are context-free. They are probably powerful enough to describe the whole class of context-free languages, but this remains to be proved.

Lecerf [24] works on the principle that the dependency representation of a sentence S_1 and the immediate constituent analysis of the same sentence S_1 are both of interest, since they show two different aspects of linguistics [3]. This mathematical theory deals with infinite lexicons, and doubly structured strings are defined. We will impose the restriction of finiteness on the lexicon. Lecerf proved that the operation of erasing the parentheses provides the tree of the immediate constituent analysis, where the dots are the nonterminal nodes; on the other hand, the operation which consists of erasing the brackets and dots provides the tree of the dependency analysis.

We will point out some properties of this model. The language defined by right or left adjunction of an operator to a syntagma is obtained when we erase the structure markers (parentheses, brackets, dots). These adjunctions then reduce to the simple operation of concatenation, and the language defined is the set of all possible combinations of words: it is the monoid C(M). Clearly what is missing is a set of rules which would tell which syntagmas and which operators can be combined, but this problem is not raised. If, following the author, we admit that his model characterizes both the dependency languages and the immediate constituent languages, we have two good reasons to think of the 'G-structures' as being context-free, but no evidence at all.
In this case the 'G-structures' would be redundant, since Gaifman gave an algorithm which converts every dependency grammar into a particular context-free grammar.

The grammar described in Yngve [6] consists of the following types of rules, defined on a vocabulary V = V_N ∪ V_T:
(1) A → a
(2) A → BC
(3) A → B ... C
The restriction of the depth hypothesis makes the language a finite-state language [6]. Without depth, rules of types (1) and (2) show that the language is at least context-free. Matthews [25] studied a special class of grammars; grammars containing rules of types (1), (2), (3) are shown to be a subclass of Matthews' 'one-way discontinuous grammars', which in turn are shown to generate context-free languages. The structural descriptions provided by these rules are not simple trees and cannot be compared to the other structures so far described.

The context-free grammars have found applications in the field of programming languages. We extend here remarks already made by Chomsky [9] and by Ginsburg and Rose [26] to the case of natural languages. The two theorems mentioned below derive from the results obtained by Bar-Hillel et al. [27].

An empirical requirement for a grammar is that of giving n structural descriptions for an n-ways ambiguous sentence. A grammar of English will have to give, for example, two analyses for the sentence
(A) They are flying planes
When constructing a c.f. grammar for natural languages, the number of rules soon becomes very large and one is no longer able to master the interrelations of the rules: rules corresponding to the analysis of new types of sentences are added without seeing exactly what the repercussions are on the analysis of the former ones. We use here an example mentioned in [17]. The grammar used at the Harvard Computation Laboratory contains the grammatical data necessary to analyse the sentence (A) in two different manners. Independently, it contains the data necessary to analyse the sentence
(B) The facts are smoking kills
The dictionary shows that TO PLANE is also a verb and KILL also a noun. The result of the analysis of sentences (A) and (B) is three solutions for each of them (the three trees described above: T_1, T_2, T_3). Any native speaker of English will say that (A) is two-ways ambiguous and (B) is not ambiguous at all; the spurious analyses are obtained because of the grammar, whose rules have to be made more precise. Obviously T_3 has to be suppressed for (A), and T_1 and T_2 for (B). This situation arises very frequently, and one may raise the more general question of systematically detecting ambiguities in order to suppress the undesirable ones. A result of the general theory of c.f. grammars is the following.

Theorem. The general problem* of determining whether or not a c.f. grammar is ambiguous is recursively unsolvable [21].
* Except in a very simple case of no linguistic interest.

The meaning of the result is the following: there is no general procedure which, given a c.f. grammar, would tell, after a systematic inspection of the rules, whether the grammar is ambiguous or not. Therefore the stronger question of asking which rules produce ambiguous sentences is recursively unsolvable as well.
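In practice, then, ambiguity can only be probed empirically. A minimal sketch, with grammar and samples invented by us: count the analyses of each sample sentence by brute force and flag those receiving more than one; the toy grammar deliberately gives two analyses to sentence (A).

```python
# Ambiguity of a c.f. grammar being undecidable in general, one can only
# test samples: count the leftmost derivations of each sample sentence.
# Grammar and samples are invented for illustration.
RULES = [("S", ["NP", "VP"]), ("NP", ["they"]), ("NP", ["AP", "N"]),
         ("NP", ["N"]), ("AP", ["flying"]), ("N", ["planes"]),
         ("VP", ["are", "VING", "NP"]),   # reading: are flying | planes
         ("VP", ["are", "NP"]),           # reading: are | flying planes
         ("VING", ["flying"])]
NT = {lhs for lhs, _ in RULES}

def count(symbols, words):
    """Number of leftmost derivations of `words` from the list `symbols`."""
    if not symbols:
        return 1 if not words else 0
    head, rest = symbols[0], symbols[1:]
    if head in NT:
        return sum(count(rhs + rest, words)
                   for lhs, rhs in RULES if lhs == head)
    return count(rest, words[1:]) if words and words[0] == head else 0

for sample in ["they are flying planes", "they are planes"]:
    print(count(["S"], sample.split()), "analysis(es):", sample)
# 2 analysis(es): they are flying planes   (flagged as ambiguous)
# 1 analysis(es): they are planes
```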
We can expect that the problem of checking an actual grammar by means other than analysing samples and verifying the structural descriptions one by one is extremely difficult.

A scheme widely adopted in the field of M.T. [5] consists of having two independent grammars G_1, G_2 for two languages L_1, L_2 and a transfer grammar T going from L_1 to L_2 (or from L_2 to L_1). A result of the theory of c.f. grammars is the following:

Theorem. Given two c.f. languages L_1 and L_2, the problem of deciding whether or not there exists a mapping T such that T(L_1) = L_2 is recursively unsolvable [26].

Of course a translation from L_1 to L_2 need not be an exact mapping between L_1 and L_2, but there may be a large sublanguage of L_2 which is not in the range of any particular grammar T constructed empirically from L_1 and L_2; conversely, some sublanguage of L_1 may not be translatable into L_2. In any case, one should expect that the problem of constructing a practically adequate T for L_1 and L_2 is extremely difficult.

These results derived from the general theory show the interest of this theory. The results so far obtained mostly concern the languages themselves; very little is known about the structural descriptions. Studies in this field could provide decisions when a choice comes between different formal systems: for example, this theory could decide which one of the systems described above is more economical according to the number of syntactic classes, or to the number of operations necessary to analyse a sentence. Such questions, if they can be answered, require further and difficult theoretical studies on these systems.

Natural languages present, to a certain extent, the features of context-free languages. An example of strings which are not context-free has been given by Bar-Hillel and Solomonoff:
N_1, N_2, ... AND N_k ARE RESPECTIVELY A_1, A_2, ..., A_k
where a relation holds between each noun N_i and the corresponding adjective A_i. These strings cannot be generated for arbitrary k by a context-free grammar.

More obvious is the inadequacy of the context-free structural descriptions. Chomsky [7, 11, 12] pointed out that no c.f. grammar can generate the correct structure for a sequence of adjectives modifying a noun. In the cases (1) and (2) above, the Noun Phrase has been given too much structure by a c.f. grammar. Very frequent too are the cases where no structure can be given at all by a c.f. grammar: the ambiguous phrase THE FEAR OF THE ENEMY cannot be given two different structures showing the two possible interpretations; the same remark applies to phrases of the type VISITING RELATIVES.

Another case of lack of information in the structural description given by a context-free grammar is the following:
(i) THE EXPERIMENT IS BOUND TO FAIL
BOUND and TO FAIL have to be connected, but nothing can tell that EXPERIMENT is the subject of TO FAIL. Once a context-free structure is given to (i), it is not very simple to give a different one to the sentence
(ii) THE EXPERIMENT IS IMPOSSIBLE TO REALIZE
where EXPERIMENT is the object of TO REALIZE. In any case, if two different structures are given to (i) and (ii), nothing will tell any more that these two sentences are very similar.

All these examples are beyond the power of context-free grammars. Adequate treatments are possible when using transformational grammars (Chomsky), but these grammars, more difficult to construct and to use, have been disregarded in the field of M.T.
on the grounds that they could not be used in a computer, which is false. The programming language COMIT uses precisely the formalism of transformations, and Matthews is working on a recognition routine which uses a generative transformational grammar [28], [29].

The interest of word-for-word translation being very limited [30], a scheme of sentence-for-sentence translation motivated the construction of grammars. Every sentence has to be given a structural description, and a transfer grammar maps input trees into output trees. Considered from a formal point of view, a transfer grammar is precisely a transformational grammar. The construction of a transfer grammar between two context-free grammars raises serious problems. Let us consider the following example of translation from English to French, taken from Klima [31]:
(a) HE DWELLED ON ITS ADVANTAGES
An almost word-for-word translation gives the French equivalent:
(a') IL A INSISTE SUR SES AVANTAGES
But let us consider the passive form (p) of the sentence (a):
(p) ITS ADVANTAGES WERE DWELLED ON BY HIM
In French, (a') has no passive form, and (p) has for its translation the sentence (a'). What is required for the translation of (p) is either a transformation of a French passive non-sentence (p'), obtained almost word for word,
(p') *SES AVANTAGES ONT ETE INSISTE SUR PAR LUI
into the sentence (a'), or a transformation of (p) into (a), made before the translation. The two solutions are equivalent from the point of view of the operations to be carried out, but the second seems more natural.

In the latter case, since the passive sentences are described by means of the active sentences and a transformation, no context-free description of the passive sentences is required any longer in the source grammar. Many other cases of the type above show that the use of a transformational source grammar will simplify the transfer grammar; moreover, Chomsky has shown that transformational grammars simplify considerably the description of languages. These are two good reasons for M.T. researchers to become interested in models which are less limited than context-free models.

* Presented at the NATO Advanced Study Institute on Automatic Translation of Languages, Venice, 15-31 July 1962.
† Presently at the Institut Blaise Pascal, C.N.R.S., Paris, France.
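The 'transform before translation' option can be sketched in a few lines. The pattern below, the pronoun table and the example are illustrative assumptions on our part, nothing like a full transformational grammar: one toy passive shape is detected and rewritten into its active form, after which the transfer is almost word for word.

```python
# A sketch of the transformation applied to (p) before translation:
# rewrite the toy passive pattern "NP1 was/were V-ed (on) by NP2" into
# the active "NP2 V-ed (on) NP1". Pattern and table are invented.
import re

PASSIVE = re.compile(
    r"^(?P<np1>.+?) (was|were) (?P<verb>\w+(ed|en)( on)?) by (?P<np2>.+)$")
SUBJECT_FORM = {"him": "he", "her": "she", "them": "they", "me": "I"}

def passive_to_active(sentence: str) -> str:
    m = PASSIVE.match(sentence)
    if m is None:
        return sentence                    # not a passive of this toy shape
    subject = SUBJECT_FORM.get(m.group("np2"), m.group("np2"))
    return f"{subject} {m.group('verb')} {m.group('np1')}"

print(passive_to_active("its advantages were dwelled on by him"))
# -> "he dwelled on its advantages": sentence (a), which then translates
#    almost word for word into (a')
```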
The grammars of the most general type we have described in the previous section can be viewed as arbitrary Turing Machines or, equivalently [32], as combinatorial systems (Semi-Thue systems), where the sentences are derived from an axiom S by means of a finite set of rewriting rules (productions) φ → ψ. The sentences are defined on the terminal vocabulary V_T as in Section 1. The strings φ, ψ are defined on a vocabulary V = V_N ∪ V_T, where V_N is the non-terminal vocabulary, which includes an initial symbol S meaning sentence. The arrow between φ and ψ is to be interpreted as 'is rewritten'.

A context-free grammar is a finite set of rewriting rules φ_i → ψ_i, where φ_i and ψ_i are strings on V such that:
φ_i is a single element of V_N;
at least one φ_i is S;
ψ_i is a finite string on V.
The language generated by a context-free grammar is called a context-free language.

Grammar: S → aSb, S → c.
This grammar can generate (recognize) all and only the sentences of the type a^n c b^n, for any n. The sentence aaacbbb is obtained by the following steps:
aSb
aaSbb
aaaSbbb
aaacbbb
These four lines represent the derivation of the sentence. The associated structural description is a tree terminating in the string aaacbbb.

The context-free grammars can be associated with restricted Turing Machines such as restricted infinite automata or, equivalently, pushdown storage automata. We give an informal description of the pushdown storage automaton (PDS automaton); for a more precise description see Chomsky [12].

The PDS automaton is composed of a control unit which has a finite set of possible internal configurations or states {S_j}, including an initial state S_0. The control unit is equipped with a reading head which scans the symbols a_i of a finite string written on successive squares of an input tape which is potentially infinite and can move, let us say, only from right to left. {a_i | 1 ≤ i ≤ p} is the set of symbols; V_I = {e} ∪ {a_i} is the input vocabulary, which includes a null element e; the input strings are defined on V_I.

The control unit is equipped with a second head which allows it to read and write on a storage tape which can move in either direction and is also potentially infinite. It writes on successive squares of the storage tape strings on a vocabulary {e} ∪ {A_k | 1 ≤ k ≤ q} which can include V_I.
The storage tape vocabulary is V_0 = {e} ∪ {A_k} ∪ {α}, where e is the null element and α a special symbol which is never printed out. The squares on which the strings are written are occupied simultaneously by both a_i (or A_k, or α) and e; either infinite side of the string can be thought of as filled with the blank symbol #.

A situation σ of the PDS automaton is a triplet σ = (a_i, S_j, A_k); if the PDS is in the situation σ, it is also in the situations where the elements a_i or A_k or both are replaced by e. There is an initial situation (a_i, S_0, α) where the input head is positioned on the leftmost square of the input string, and the storage head on the symbol α.

A computation starts in an initial situation and is directed by a finite number of instructions I: σ → (S_r, x); when the automaton returns to the initial state S_0 for the first time, x = λ. From the situation σ, where the automaton is in state S_j, the automaton switches into the state S_r and moves its input tape one square left if the first element of σ is an a_i; otherwise (for e) the tape is not moved.

If x is a string on {A_k}, it is printed on successive squares to the right of the square scanned on the storage tape, and the latter is moved ℓ(x) (the length of x) squares to the left. If x = λ, the storage tape is moved one square right, nothing is printed, and the square A_k previously scanned is replaced by the blank symbol #. If x = e, the storage tape undergoes no modification.

After an instruction I has been carried out, the automaton is in the new situation σ' whose first element is the symbol of the input string now being scanned; the second element is S_r; the third element can be either the same A_k if x = e, or the last symbol A_m of x if x is a string on {A_k}, or, if x = λ, the rightmost symbol left written on the storage tape. If in this new situation σ' a new instruction I' can be applied (i.e. there is an I' whose left member is σ'), the computation goes on; otherwise it is 'blocked'.

An input string is accepted by a PDS automaton if, starting in an initial situation, it computes until on its first return to S_0 it is in the situation (#, S_0, #): the storage tape is blank, and the first blank on the input tape is the one at the right of the input string (the latter has been completely scanned).

A set of strings on V_I accepted by a PDS automaton will be called a pushdown language. We can define context-free grammars and pushdown automata on the same universal alphabet V_U; we then have the following theorem, due to Chomsky [12] and Schützenberger [33].

Theorem 1. The pushdown languages are exactly the context-free languages.
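A present-day sketch of the construction behind Theorem 1, for the example grammar S → aSb, S → c of Section 2: the storage tape is treated as a stack, a nonterminal on top is replaced by the right side of one of its rules, a terminal on top must match the scanned input symbol, and backtracking stands in for the automaton's guesses. The encoding is ours, for illustration only.

```python
# A minimal pushdown recognizer built from the grammar S -> aSb | c:
# expand a nonterminal on top of the stack, or match a terminal against
# the input; acceptance is an empty stack at the end of the input,
# mirroring the situation (#, S_0, #) of the automaton.
GRAMMAR = {"S": [["a", "S", "b"], ["c"]]}

def accepts(string, stack=("S",), pos=0):
    if not stack:                          # storage blank: accept iff the
        return pos == len(string)          # input was completely scanned
    top, rest = stack[0], stack[1:]
    if top in GRAMMAR:                     # nondeterministic expansion
        return any(accepts(string, tuple(rhs) + rest, pos)
                   for rhs in GRAMMAR[top])
    return (pos < len(string) and string[pos] == top
            and accepts(string, rest, pos + 1))

assert accepts("aaacbbb")                  # a^3 c b^3 is in the language
assert not accepts("aacbbb")               # an unbalanced string is rejected
```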
(1) The equivalence of context-free languages and immediate constituent languages has been proven by Chomsky [8]. He proved that for any context-free grammar there exists an equivalent grammar whose rules are all of the form A → BC or A → a, where the capital letters (members of the nonterminal vocabulary) represent structures and the a's morphemes. Sakai's model is exactly the immediate constituent model: his grammar has rules of the form BC = A, and his recognition routine builds, for a given sentence, all binary trees compatible with the grammar.

(2) The equivalence of Bar-Hillel's categorical grammars and context-free grammars is proved in [22].

(3) Theorem 1 proves the equivalence of the predictive analysis and the context-free analysis. The recognition routine described in Kuno and Oettinger can be schematized by a PDS automaton of Section 2 in the following way: {a_i} is identified with the set of syntactic word classes {s_j} given by a dictionary; {A_k} is identified with the set of predictions P = {P_i}. The symbol S, meaning sentence, and π, meaning period (end of sentence), are P_i's. The set I of instructions contains:
I_1: (e, S_0, α) → (S_1, e)
I_2: (e, S_1, e) → (S_2, S)
I_3: (s_j, S_2, P_k) → (S_Pk, λ)
I_4: (e, S_Pk, e) → (S_2, x)
I_F: (s_j, S_2, π) → (S_0, λ)
The set of the states is {S_0, S_1, S_2} ∪ {S_Pk | P_k ∈ P}.

The device computes as follows. I_1 and I_2 are initialization instructions; I_2 places on the storage tape the prediction S (sentence). To I_3 and I_4 corresponds the use of the grammar rules: P_k is the rightmost prediction on the storage tape, and s_j is the syntactic class of the scanned word on the input tape. If s_j and P_k are compatible, then by an I_3 the automaton switches into the state S_Pk and erases P_k; then by an I_4 it switches back into the state S_2 and prints a string x of predictions which is a function of P_k and s_j (x may be null). I_F ends the computation: it is an I_3 where s_j is compatible with the 'bottom' prediction π (period); π is erased, and I_F leaves the automaton in the situation (#, S_0, #), where the storage tape is blank and the string is accepted.

In a situation (e, S_Pk, e) following a situation (s_j, S_2, P_k) there may be different possible strings x corresponding to a single pair (s_j, P_k); the automaton, which is non-deterministic in this case, will choose one at random. If its choices are right all along the computation, the input string will be accepted; if not, the computation will block. These conventions do not affect the class of languages accepted by the automaton: intuitively, an acceptable string may be rejected several times because of a wrong guess, but there exists a series of right guesses that will make the automaton accept this string.

The actual device gives as an output a syntactic role r_n for each s_j, where r_n provides a structural description; this does not affect the class of languages accepted by the PDS automaton. The automaton described above is precisely one which was previously used for predictive analysis, where only one (so-called 'most probable') solution (acceptance) was looked for [23]. The new scheme [17] gives all possible solutions for an input sentence. It can be considered as a monitoring program which provides inputs for the PDS automaton described above and enumerates all possible computations the automaton has to do for every input string.
If a sentence contains homographs, then the corresponding input strings are enumerated and fed successively into the automaton; the latter, instead of making a guess when in a non-deterministic situation, tries all of them; they are kept track of by the monitoring program. When a computation blocks, the storage tape (subpool) is discarded and a new computation is proposed to the automaton.
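A sketch of this monitoring scheme, with dictionary, predictions and table entries invented by us: the rightmost prediction is the top of a stack, a compatible (word class, prediction) pair erases the prediction and prints a string x of new predictions, and every computation is enumerated instead of guessed, so all solutions of an ambiguous sentence are found.

```python
# A sketch of predictive analysis with a monitoring loop. The homograph
# "flying" and the two rules for ("BE", "PRED") create non-deterministic
# situations; enumeration finds both solutions of the ambiguous sentence.
# DICTIONARY, TABLE and the prediction names are invented.
DICTIONARY = {"they": ["PRON"], "are": ["BE"],
              "flying": ["ADJ", "GER"],        # a homograph: two classes
              "planes": ["NOUN"]}
TABLE = {                                      # (class, P_k) -> choices of x
    ("PRON", "S"): [["PRED"]],
    ("BE", "PRED"): [["NP"], ["GER-PH"]],
    ("ADJ", "NP"): [["NP"]],                   # adjective keeps NP predicted
    ("GER", "GER-PH"): [["NP"]],               # gerund head predicts an object
    ("NOUN", "NP"): [[]],                      # a noun fulfils the prediction
}

def solutions(words, stack=("S",)):
    """Enumerate every computation; count those ending on a blank stack."""
    if not words:
        return 1 if not stack else 0
    if not stack:
        return 0
    return sum(solutions(words[1:], tuple(x) + stack[1:])
               for cls in DICTIONARY.get(words[0], [])     # homographs
               for x in TABLE.get((cls, stack[0]), []))    # rule choices

print(solutions("they are flying planes".split()), "solutions")   # -> 2
```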
null
null
Main paper: chomsky's context-free languages*: The grammars of the most general type we have described in the previous section can be viewed as arbitrary Turing Machines or equivalently [32] as combinatorial systems (Semi-Thue Systems) where the sentences are derived from an axiom S by the means of a finite set of rewriting rules (productions): .The sentences are defined on the terminal vocabulary V T as in Section 1. The strings ,  are defined on a vocabulary V = V N UV T , where V N is the non-terminal vocabulary which includes an initial symbol S meaning sentence.The arrow between  and  is to be interpreted as 'is rewritten'. A context-free grammar is a finite set of rewriting rules  i   i where  i and  i are strings on V such that: i is a single element of V N at least one  i is S  i is a finite string on V.The language generated by a context-free grammar is called a context-free language.Grammar:S  aSb S  cThis grammar can generate (recognize) all and only the sentences of the type noted a" c b", for any n.The sentence aaa c bbb is obtained by the following steps:S  aSb aSb aaSbb these four lines represent the derivation of the sentence.The associated structural description is the following:The context-free grammars can be associated with restricted Turing Machines such as restricted infinite automata or equivalently, pushdown storage automata.We give an informal description of the pushdown storage automaton (PDS automaton). For a more precise description see Chomsky [12] .The PDS automaton is composed of a control unit which has a finite set of possible internal configurations or states {S j } including an initial state S 0 . The control unit is equipped with a reading head which scans the symbols a i of a finite string written on successive squares of an input tape which is potentially infinite and can move, let us say, only from right to left. {a i | l  i  p} is the set of symbols; V I = eU{a i } is the input vocabulary which includes a null element e; the input strings are defined onV I .The control unit is equipped with a second head which allows it to read and write on a storage tape which can move in either direction and is also potentially infinite. It writes on successive squares of the storage tape strings on a vocabulary: eU{A k |1  k  q} which can include V I . 
The storage tape vocabulary is V 0 = eU{A k }U, where e is the null element, and  a special symbol which is never printed out.The squares on which the strings are written are occupied simultaneously by both a i (or A k or ) and e; either infinite side of the string can be thought of as filled with the blank symbol #.A situation  of the PDS automaton is a triplet  = (a i , S j , A k ); if the PDS is in the situation  it is also in the situations where the elements a i or A k or both are replaced by e.There is an initial situation (a i , S 0 , ) where the input head is positioned on the leftmost square of the input string, and the storage head on the symbol .A computation starts in an initial situation and is directed by a finite number of instructions I, (S r , x); when the automaton returns to the initial situation the first time, x = .From the situation , where the automaton is in state S j , the automaton switches into the state S r and moves its input tape one square left if the first element of  is an a i , otherwise (for e) the tape is not moved.If x is a string on {A k }, it is printed on successive squares to the right of the square scanned on the storage tape, and the latter is moved  (x) (length of x) squares to the left.If x =  the storage tape is moved one square right, nothing is printed and the square A k previously scanned is replaced by the blank symbol #.If x = e the storage tape undergoes no modification. After an instruction I has been carried out, the automaton is in the new situation ' whose first element is the symbol of the input string now being scanned, the second element is S r , the third element can be either the same A k if x = e or A m if x = A m , or if x = , the rightmost symbol of A k written on the storage tape. If in this new situation ' a new instruction I' can be applied (i.e. there is an I' whose left member is '), the computation goes on, otherwise it is 'blocked'.An input string is accepted by a PDS automaton if starting in an initial situation, it computes until on its first return to S 0 it is in the situation (#, S 0 , #) the storage tape is blank, and the first blank is the one at the right of the input string (the latter has been completely scanned).A set of strings on V I accepted by a PDS automaton will be called a push-down language. We can define context-free grammars and pushdown automata on the same universal alphabet V U ; we then have the following theorem by Chomsky [12] and Schützenberger [33] .The pushdown languages are exactly the context-free languages. equivalences of languages: (1) The equivalence of context-free languages and immediate constituent languages has been proven by Chomsky [8] . He proved that for any context-free grammar there exists a grammar whose rules are all of the form A  BC or A  a where the capital letters (members of the nonterminal vocabulary) represent structures and the a's morphemes. Sakai's model is exactly the immediate constituent model. His grammar has rules of the form BC = A, his recognition routine builds, for a given sentence, all binary trees compatible with the grammar.(2) The equivalence of Bar-Hillel's categorical grammars and context-free grammars is proved in [22] .(3) Theorem 1 proves the equivalence of the predictive analysis and the context-free analysis. 
The recognition routine described in Kuno and Oettinger can be schematized by PDS automaton of Section 2 in the following way: {a i } is identified with the set of syntactic word classes {s j } given by a dictionary; {A k } is identified with the set of predictions P = {P i }. The symbol S, meaning sentence and  meaning period (end of sentence) are P i 's. The set I of instructions contains:I 1 :(e, S 0 , a)  (S 1 , e) I 2 : (e, S 1 ,e)  (S 2 , S)I 3 : (s j , S 2 , P k ) (S Pk ,) I 4 : (e, S Pk , e)  (S 2 , x) I F : (s j , S 2 , )  (S 0 ,)The set of the states is {S 0 , S 1 , S 2 , [S Pk | P k  P]}.The device computes as follows: I 1 and I 2 are initialization instructions; I 2 places on the storage tape the prediction S (sentence).To I 3 , I 4 corresponds the use of the grammar rules; P k is the rightmost prediction on the storage tape; s j is the syntactic class of the scanned word on the input tape. If s j and P k are compatible then by an I 3 the automaton switches into the state S Pk and erases P k ; then by an I 4 it switches back into the state S 2 and prints a string x of predictions which is a function of P k and s j (x may be null).I F ends the computation, it comes after an I 3 where s j was compatible with the 'bottom' prediction  (period);  was erased, then I F applied and leaves the automaton in the situation (#, S 0 , #), where the storage tape is blank and the string accepted.In a situation (e, S Pk , e) following a situation (s j , S 2 , P k ) there may be different possible strings x corresponding to a single pair (s j , P k ); the automaton, which is non-deterministic in this case, will choose one at random; if its choices are right all along the computation, the input string will be accepted, if not the computation will block. These conventions do not affect the class of languages accepted by the automaton. Intuitively, an acceptable string may be rejected several times because of a wrong guess, but there exists a series of right guesses that will make the automaton accept this string.The actual device gives as an output a syntactic role r n for each s j , where r n provides a structural description; this does not affect the class of languages accepted by the PDS automaton.The automaton described above is precisely one which was previously used for predictive analysis, where only one (so-called 'most probable') solution (acceptance) was looked for [23] . The new scheme [17] gives all possible solutions for an input sentence. It can be considered as a monitoring program which provides inputs for the PDS automaton described above and enumerates all possible computations the automaton has to do for every input string. If a sentence contains homographs then the corresponding input strings are enumerated and fed successively into the automaton; the latter instead of making a guess when in a non-deterministic situation, tries all of them; they are kept track of by the moni-toring program; when a computation blocks, the storage tape (subpool) is discarded and a new computation is proposed to the automaton. dependency languages: We will turn to the dependency grammars as defined in Hays [20] . The linguistic conception originated by Tesnière differs from that of Immediate Constituent Analysis; here the morphemes are connected in terms of the intuitive notions of governor and dependent:The two basic principles which determine the shape of the dependency model are quoted as follows: ([20] pp. 
3, 4).(1) Isolation of word order rules from agreement rules.(2) Two occurrences (words) can be connected only if every intervening occurrence depends, directly or indirectly, on one or the other of them.(1) Corresponds to the fact that recognition routine and grammar are separated.(2) Defines both grammar and language. The grammar consists of a set of binary relations between a governor and a dependent. Two occurrences can be connected only if a certain contiguity holds: IN and GARDEN are connected only if the (direct) dependency GARDEN-THE holds; FLOWERS and ARE can be connected only if the indirect dependency IN-THE and the direct one FLOWERS-IN, hold.The dependency languages are exactly the context-free languages. This theorem has been proven by Gaifman and independently by the author. Gaifman [34] has obtained a stronger result showing that the set of the dependency trees is a proper subset of the set of the context-free trees.We give below the part of our proof which yields the following result; the dependency languages are context-free languages.We now construct a PDS automaton of the type of Section 2 which accepts the dependency languages. Like the previous machine, it will not give a structural description of the analysed string but the restriction that the device be normative has no effect on the class of accepted strings.Let V = {a i } i  0 be the vocabulary of the language. We will take as the input vocabulary of the automaton V I = VUe.Let V 0 = eU{A k }U, k  0 be the output vocabulary where the A k 's are syntactic classes.The set {I} of instructions contains I 1 : (e, s 0 , )  (S D , e) I 2 : (a i , S D , e)  (S D , A i ) I 3 : (e, S j , A j )  (S j ,) I 4 : (e,S j ,Ak)  (S j ,) I 5 : (e, S j A k )  (S k , ) I 6 : (e, S j , e)  (S D , A j ) I 7 : (e, S s , )  (S 0 ,r)The main operation of the computation is to connect a dependent to its governor; when this is done the dependent is erased from the storage. I 1 initializes the computation. I 2 is a dictionary look-up instruction; A i a syntactic class of a i is printed on the storage tape.I 3 guesses that there is a connection to be made with the A i immediately to the left of the scanned A j , A j is erased but remembered since the automaton switches into a corresponding state S j . I 4 and I 5 compare the syntactic classes A j and A k : the automaton switches into either the state S j or S k corresponding to the fact that either A j or A k is governor. For a given pair (A j , A k ) there is generally either an I 4 or an I 5 according as A j or A k is governor (in case the agreement is unambiguous). If they cannot be connected at all, the machine stops and the string is not accepted. If the next accessible A r on the storage tape is to be connected with A j or A k then an instruction: (e, S j , A r )  (S, ) or (e, S k , A r )  (S, ) can be applied (either of these instructions can be an I 4 or an I 5 ). If the next accessible A r is not to be connected with A j or A k then instructions I 6 and I 2 are applied. Before the automaton uses an instruction I 5 where it forgets A j it has to make the guess that no dependent on A j will come next from the input tape. I 6 ; the automaton transfers A j from its internal memory to the storage tape, it switches into state S D where it will use an instruction I 2 .I 7 is a type of final instruction; from the state S s where the automaton remembers the topmost governor A s , it switches into the state S 0 after the direct dependents of A s have been connected (i.e. 
erased); the situation following I 7 will be (#, S 0 , #).This non-deterministic automaton is equivalent to a recognition routine that would look for one 'most probable' solution. As in the case of the predictive analysis discussed above a monitoring program that would enumerate all possible computations would provide all possible solutions.*The instructions I 3 and I 4 (or I 5 for a governor A j (or A k )) remember this governor in the form S j (or S k ). If this governor were modified by the agreement then it could be rewritten S m with m  j (or m  k). This would not change the structure of the automaton, nor would it modify the class of accepted languages.* The author has written in COMIT a recognition routine of this type. The automaton above is a schematization of a part of the program which gave, after one scan from left to right, all the solutions compatible with the grammar.It might be convenient for the diagramming of a sentence to think of null occurrences having a syntactic class; then their position could be restricted (between consecutive A j A k , for example) and in this case too a modification of the automaton shows that the class of accepted languages is the same.In the case of I 5 we have the following indeterminacy about the guess the automaton has to make; if the guess is wrong, namely the next coming A r can be connected to A k ; two cases are possible: (1) A r can be connected to A j the governor of A k , then the computation may still follow a path which will lead to an acceptance (in this case the sentence was ambiguous), (2) A r cannot be connected to A j . The string will not be accepted since the content of the storage tape will never become blank.The cases of conjunction, double conjunction and subordinate conjunctions in the 'secondary structure' require the automaton to have states where it remembers two syntactic types. For example, in the string acb a conjunction c depends on governor b on its right, and the item a equivalent to b and preceding c is marked dependent on c. We will have the following instructions: (C 0 ) (a,SFour classes of grammars, all based on the concept of dependency, are defined by Fitialov.The author remarks that the languages described by these grammars are all phrasestructure languages in the sense of Chomsky [8] . According to Chomsky [8] , [12] phrasestructure grammars include at least context-free and context-sensitive grammars. In his paper Fitialov mentions the use of contexts, and it is not clear which type of phrase structure grammars are Fitialov's grammars.The construction of a P.D.S. automaton very similar to the one above, which is perfectly straightforward, shows that Fitialov's languages are context-free. They are probably powerful enough to describe the whole class of context-free languages, which remains to be proved.Lecerf [24] works on the principle that the dependency representation of a sentence S 1 and the immediate constituent analysis of the same sentence S 1 are both of interest since they show two different aspects of linguistics [3] .This mathematical theory deals with infinite lexicons and doubly structured strings are defined. We will impose the restriction of finiteness on the lexicon. Lecerf proved that the operation of erasing the parentheses provides the tree of the immediate constituent analysis where the dots are the nonterminal nodes. 
On the other hand, the operation which consists of erasing the brackets and dots provides the tree of the dependency analysis.We will point out some properties of this model. The language defined by right or left adjunction of an operator to a syntagma is obtained when we erase the structure markers (parentheses, brackets, dots). These adjunctions then reduce to the simple operation of concatenation. The language defined is the set of all possible combinations of words. It is the monoid C(M).Clearly what is missing is a set of rules which would tell which syntagma and which operators can be combined, but this problem is not raised.If, following the author, we admit that his model characterizes both the dependency languages and the immediate constituent languages, we have two good reasons to think of the 'G-structures' as being context-free, but no evidence at all. In this case the 'G-structures' would be redundant since Gaifman gave an algorithm which converts every dependency grammar into a particular context-free grammar.The grammar described in Yngve [6] consists of the following types of rules defined as a vocabulary The restriction of the depth hypothesis makes the language a finite state language [6] . Without depth, rules of type (1) and (2) show that the language is at least context-free.V = V N UV T (1) A  a (2) A  BC (3) A  B ... CMatthews [25] studied a special class of grammars; grammars containing rules of types (1), (2), (3) are shown to be a subclass of Matthews' 'one-way discontinuous grammars', which in turn are shown to generate context-free languages.The structural description provided by these rules are not simple trees and cannot be compared to the other structures so far described. applications of the general theory of c.f. grammar: The context-free grammars, have found applications in the field of programming languages. We extend here remarks already made by Chomsky [9] and Ginsburg and Rose [26] to the case of natural languages. The two theorems mentioned below derive from the results obtained by Bar-Hillel et al. [27] .An empirical requirement for a grammar is that of giving n structural descriptions for an n-ways ambiguous sentence. A grammar of English will have to give, for example, two analyses of the sort given below, for the sentence: (A) They are flying planes When constructing a c-f grammar for natural languages the number of rules soon becomes very large and one is no longer able to master the interrelations of the rules: rules corresponding to the analysis of new types of sentences are added without seeing exactly what are the repercussions on the analysis of the former ones. We use here an example mentioned in [17] . The grammar used at the Harvard Computation Laboratory contains the grammatical data necessary to analyse the sentence (A) in two different manners. Independently it contains the data necessary to analyse the sentence (B) in the manner shown below: (B) The facts are smoking killsThe dictionary shows that TO PLANE is also a verb and KILL also a noun. The result of the analysis of sentences (A) and (B) is three solutions for each of them (the three trees described above:T 1 , T 2 , T 3 ).Any native speaker of English will say that (A) is twice ambiguous and (B) is not ambiguous at all; the wrong remaining analyses are obtained because of the grammar in which rules have to be made more precise.Obviously T 3 has to be suppressed for (A) and T 1 and T 2 for (B). 
This situation arises very frequently, and one may raise the more general question of systematically detecting ambiguities in order to suppress the undesirable ones. A result of the general theory of c.f. grammars is the following.

Theorem. The general problem* of determining whether or not a c.f. grammar is ambiguous is recursively unsolvable [21].

The meaning of the result is the following: there is no general procedure which, given a c.f. grammar, would tell, after a systematic inspection of the rules, whether the grammar is ambiguous or not. Therefore the stronger question of asking which rules produce ambiguous sentences is recursively unsolvable as well. We can expect that the problem of checking an actual grammar by other means than analysing samples and verifying the structural descriptions one by one is extremely difficult.

A scheme widely adopted in the field of M.T. [5] consists of having two independent grammars G1, G2 for two languages L1, L2 and a transfer grammar T going from L1 to L2 (or from L2 to L1). A result of the theory of c.f. grammars is the following:

Theorem. Given two c.f. languages L1 and L2, the problem of deciding whether or not there exists a mapping T such that T(L1) = L2 is recursively unsolvable [26].

Of course a translation from L1 to L2 need not be an exact mapping between L1 and L2, but there may be a large sublanguage of L2 which is not in the range of any particular grammar T constructed empirically from L1 and L2; conversely, some sublanguage of L1 may not be translatable into L2. In any case, one should expect that the problem of constructing a practically adequate T for L1 and L2 is extremely difficult.

These results derived from the general theory show the interest of this theory. The results so far obtained mostly concern the languages themselves; very little is known about the structural descriptions. Studies in this field could provide decisions when a choice must be made between different formal systems: for example, this theory could decide which one of the systems described above is more economical according to the number of syntactic classes, or to the number of operations necessary to analyse a sentence. Such questions, if they can be answered, require further and difficult theoretical studies of these systems.

adequacy of context-free models: Many natural languages present, to a certain extent, the features of context-free languages. An example of strings which are not context-free has been given by Bar-Hillel and Solomonoff:

N1, N2, ... AND Nk ARE RESPECTIVELY A1, A2, ..., Ak

where a relation holds between each noun Ni and the corresponding adjective Ai. These strings cannot be generated for arbitrary k by a context-free grammar. More obvious is the inadequacy of the context-free structural descriptions. Chomsky [7, 11, 12] pointed out that no c.f. grammar can generate the correct structure for a sequence of adjectives modifying a noun.

* Except in a very simple case of no linguistic interest.

In the cases (1) and (2) above, the Noun Phrase has been given too much structure by a c.f. grammar. Very frequent too are the cases where no structure can be given at all by a c.f. grammar:
the ambiguous phrase THE FEAR OF THE ENEMY cannot be given two different structures showing the two possible interpretations; the same remark applies to phrases of the type VISITING RELATIVES.

Another case of lack of information in the structural description given by a context-free grammar is the following:
(i) THE EXPERIMENT IS BOUND TO FAIL
BOUND and TO FAIL have to be connected, but nothing can tell that EXPERIMENT is the subject of TO FAIL. Once a context-free structure is given to (i), it is not very simple to give a different one to the sentence:
(ii) THE EXPERIMENT IS IMPOSSIBLE TO REALIZE
where EXPERIMENT is the object of TO REALIZE. In any case, if two different structures are given to (i) and (ii), nothing will tell any more that these two sentences are very similar.

All these examples are beyond the power of context-free grammars. Adequate treatments are possible when using transformational grammars (Chomsky), but these grammars, more difficult to construct and to use, have been disregarded in the field of M.T. on the grounds that they could not be used in a computer, which is false. The programming language COMIT uses precisely the formalism of transformations. Matthews is working on a recognition routine which uses a generative transformational grammar [28], [29].

The interest of a word-for-word translation being very limited [30], a scheme of translation sentence for sentence motivated the construction of grammars. Every sentence requires to be given a structural description, and a transfer grammar maps input trees into output trees. Considered from a formal point of view, a transfer grammar is precisely a transformational grammar. The construction of a transfer grammar between two context-free grammars raises serious problems. Let us consider the following example of translation from English to French, taken from Klima [31]:
(a) HE DWELLED ON ITS ADVANTAGES
An almost word-for-word translation gives the French equivalent:
(a') IL A INSISTÉ SUR SES AVANTAGES
But let us consider the passive form (p) of the sentence (a):
(p) ITS ADVANTAGES WERE DWELLED ON BY HIM
In French (a') has no passive form, and (p) has for translation the sentence (a'). What is required for the translation of (p) is either a transformation of a French passive non-sentence (p'), obtained almost word for word:
(p') *SES AVANTAGES ONT ÉTÉ INSISTÉ SUR PAR LUI
into the sentence (a'), or a transformation of (p) into (a), made before the translation. The two solutions are equivalent from the point of view of the operations to be carried out, but the second seems more natural. In the latter case, since the passive sentences are described by means of the active sentences and a transformation, no context-free description of the passive sentences is required any longer in the source grammar. Many other cases of the type above show that the use of a transformational source grammar will simplify the transfer grammar; moreover, Chomsky has shown that transformational grammars simplify considerably the description of languages. These are two good reasons for M.T. researchers to become interested in models which are less limited than context-free models.

* Presented at the NATO Advanced Study Institute on Automatic Translation of Languages, Venice, 15-31 July 1962.
† Presently at the Institut Blaise Pascal-C.N.R.S., Paris, France.
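Viewed as a program, the second solution for the Klima example above amounts to undoing the passive transformation before the word-for-word transfer. The following is a toy sketch under invented assumptions: the glossary, the token-level pattern for the passive and the case adjustment are all crude stand-ins, not a real transfer grammar.

```python
# Hypothetical glossary for the almost word-for-word step.
GLOSSARY = {"he": "il", "dwelled": "a insisté", "on": "sur",
            "its": "ses", "advantages": "avantages"}

def depassivize(tokens):
    """Toy rule: NP1 were V-ed PRT by NP2  ->  NP2 V-ed PRT NP1."""
    if "were" in tokens and "by" in tokens:
        w, b = tokens.index("were"), tokens.index("by")
        subject = tokens[b + 1:]      # the 'by' agent becomes the subject
        obj = tokens[:w]              # the surface subject becomes the object
        return subject + tokens[w + 1:b] + obj
    return tokens

def translate(sentence):
    tokens = depassivize(sentence.lower().split())
    tokens = ["he" if t == "him" else t for t in tokens]  # crude case fix
    return " ".join(GLOSSARY.get(t, t) for t in tokens)

# translate("Its advantages were dwelled on by him")
#   -> "il a insisté sur ses avantages"
```

The point of the example is structural: once (p) is reduced to (a), the transfer step never has to know that English has a passive at all.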
1: THE majority of MT and IR projects have been primarily concerned with the construction of grammars and computer programs aiming to produce syntactic analyses for the sentences of a given natural language. In certain cases, the grammar and the recognition routine are completely amalgamated into a single program, and the grammatical information is no longer available for a program that would synthesize sentences. The interest of synthesizing sentences has been shown: synthesis from the output of a transfer grammar (Yngve [5]), or synthesis at random (Yngve [6]). This sort of grammar cannot pretend to provide models of languages, unless one admits that human beings use alternately two disjoint devices, one for reading, the other for writing, and probably two others, one for hearing, the other for speaking. One would then have to construct four types of corresponding devices for each language in order to translate and to retrieve information automatically. Rather frequent are the cases where the grammar is neutral between the programs of analysis and synthesis; these grammars, following Chomsky [7], [8], we will call generative grammars.

We shall now make more precise the concepts of grammar and language, and examine the requirements a grammar has to meet [9]. We consider the finite set V = {a_i | 0 ≤ i ≤ D}. V is called the vocabulary, the a_i's are words, and a_0 is the null word. On V the operation of concatenation defines the set C(V) of strings on V: any finite sequence of words T = a_i1 a_i2 ... a_ik, with 0 ≤ i1, i2, ..., ik ≤ D, is a string on V. C(V) is a free monoid whose generators are the a_i's.
(1) A subset L of C(V) is called a language on V: L ⊆ C(V).
(2) A string S such that S ∈ L is a sentence of the language L.
(3) A finite set of finite rules which characterize all and only the sentences of L is called a grammar of L. (Productions of a combinatorial system: Davis [32].)
(4) Two grammars are equivalent if they characterize the same language L.

This abstract and most general model is justified empirically as follows. The words (or morphemes) of a natural language L form a finite set, i.e. the vocabulary or lexicon V of the language L. Certain strings on V are clearly understood by speakers of L as sentences of L; others are clearly recognized as non-sentences; so L is a proper subset of C(V). Many facts show that natural languages have to be considered as infinite. The linguistic operation of conjunction can be repeated indefinitely, and the same holds for the embedding of relative clauses, as in the sentence: {the rat [the cat (the dog chased) killed] ate the malt}. Many other devices of nesting and embedding exist in all natural languages, and there is no linguistic motivation which would allow a limit on the number of possible recursions.

We wish to construct a grammar for a natural language L that, considered abstractly, enumerates (generates) the sentences of the language L and, associated with a recognition routine, recognizes effectively the sentences of L. The device so far described is a normative device which simply tells whether or not a string on V is a sentence of L; equivalently, it is the characteristic function of L. This minimum requirement of separating sentences from non-sentences is a main step in our construction.
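In a toy instance of these definitions the normative device is literally a characteristic function over C(V). The vocabulary and the language below are invented for illustration:

```python
from itertools import product

V = ["the", "dog", "dogs", "barks", "bark"]

def is_sentence(s):
    """Characteristic function of a tiny language L on V: the normative
    device that separates sentences from non-sentences."""
    return (len(s) == 3 and s[0] == "the"
            and ((s[1] == "dog" and s[2] == "barks")
                 or (s[1] == "dogs" and s[2] == "bark")))

# C(V) restricted to strings of length 3 has 5**3 = 125 members;
# exactly two of them are in L:
L3 = [s for s in product(V, repeat=3) if is_sentence(s)]
# -> [('the', 'dog', 'barks'), ('the', 'dogs', 'bark')]
```

The grammar of L is here just the finite set of conditions inside is_sentence; keeping track of which clauses fire for a given string already points towards a derivation, which is the by-product discussed next.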
The construction of this normative grammar requires the use of the total grammatical information inherent in L, and under these conditions it is natural to use this same information in order to give, as a by-product, a description of the organization of a sentence S1 in L. This can be done simply by keeping track of the grammar rules that have been involved in the analysis of S1. This particular ordered set of rules is called the derivation of S1. This by-product is of primary importance; it is a model for the understanding of L by its speakers (Chomsky [9], Tesnière [10]); in parallel, any MT or IR realization requires the machine to have a deep understanding of the texts to be processed. Furthermore, a normative device would not tell whether a sentence is ambiguous or not; the only way to describe the different interpretations assigned to an ambiguous sentence is to give them different descriptions.

The basic requirements for a formalized description of natural languages, almost trivial in the sense that they make practically no restrictions on the forms of grammars and languages, do not seem to have been widely recognized in the fields of MT and IR; nevertheless they are always unconsciously accepted. Chomsky [11, 8, 12] has studied a large variety of linguistic and formal constraints that one can reasonably put on the structure of grammars. The grammars so constrained range from finite-state grammars to the device described above, which can be viewed as an arbitrary Turing machine. These grammars in general meet a supplementary requirement: each derivation of a sentence has to provide a particular type of structural description, which takes the form of simple trees or, equivalently, of parenthesized expressions; both subtrees and parentheses are labelled, i.e. carry grammatical information. Certain types of grammar have been proved inadequate because of their inability to provide a structural description for the sentences they characterize (type 1 grammars in Chomsky [8]).

Many authors in the field of MT and IR start from the postulates: (1) a grammar is to be put, together with a recognition routine, into the memory of an electronic digital computer; (2) the result of the analysis of a sentence is to be given in the form of a structural description. In order to minimize computing time and memory space, further constraints were devised, aiming to obtain efficient recognition routines. In general these constraints were put on the structural description, and much more attention has been paid to giving a rigorous definition of the structural description than to the definition of a grammar rule. Discussions of this topic can be found in Hays [2], Lecerf [3] and Plath [13]. The nature of grammatical rules is never emphasized, and often the structure of the rules is not even mentioned. Nevertheless the logical priority order of the operations is the following:
- the recognition routine traces a derivation of a sentence S according to the grammar;
- the derivation of S provides a structural description of S.
The structural descriptions generally have a simple form, that of a tree terminating in the linear sequence of the words of S. These trees (or the associated parenthesized expressions) are unlabelled in [2], [3] and [13]. Yngve uses a complex tree with labelled nodes corresponding to well-defined rules of grammar. Other structures than trees could be used [10] in order to increase the amount of information displayed in the structural description.
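This priority order (derivation first, structural description as a by-product) can be made concrete. In the sketch below, a naive top-down recognizer for an invented toy grammar records the ordered list of rules it uses; that list is the derivation, and a labelled tree could be read off it directly. The grammar must not be left-recursive for this naive routine to terminate.

```python
# Toy grammar; keys are nonterminals, values are lists of right-hand sides.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["they"]],
    "VP": [["V", "NP"], ["sleep"]],
    "V":  [["see"]],
}

def derive(cat, words, i, rules):
    """Try to derive words[i:] from cat; yield (next_i, rules_used)."""
    if cat not in GRAMMAR:                     # terminal word
        if i < len(words) and words[i] == cat:
            yield i + 1, rules
        return
    for rhs in GRAMMAR[cat]:
        # record the rule, then expand its symbols left to right
        states = [(i, rules + [(cat, tuple(rhs))])]
        for sym in rhs:
            states = [nxt
                      for j, r in states
                      for nxt in derive(sym, words, j, r)]
        yield from states

def derivations(sentence):
    words = sentence.split()
    return [r for j, r in derive("S", words, 0, [])
            if j == len(words)]

# derivations("they see they")
# -> [[('S', ('NP', 'VP')), ('NP', ('they',)),
#      ('VP', ('V', 'NP')), ('V', ('see',)), ('NP', ('they',))]]
```

An ambiguous sentence would yield several such rule lists, one structural description per interpretation.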
The question of how much information is necessary in the structural description of a given sentence in order to process it automatically (translation, or storage and matching of information) has never been studied. Many authors look for a minimization of this information, which is quite unreasonable given the present status of the art; our guess is that the total amount of information available in any formalized system for a sentence S will never be sufficient for a completely mechanized processing of S, and that these minimal structural descriptions will have to be considerably enriched. In Section 6 we will show that for many sentences one tree is not sufficient to describe the relations between words.

Among the simple models which have been or are still used, we will quote:
Immediate constituent analysis models, as developed in [7], [14], [15] and [16], where two substrings B and C of a sentence S can form a larger unit A of S only if they are contiguous (A = BC).
Categorial grammars, developed by Bar-Hillel, where the categories of substrings are defined by means of the basic categories (Noun, Sentence) and of the immediate left or right environment.
Predictive analysis models ([17], [18], [19]), where the grammars are built such that the recognition routine can use a pushdown storage, which is a convenient programming tool.
Dependency and projective grammars, developed respectively by Hays [20] and Lecerf [3], which lead to the simple analysis programs their authors have described.

Appendix:
null
null
null
null
{ "paperhash": [ "davis|computability_and_unsolvability", "chomsky|three_models_for_the_description_of_language", "ginsburg|some_recursively_unsolvable_problems_in_algol-like_languages" ], "title": [ "Computability and Unsolvability", "Three models for the description of language", "Some Recursively Unsolvable Problems in ALGOL-Like Languages" ], "abstract": [ "Only for you today! Discover your favourite computability and unsolvability book right here by downloading and getting the soft file of the book. This is not your time to traditionally go to the book stores to buy a book. Here, varieties of book collections are available to download. One of them is this computability and unsolvability as your preferred book. Getting this book b on-line in this site can be realized now by visiting the link page to download. It will be easy. Why should be here?", "We investigate several conceptions of linguistic structure to determine whether or not they can provide simple and \"revealing\" grammars that generate all of the sentences of English and only these. We find that no finite-state Markov process that produces symbols with transition from state to state can serve as an English grammar. Furthermore, the particular subclass of such processes that produce n -order statistical approximations to English do not come closer, with increasing n , to matching the output of an English grammar. We formalize-the notions of \"phrase structure\" and show that this gives us a method for describing language which is essentially more powerful, though still representable as a rather elementary type of finite-state process. Nevertheless, it is successful only when limited to a small subset of simple sentences. We study the formal properties of a set of grammatical transformations that carry sentences with phrase structure into new sentences with derived phrase structure, showing that transformational grammars are processes of the same elementary type as phrase-structure grammars; that the grammar of English is materially simplified if phrase structure description is limited to a kernel of simple sentences from which all other sentences are constructed by repeated transformations; and that this view of linguistic structure gives a certain insight into the use and understanding of language.", "[ntroduct~ion In [5] the method of generation of the constituent parts of the algorithmic language ALGoI~ was abstracted. This gave rise to a family of \"ALGoL-like\" languages and their constituent parts, the latter called \"definable\" sets. A par-ticutar subclass of the definable sets, the \"sequentially definable\" sets, was then introduced, and a number of results about the definable and the sequentially definable sets proved. (For example, it was shown that the definable sets are identical to the context free phrase structure languages of Chomsky.) In [6] the effect of a number of operations on definable and sequentially definable sets was studied. Among other facts it was demonstrated that both complete sequential machines and generalized sequential machines transform definable sets to definable sets (the companion result for sequentially definable sets being false). In [1] several questions about definable sets were proved recursively unsolvable, that is, there are no algorithms for deciding the answers to these questions. The purpose of the present paper is to prove that two \"natural\" questions about definable and sequentially definable sets are recursively unsolvable. 
More precisely, we shall show that each of the following questions is recursively un-solvable. (1) Given a definable set, is it sequentially definable? (2) Given two (sequentially) definable sets L1 and L2 ~, (a) does there exist a complete sequential machine which maps L1 onto L2 (into L2)? (b) does there exist a generalized sequential machine which maps L~ onto L~ (into L2 so that the image of L~ is infinite if L~ is infinite)? The reeursive unsolvability of (2b) may be interpreted as saying that if a generalized sequential machine is a faithful model of a translating program and if definable sets are the constituent parts of all possible programming languages, then there is no mechanical procedure for deciding whether of two given programming languages there is a translation program converting one language into the other in a nontrivial way. The basic material and notation for definable and sequentially definable sets are now presented. The reader is referred to [5] for additional details as well as motivation." ], "authors": [ { "name": [ "Martin D. Davis" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "N. Chomsky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Ginsburg", "G. F. Rose" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "32008469", "17432009", "16941895" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
null
751
0.041278
null
null
null
null
null
null
null
null
58922a4e16d29b5b65b93890414c3159a21af180
45287348
null
Grammaire {I} Description Transformationnelle D{'}un Sous-Ensemble Du {F}rancais
TRANSFORMATIONAL DESCRIPTION OF A SUBSET OF FRENCH. I. COMPOSITION OF THE GRAMMAR. We assume a certain familiarity with the transformational model of linguistic description presented by Chomsky in Aspects of the Theory of Syntax (1965). We will therefore insist mainly on the points where GRAMMAIRE-I departs from that model. Let us note first of all that it contains neither a lexical component nor subcategorization rules. The base is therefore incomplete: it contains exclusively a set of phrase-structure categorization rules ("rewriting rules")
{ "name": [ "Querido, Antonio A.M." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 43
1969-09-01
0
0
null
The subcategorization of the elementary phrase categories ("pre-terminal symbols") and the attachment of lexical units are done manually. As we assigned subcategories or features to the elementary phrases in order to mark their syntactic-semantic properties, we progressively assembled a system of binary features which, in the framework of Chomsky's 1965 model, would normally be introduced by subcategorization rules. The relation of opposition between two subcategories (features) is marked by the coefficients "+" and "-". We shall first of all distinguish the features of the deep structure from the features of the derived structures. Among the features of the deep structure we draw the distinction between contextual and non-contextual features. The former express co-occurrence restrictions with respect to categories of elementary phrases (Chomsky's (1965) "strict subcategorization features") or with respect to subcategories of elementary phrases (Chomsky's "selectional features"). Thus (±hum) (human/non-human). In the case of morphological features one can no longer speak of subcategories of elementary phrases, as one can for the other types of features (cf. in GRAMMAIRE-I the inventory of features organized according to the types we have enumerated). In fact, the fundamental objective of the first phase of our research was the elaboration of a cyclic system of transformations putting into correspondence the deep structure of sentences, oriented towards the inter-

[T62] ELLIPSE. [T63] M-PRON. [T65] M-leur.
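The binary feature system described above can be pictured with a small data structure. This is a hypothetical sketch: the feature names hum and plur come from the examples in the text, but the lexical items and the selectional frame are invented.

```python
def compatible(required, features):
    """Both arguments map feature names like 'hum' to '+' or '-';
    a lexical attachment is legal only if every required coefficient
    is matched."""
    return all(features.get(name) == value
               for name, value in required.items())

# Invented lexical entries with binary feature bundles.
NOUNS = {
    "enfant": {"hum": "+", "plur": "-"},
    "table":  {"hum": "-", "plur": "-"},
}

# A verb frame selecting a [+hum] subject, in the manner of
# Chomsky's selectional features.
frame = {"hum": "+"}
subjects = [n for n, f in NOUNS.items() if compatible(frame, f)]
# -> ["enfant"]
```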
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
81002f27ae9136a2c72948632529ca3e6ee7286d
14176415
null
Automatic error-correction in natural languages
Automatic error-correction in natural language processing is based on the principle of 'elastic matching'. Text words are segmented into 'lines' with letters arranged according to a pre-determined sequence, and then matched line-by-line, shifts being applied if the numbers of lines are unequal.
{ "name": [ "Szanser, A.J." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 7
1969-09-01
2
13
null
In order to resolve the possible multiple choices produced, the method may be supplemented by another one, based on the observed repetition of words in natural texts, and also by syntactic analysis. This paper describes the above methods and gives an account of an experiment now in progress at the National Physical Laboratory.

Elastic matching

With the increased application of computers in the processing of natural languages comes the need for correcting errors introduced by human operators at the input stage. A statistical investigation [1] revealed that roughly 80 per cent of all misspelled words contain only one error, belonging to one of the following cases: a letter missing, an extra letter, a wrong letter and, finally, two adjacent letters interchanged. As such an error can occur in any position, a check by trying all possible alternatives in turn is clearly impracticable. A method which can obtain the same result but in a less tedious and time-consuming way has been worked out and experimented upon at the National Physical Laboratory, Teddington, England. This method, named 'elastic matching', was first proposed at the 1968 I.F.I.P. Congress in Edinburgh, Scotland [2].

The elastic matching of words consists basically of coding all the characters (letters) as bits in a computer word, allotting to each letter a specific position. The whole English alphabet will therefore be represented by a sequence of 26 bits, although their order, as will be shown below, may, and indeed should, differ from the usual order of letters in the alphabet. All words belonging to a complete set, which may be a list of words or a whole dictionary, are 'linearized', that is converted into segments, called 'lines', in which the letters are arranged in the agreed order; if the current letter has a position prior to the last stored, a new line must be started. Thus, if the sequence in question were the alphabet itself, the word 'interest' (for example) would be linearized as follows: 'int-er-est'. The actual sequence, by the way, has to be chosen in such a way that it would produce the longest possible lines or, in other words, the minimum number of lines for a given sample of text.

The matching is carried out not between words but between lines. All errors will then stand out immediately as one or more disagreeing bits (for example, by using the logical 'NOT-EQUIVALENT' operation). In the case of two bits a simple check will reveal whether this is the result of an accepted type of error (one wrong letter, or two adjacent letters interchanged), or the result of two separate errors, and therefore to be rejected under the limit accepted (one error per word). Examples (with the alphabet assumed to be the linearizing sequence, for the sake of clarity only): (a) an extra letter; (b) a letter missing; (c) a wrong letter; (d) two errors (unacceptable). The result (d) is unacceptable because the two disagreeing bits are formed as a result of two errors (an extra S and a missing E); in the computer check this is shown by the two outstanding bits (letters) being separated by another bit (letter).

If the numbers of lines in the two versions (misspelled and correct) are unequal, the procedure is as follows. The next line of the longer version is shifted back and matched against the result (that is, the disagreeing bits) of the previous match.
Thus, for example, MOU-ST is matched against MO-ST (an extra letter). In the case of two disagreeing bits some simple checks have again to be made to eliminate the two-error cases, and also to prevent spurious matches resulting from the self-cancellation of characters between the two successive lines of the same version. More particulars of the operation of this method can be found in a special paper [5].

The dictionary organization

The elastic matching, as was mentioned above, is applicable against any set of (correct) words, which may be, for example, a list of proper names, or any other words, even of artificial (e.g. programming) languages. It is, however, the application to natural languages, in particular English, which is the subject of this paper. There are two problems which have to be overcome or, at least, reduced to manageable proportions before this method can be applied using a complete English dictionary.

The first problem is access to the dictionary, which may contain tens or even hundreds of thousands of entries. This number, however, includes all grammatical forms of English words (fortunately, they are not so numerous as in highly inflected languages such as Russian). The dictionary look-up takes different forms depending on the way in which the dictionary is organized. The latter could have either a tree-like structure (preferably built of 'lines'), which is likely to be quicker in operation, or a list structure, in which words may be grouped by their line numbers, then by numbers of letters and finally, if the lists are still too long, by part-alphabetization (according to the accepted sequence). This structure is easier to prepare. The words to be checked against the dictionary of the list structure will be linearized, and during this process the numbers of their lines and letters will be determined. The sections of the dictionary to be used in the matching process will be those with equal numbers of lines (and letters) and those immediately below and above these numbers (depending on the error threshold accepted). The other problem is connected with the number of multiple matches likely to occur, especially for short words. Two ways of alleviating this problem are described in the next section.

The supplementary procedures

An experiment in automatic error-correction

An experiment has been carried out at the NPL on the lines described above. First of all, an optimum linearizing sequence had to be established for English texts. Several methods were used for this purpose, both statistical and purely linguistic, and the results were submitted to computer tests.
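The linearization, the line-by-line comparison and the sectioning of the list-structure dictionary can be sketched together as a program. The sequence used is the NPL sequence quoted below; the finer checks that separate one wrong letter from two independent errors, and the shift procedure for unequal line counts, are omitted here for brevity.

```python
from collections import defaultdict

SEQ = "FJVWMBPHIOQUEARLNXGSCKTDYZ"     # the sequence produced below
POS = {c: i for i, c in enumerate(SEQ)}

def linearize(word):
    """Cut a word into 'lines': maximal runs whose letters strictly
    advance along the agreed sequence, each line coded as a 26-bit mask."""
    lines, mask, last = [], 0, -1
    for c in word.upper():
        if POS[c] <= last:             # position not after the last stored:
            lines.append(mask)         # start a new line
            mask = 0
        mask |= 1 << POS[c]
        last = POS[c]
    lines.append(mask)
    return lines

def rough_match(a, b):
    """At most two disagreeing bits over all lines (equal line counts
    only; the shift procedure and adjacency checks are left out)."""
    la, lb = linearize(a), linearize(b)
    if len(la) != len(lb):
        return False
    return sum(bin(x ^ y).count("1") for x, y in zip(la, lb)) <= 2

def build_sections(dictionary):
    """List-structure dictionary: entries grouped by numbers of lines
    and of letters, as described above."""
    sections = defaultdict(list)
    for w in dictionary:
        sections[(len(linearize(w)), len(w))].append(w)
    return sections

def candidates(word, sections, threshold=1):
    """Sections with equal line/letter counts and those immediately
    below and above, per the accepted error threshold."""
    nl, nc = len(linearize(word)), len(word)
    for dl in range(-threshold, threshold + 1):
        for dc in range(-threshold, threshold + 1):
            yield from sections.get((nl + dl, nc + dc), [])
```

For instance, rough_match("BENT", "BHNT") is True (one wrong letter gives two disagreeing bits); a retrieval pass would simply run rough_match over candidates(word, sections).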
Sequences bringing lower yield had been gradually eliminated and changes were made in those remaining, in order to determine the optimum sequence by the well-known 'hill-climbing' technique. This investigation has been fully described elsewhere [3] and it has produced the following sequence: FJVWMBPHIOQUEARLNXGSCKTDYZ. Next, through lack of a proper dictionary, the general-content check procedure was used to compile lists of words occurring in selected stretches of English (parts of three articles on physics, linguistics and socio-politics, containing about 3,000 text words in all). This limit has been accepted. Several hundred distorted words (based on words in the same articles) were matched against these vocabularies. After all the corrections and adjustments, the need for which naturally occurred during the tests, had been made, the final results can be summarized as follows:

(i) The retrievals were both exact and complete, in the sense that no misspelled words (within the proper error limit) were left unretrieved and no wrong retrievals were produced;
(ii) The number of multiple equivalents increased rapidly as the lower limit of the number of letters (four) in a word was approached (in some cases up to five equivalents);
(iii) The number of multiple equivalents was generally insignificant for 'content' words (in most cases only one word was retrieved), whereas 'function' words often produced many equivalents, e.g. THER: THEY, OTHER, THEN, THEM, THERE, THEIR.

All these observations confirmed the results anticipated in previous sections. The latest stage of the experiment is being carried out at the time of writing this paper (May, 1969). The author is now able to use the English side of the Palantype-English dictionary (this will be explained below, Section 5) of about 80,000 entries. For the sake of economy in programming and machine time, only one section of the dictionary, namely the entries starting with the letter S (about 10 per cent of the whole dictionary), is being used. The linearization and organization of this section is now in progress. This will enable the author to test a more complete dictionary look-up than before, together with the general-content check and later with syntactic analysis as well.

Other applications

Apart from the general use for natural English texts, an application of the elastic matching technique has been proposed in the automatic transcription of machine-shorthand of the Palantype system. This system uses a special machine with a keyboard enabling the simultaneous striking of several keys, each 'stroke' corresponding to a phonetically-based group of consonants and vowels, roughly equivalent to a syllable. In normal operation all the characters of each stroke are printed together on a continuous paper band, shifting after each stroke. The recording is later read and transcribed by a human operator. Since the latter part of the operation is naturally much slower (about four times that of the recording), a project, now in progress at the NPL, aims at securing automatic transcription, in which the character levers, in addition to the ordinary printing action, activate electric contacts. These create impulses, which are fed into a computer and result, after a series of operations, in printing out a text as near to ordinary English as possible. One of the problems encountered in this process is caused by the flexibility of the recording convention, enabling the human operator to record phonetic combinations in more than one way.
Generally, this is provided for by inserting in the automatic Palantype-English dictionary all versions of each word that can be reasonably foreseen. In practice the unforeseen sometimes happens and the word is output untranslated (but 'transliterated' phonetically), which is at best annoying, but may even be unreadable. An analysis has shown that most of the deviations from standard versions stored in the dictionary are caused by a few convention rules, such as e.g. 'vowel elision': any unaccented vowel in a word can be omitted. Now, if the matching is done not on Palantype strokes but on their linearized versions, the elastic matching rules can easily be adjusted to include the versions produced. Incidentally, the Palantype sequence is already partly linearized, and reads: SCPTH+MFRNLYOEAUI.NLCMFRPT+SH (the "+" and "." signs have special phonetic functions). For the linearization purposes all that is needed is to exclude the repeated consonants (from the second "N" to the end); the number of lines will therefore exceed the number of 'strokes'. The relevant procedures have been fully tested on sample lists of standard and non-standard versions (containing up to 300 words) and were found satisfactory. The implementation, however, for use with the full dictionary remains to be done. It is still not clear whether it would repay to linearize and store in this form the complete dictionary of eighty-odd thousand entries, or whether it would be more practical to linearize while checking, stroke by stroke, which would be, of course, a much slower procedure. At the present time it does not look likely that either solution would lead to standardization being possible in 'real time', but there remains the possibility of an 'errata' sheet being produced almost immediately after the normal output. More particulars about this application can be found in the paper [4]. Another application, now under consideration, is the retrieval of misspelled proper names from lists used in a fact-retrieval project, which is also in progress at the NPL.
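The 'hill-climbing' search for the optimum linearizing sequence, mentioned in the experiment section above, can also be sketched. The swap neighbourhood and the step count below are guesses; the paper does not spell the moves out.

```python
import random

def total_lines(seq, sample_words):
    """Objective to minimize: total number of lines over a text sample.
    Words are assumed to be purely alphabetic."""
    pos = {c: i for i, c in enumerate(seq)}
    total = 0
    for w in sample_words:
        last, lines = -1, 1
        for c in w.upper():
            if pos[c] <= last:        # letter does not advance: new line
                lines += 1
            last = pos[c]
        total += lines
    return total

def hill_climb(sample_words, steps=5000):
    seq = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    best = total_lines(seq, sample_words)
    for _ in range(steps):
        i, j = random.sample(range(26), 2)     # try swapping two letters
        seq[i], seq[j] = seq[j], seq[i]
        cost = total_lines(seq, sample_words)
        if cost <= best:
            best = cost                        # keep the improvement
        else:
            seq[i], seq[j] = seq[j], seq[i]    # undo the swap
    return "".join(seq), best
```

Run over a large enough text sample, a search of this kind is what produced the sequence FJVWMBPHIOQUEARLNXGSCKTDYZ quoted above.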
null
null
One possibility of choosing between the multiple equivalents produced by dictionary look-up is to select those which are repeated throughout the article or speech in question. For this purpose a procedure called the 'general-content check' has been devised. As the text is processed, each different word satisfying certain conditions is stored. Then all multiple results from dictionary look-up are compared with the contents of this store (which may also be organized into sections) and words found there are given preference over others. The idea behind this is, of course, that words tend to be repeated by one writer or speaker.

The size of the sample processed for the general-content check must not be too small or too large. The optimum size should be determined experimentally, but one may risk the guess that perhaps 1-2 thousand (running) text words are a practical amount. Further, there is no need to store all the different words. Ideally, these should be the so-called 'content' words, such as nouns, verbs, adjectives and adverbs, whereas the remaining 'function' words (prepositions, conjunctions, etc.) should be left aside, as not being content-typical. The selection can easily be done in the storing process if dictionary entries are suitably marked. Also, if one grammatical form of a word is stored, there is no need for storing others, so that the general-content vocabulary may assume the character of a stem-word list. This, again, can conveniently be arranged both in storing and in matching.

Another possibility of making a choice between multiple equivalents is syntactic analysis. This is especially promising because, if one considers a typical lexical set of common words, one must notice that long words (which give, as a rule, better results in elastic matching) usually belong to the 'content' words, whereas the 'function' words, which are specially amenable to syntactic analysis, are normally short and would therefore either produce more multiple choices or, if of less than four letters, escape the elastic matching altogether. In this way the two methods are largely complementary. More of syntactic analysis in error-correction will be said below. Neither of the two supplementary methods mentioned above is applicable where elastic matching is used for non-textual material (lists of names, etc.).
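A sketch of the general-content check as a program. The stemming rule and the content-word test below are crude placeholders for the dictionary markings described above, not the NPL implementation.

```python
def crude_stem(word):
    """Placeholder stemmer so that one stored form covers its other
    grammatical forms."""
    for suffix in ("ING", "ED", "ES", "S"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

class GeneralContentStore:
    def __init__(self):
        self.stems = set()

    def observe(self, word, is_content_word):
        """Store content words only; function words are left aside."""
        if is_content_word:
            self.stems.add(crude_stem(word.upper()))

    def choose(self, equivalents):
        """Prefer equivalents whose stem was already used by the writer;
        fall back to the full list if none was."""
        seen = [w for w in equivalents
                if crude_stem(w.upper()) in self.stems]
        return seen or equivalents
```

A driver would call observe() on each accepted word as the text is processed, then choose() whenever the elastic matching returns several equivalents.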
null
null
null
null
null
{ "paperhash": [ "damerau|a_technique_for_computer_detection_and_correction_of_spelling_errors" ], "title": [ "A technique for computer detection and correction of spelling errors" ], "abstract": [ "The method described assumes that a word which cannot be found in a dictionary has at most one error, which might be a wrong, missing or extra letter or a single transposition. The unidentified input word is compared to the dictionary again, testing each time to see if the words match—assuming one of these errors occurred. During a test run on garbled text, correct identifications were made for over 95 percent of these error types." ], "authors": [ { "name": [ "F. J. Damerau" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "7713345" ], "intents": [ [] ], "isInfluential": [ false ] }
null
665
0.019549
null
null
null
null
null
null
null
null
2f6bd8eaad5de8b869302566d97a7adf35df68fd
152179197
null
On the Problems of Co-Textual Analysis of Texts
MY paper will deal with the theoretical and practical questions of the co-textual analysis of texts. The theoretical frame is contained by the chapters /bird/ DEF ST ISF FIELD BT-LOG -WH NT-LOG -PT -CON COL-LOG EC ASCR ASCT ASCF tenger /sea/ DEF ST ISF FIELD BT-LOG -WH NT-LOG -PT -CON COL-LOG -PT ASCR ASCT ASCF /We shall deal with --separately later./ 'poultry' 'migratory bird',... 'animals' ' vert ebrat e ' 'living being' ' singing-birds', ' birds-of-prey' 'beak'. 'wing' 'migratory birds' 'mammals', ,reptilia' , nest'_,_' air_' z__' t__ree_' A 'water' 'flying' song ' ' chirping', ' shrieking' 'ocean' 'seaside', 'sea level' ' still waters' 'Earth' 'North Sea','East Sea' 'bay' ,sweet water' t 'salt water' 'lake', 'tam' 'land' 'coast', 'island', 'harbour', ,_sh_iRL__'_b_uoxl._i~__i~_h_tho_use', ' infinitude' 'waves', 'storm' 'gulls' DEF --that contains SF and CAT We did not meet TR and SN in these examples, though in certain cases they may be necessary also in non technical texts; E.g. at foreign words and geographical names related to the given language archipelagus /archipelago/
{ "name": [ "Petofi, Janos S." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 50
1969-09-01
0
4
null
In my opinion, the following means are necessary for the co-textual analysis of texts:
1. a thesaurus including different sectors;
2. a rule-system working on sentences: a/ phonological and morphonological rules, b/ syntactic rules, c/ semantic rules for the linguistic semantic interpretation and for the logical semantic interpretation;
3. a system of the syntactic-semantic rules of the basic composition units;
4. a rule-system of an abstracting process;
5. a rule-system of a process that is able to establish the thematic connections of a 'text' consisting of abstracts.

In this chapter I want to deal only with the questions of the structure of the thesaurus in detail. The work of all the rule-systems is based on the below-sketched thesaurus structure. The base of the linguistic analysis has to be a thesaurus that unites the structure of the thesaurus made for the purpose of documentation and that of the lexicon having developed during the general linguistic investigations. Such a thesaurus consists of two main parts: the sector of definitions and that of classifications. The headwords of the documentational thesaurus, combining the structural characteristics of the more important thesauri, contain the following constituents:
5. The information FIELD points to the groups and fields into which the notion represented by the given word is classified. The group and field structure has to unite the virtues of the thematic classifications of the linguistic thesauri /Roget, Dornseiff/, the documentation thesauri and the illustrated dictionaries. But its realisation can be built only on a classification-theoretical basis.
6. For reaching a satisfactory or suitable degree of the division of the informations BT, NT and COL we have to analyse them further.
7. When defining the informations ASC we, of course, think only of the minimal informations that can be defined with a relatively big probability. We wanted to mark the indefiniteness of the definition by separating it from the others with a dotted line.

We can provide the classifications with identificators, and we can order the identificators to the thesaurus entries. Thus the structure of a thesaurus-unit is as follows: L-DEF /lexical definition/, T-DEF /thesauristic definition/, IDENT /identificators/.

2.2 As I have already mentioned, I do not wish to deal here with the other means of the co-textual text analysis. We can feel the character of their structure at the presentation of the way of analysing. Their real elaboration can be realized only on the basis of a profound analysis of the mutual influences of all the means.

3. I often see these coralline-beaked birds hovering above the water at our reach of the Danube as well. 4.1 Sometimes they dart up high.

Birds above the sea

There are people standing on the bridge of the Danube, at the rail, adults as well as children. They are looking at the gulls that are swinging above the river and flying down to the water from time to time. I often see these coralline-beaked birds hovering above the water at our reach of the Danube as well. It was a voice that I hear in autumn when a group of painfully crying little birds flies on my birches. All i-s, all i-s, but such sad i-s that one's heart begins to ache. The group of the crying birdies was flying over the ship. I was looking after them amazed. Tiny birds above the sea in the North? Would they be migratory birds? I know, the swallow, the yellow-bird, the bee-eater, the nightingale all shoulder the journey when their instinct calls them. But perhaps these are only moving birds.
And perhaps it is the Swedish insular world that explains why I meet them here. Would they move from island to island? And do they dare to shoulder the journey above the angrily waving sea because the islands are not very far from each other? The sailors throw out scrapings to the gulls and they wolf them eagerly, nearly fighting. But who feeds these tiny birds of the sea?

During the substituting it can happen that there is a word in the text which is missing from our thesaurus. The lexical entry for it has to be made and inserted into the thesaurus. Parallel with the substitution we can make the list of the 'word forms' of the analysed text. It seems necessary to mark in the surface structure the constituents of the 'topic-comment' relation. These marks have to be preserved in the deep structure as well. 4. The discovery of the 'deep structure' belonging to the given surface structure -- incomplete in many cases. The deep structure is demonstrated by Figure 2. The steps from 3 to 6 happen on the basis of the lexical /thesauristic/ informations. The 'conditions' informations /CO1/ of the substituted lexical units mean a certain prediction referring to the analysis both inside and outside the sentences. For example, the compulsory complements of the verbs make it possible to look algorithmically for the compulsory, the possible and the non-complement-like adverbs. The logical semantic interpretation of the deep structure makes it possible on the one hand to establish the synonymity of sentences, and on the other to discover the net of thematic connections of the text.

The compiling of special text-thesauri: After finishing the analysis of the sentences we compile different kinds of special text-thesauri. It is at this point that the whole interpretation of the linear patterning of the text takes place on the level of sentences. I have dealt with its problems in detail at another place. Here I should like to mention only the things that are necessary for the further investigation of the analysing process. First of all we make the index of the word-forms of the whole text. After each word-form this index gives the numbers of the sentences in which the given word-form occurs. /A mechanical sketch of such an index is given below./ We shall neglect the complete presentation of this index; we shall just give the list of the roots of nouns, verbs and adjectives that occur more than once. /Considering the shortness of the text, a word occurring twice can be relevant as well./ The underlined numbers in the list mark the 'implicit' occurrences of the given words. /We speak about implicit occurrences when the given word is represented by pronouns, verbal endings or demonstrative pronouns. Their identification with the proper words has to be done already at the semantic interpretation of the sentences -- and has to be marked by a special code in the continuously compiled text-dictionary./
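The word-form index just described -- for every word-form, the numbers of the sentences in which it occurs -- can be sketched mechanically as follows. This is a minimal sketch under assumptions of my own (simple lower-cased tokenisation, illustrative sentences); the 'implicit' occurrences carried by pronouns or verbal endings would require the semantic interpretation step and are not modelled here.

import re
from collections import defaultdict

def word_form_index(sentences):
    # Map each word-form to the ordered list of sentence numbers containing it.
    index = defaultdict(list)
    for number, sentence in enumerate(sentences, start=1):
        for form in re.findall(r"[a-z]+", sentence.lower()):
            if not index[form] or index[form][-1] != number:
                index[form].append(number)
    return index

sentences = [
    "There are people standing on the bridge of the Danube.",
    "They are looking at the gulls.",
    "But I have seen gulls above the sea, too.",
]
index = word_form_index(sentences)
print(index["gulls"])                                                 # -> [2, 3]
print({form: nums for form, nums in index.items() if len(nums) > 1})  # forms occurring more than once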
On the basis of this list -- especially in the case of shorter texts -- we can also collect the list of the logical relations in which the most frequently occurring nouns take place. The list of the pronouns, possessive pronouns and nouns with possessive personal suffixes, reflecting the 'communicational net' of the text, is an important analysing device. Let us see for example the list of the words referring to the first person. The notation used there is the following: Y is a constituent ordered immediately under X /if Y is the dominant sentence itself, we never write it out/; X/Z/: Z is an information that defines the character of the constituent X more precisely -- if it is a number, it stands for the communication unit that contains X as its constituent /after a semicolon we give the concrete lexical unit that keeps the communication units in question together/; X :: Y: Y is the definition of X; X = Y: X is the repetition of the constituent Y; X : Y: the missing constituent X is identical to Y.

We mean the basic units of the compositional structure in the following way. We shall call basic composition unit /or composition unit of the first degree/ the structure unit that forms one thematic unit and that comes to existence immediately from the communication units. /This always has to consist of more than one communication unit. The 'orphan communication unit' inserted between the composition units forms a composition unit of the zero degree./ Generally: we shall call composition unit of the n-th degree the structure unit that also forms one thematic unit and that contains a composition unit of the /n-1/-th degree as well.

Both the simple and the complex blocks of the communication units and the continuous communication chains already mark some kind of compositional dissection. But the discovery of the compositional structure demands a detailed thematic analysis. In the thematic analysis the first fixed points are given by the 'referential elements'. When the referential elements hold only a few communication units together, this mostly means a smallest thematic unit, too. Inside the longer simple blocks, and in the parts of the text outside the simple and complex blocks, we rely on the discovery of the thesauristic connections among the lexical units. Then we write out the surface representations one by one according to the formerly reduced deep structures. As the result of this step we shall get the following 'reduced text'. /Here we illustrate this process only with the composition units 1-11./

1. There are people standing on the bridge of the Danube. They are looking at the gulls. 2. I often see these birds at our reach of the Danube as well. Sometimes they dart up high. 3. But I have seen gulls above the sea, too. 4. Then the weather had already turned winter-like. The birches of the island shone yellow. The North Sea dashed against its granite coasts with anger. 5. My ship had started from a Finnish harbour and was leaving for Stockholm. 6. Wherever we passed, there were often appearing some granite islands. There were pines and birches on them. 7. There were lonely lighthouses towering on some of them. 8. The wind was wuthering on the deck. The waves threateningly dashed against the side of the ship. 9. I stayed out on the deck for a time. 10. I sent farewell to the Finnish coasts. I had such a homely feeling there. 11. I have never seen Italy.
I have always been drawn to the North by my desire.

Before beginning the second step of making the abstract we have to examine the orphan blocks consisting of more than one communication unit. We have to decide whether they represent one or more composition units. This is relatively easy in cases like that of the composition unit 8. Its two communication units are connected in their content by the explicit thesauristic connection between the 'deck' and the 'side of the ship' /both are parts of the 'ship'/. /We can find an implicit connection as well: "the wind was wuthering", "the waves threateningly dashed"; the question is to what extent this can be made explicit./ The communication units of the composition unit 11 are connected in their form by the pair of adverbs never -- always. The composition unit 4 raises more problems already. We can establish from the 'birches of the island' -- on the basis of the later occurrences of the 'island' and the 'birches' /see the composition unit 6/ -- that it refers to the birches of sea islands. The 'granite island' ... Considering the paraphrase possibilities of /1/ and /2/ and the tenses occurring in the text, we can generate the following sentence from these: 'I said farewell to the homely Finnish coasts.', or, holding to the structure of the text: 'I sent farewell towards the homely Finnish coasts.' When we generate, of course, it is not necessary to condense all the informations -- hidden in the logical representation -- into one sentence. /A mechanical sketch of this reduction is given after this passage./

After the generating or the choice of the sentences we get the following 'abstract-text': 1. People on the bridge of the Danube are looking at the gulls. 2. I often see these birds at our reach of the Danube as well. 3. But I have seen gulls above the sea, too. 4. Then the weather had already turned winter-like. 5. My ship passed from a Finnish harbour towards Stockholm. 6. Wherever we passed, there were often appearing some granite islands. 7. There were lonely lighthouses towering on some of them. 8. The wind was wuthering on the deck. 9. I stayed out on the deck for a time. 10. I sent farewell to the homely Finnish coasts. 11. I have always been drawn to the North by my desire. 12. Gulls joined the track of the ship. 13. If they had not accompanied me, that severe and stormy sea mournfulness would have been unbearably depressing. 14. How good it is that they are crying: "Our country is the sea." 15. Only theirs? 16. Suddenly a plaintive bird-voice struck my ears. 17. The group of the crying birdies was flying over the ship. 18. Maybe these are not migratory, just moving birds. 19. Do they dare to shoulder the journey above the angrily waving sea because the islands are not very far from each other? 20. The sailors throw out scrapings to the gulls. 21. But who feeds these tiny birds of the sea? 22. This recklessness that shoulders the journey above the sea is encouraging for one who has sorrow inside like me, because I am carrying a dear deceased in myself, and sometimes something cries in me just as those tiny dear birds cried above my head.

The informations condensed into the abstract of the literary text are relevant in a different way than those of the scientific text. In this abstract every composition unit has to be represented by a communication unit: ... 2. Does the sea belong only to the gulls? /15/ 3. A group of the crying birdies was flying over the ship. /16-19/ 4. The sailors feed the gulls, but who feeds the crying birdies? /20-21/ 5. This recklessness that shoulders the journey above the sea is encouraging. /22/
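The reduction just illustrated -- each composition unit represented by a single communication unit -- can be given a mechanical skeleton. The sketch below is an editorial stand-in built on assumptions: the composition units arrive as ready-made lists of sentence numbers, and the representative sentence is picked by a placeholder scoring function, whereas the paper selects it through thesauristic and logical analysis.

def make_abstract(sentences, composition_units, score):
    # Keep exactly one representative communication unit per composition unit.
    abstract = []
    for unit in composition_units:
        best = max(unit, key=lambda n: score(sentences[n - 1]))
        abstract.append(sentences[best - 1])
    return abstract

sentences = [
    "There are people standing on the bridge of the Danube.",
    "They are looking at the gulls.",
    "But I have seen gulls above the sea, too.",
]
composition_units = [[1, 2], [3]]
# Trivial placeholder score: prefer the longer sentence.
print(make_abstract(sentences, composition_units, score=len))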
In the hierarchical structure we cannot establish levels equally valid for every text, but we can define them algorithmically. This process has parts that can be done automatically already now, and theoretically -- I suppose -- all its parts can be automatized. But the practical realisation requires the solution of different problems that are among the key problems of linguistics and documentation. My intention was first of all to show how these are connected even in the analysis of the simplest text. Finally I should like to point again to some problems of basic importance.

Before the compilation of the thesaurus we have to select the so-called 'notion-words'. Mostly these 'notion-words' take their place in the sector of the thesauristic definitions. We also have to elaborate the 'semantic basic language' by which the words of the given language can be semantically defined, and we have to define the system of the relations to be used in the semantic definitions. /The 'notion-words' are words; the elements of the 'semantic basic language' are of feature character./ The thesaurus has to provide the interrelations among the different 'notion-words' and the different 'lexical units'. These interrelations are of one-many character. That is: several lexical units /LDEF/ may belong to one 'notion-word' /TDEF/. The relation 'logical semantic representation' -- 'linguistic semantic representation' is an analogous equivalent of the relation 'notion-word' -- 'lexical unit'. The 'logical semantic representation' /LOSR/ represents 'logical connections' among 'notion-words', while the 'linguistic semantic representation' /LISR/ represents 'verbal connections' among 'lexical units'. /In fact, LISR is a deep structure that demonstrates the semantic character of the constituents, too./ In connection with these we have to define the systems of both the logical and the linguistic connections, the way of their representation, the correspondence of these two systems of connections, and the way of passing over from one to the other. The interrelations here are interrelations of sets and they are also of one-many character. /That is: to a given set of elementary logical relations several different text-structures can belong. These text-structures may each contain only as many independent deep structures as the number of the elementary logical relations./ The efficacy of the analysing system depends first of all on the determinedness of the elements and relations of the logical and the linguistic system and of the rules of the matching between the two systems. We have to mention that the relation of the phonetic and the phonological representation is also analogous to the one described above, though this analogy contains an oblique symmetry. This is a natural consequence of the continuous and one-way one-many relation passing from the logical representation to the phonetic one. From the point of view of the relation of the linguistic deep and surface structure, the structure of the syntactic informations and the condition informations of the LDEF is of primary importance. It is significant to approach it from the transformations.
/That is, we have to examine to what extent the concrete lexical units occurring in a given deep structure determine the character of the transformations to be used./ It is only a solution of the above-sketched problems, worked out on a theoretical basis forming one coherent system, that makes possible the solution of such questions as: the automatic realization of the transitions from the surface structure to the incomplete deep structure, from the incomplete to the complete deep structure, and from the deep structure to the logical representation; the automatic discovery of the thematic connections built on the TDEFs; the automatic establishment of the elements of the deep structure that are irrelevant in the given connection; and the automatic realization of the reduction building on the irrelevant elements. The text analysed in my paper was a so-called 'simple narrative text'. When listing the problems we must not ignore the fact that the different kinds of texts will enlarge the above-enumerated list of the basic problems with their special problems as well.
null
null
Titled Introduction and Conclusion. My aim here is to show that the automatization of text analysis requires the accumulated experiences of general linguistics and documentation theory to be summed up in one coherent theory. Linguistics has to examine the problems of documentational thesauri, abstracting and indexing from its own aspect, and has to insert these means and methods of documentation into the system of text analysis. The practical questions, certain concrete phases of the co-textual analysis of texts, are dealt with in chapters 2 and 3. In connection with certain steps we hope our demonstrative examples will contain solutions that will necessarily become simpler further on. Our aim was, first of all, the presentation of the complete process on a special type of text. Not all the parts of this complete process can be automatized yet. But the presentation will always happen with respect to the automatization. The next step should be the setting up of an analysing system that is built on the thesauristic elaboration of a relatively small material. Then we have to solve such special problems of certain phases of the analysis as we could not even mention in this flow-chart-like survey. Beside and after the theoretical analysis, only this can be the test of the usefulness of the system.

1.1 Before I analyse the problems of the technical means and a method for the text analysis, I should like to touch on a more general question. I wish to give a short sketch of the connection between the method to be presented and the problems of the modern theory of grammar and the theory of documentation. To see the connections with the theory of grammar more clearly, let us start from Figure 1. Every message is an element of a net of connections /context/, determined by time, geographical place, cultural environment etc. The linguistic analysis can set as its only aim, with the demand of completeness, the discovery of the inner /co-textual/ linguistic structure of the text. /It can draw near to the contextual connections only to an extent that is made possible by the structure of the dictionary used in the analysis./ The co-textual analysis -- and the immediate constituent and sentence analysis that are organic parts of it -- can be done in different ways. Figure 1 shows the components of Chomsky's generative theory. Chomsky's generative process /G/ starts from the 'base component' /B/. The deep structure of the sentence /DS/ comes to existence from the sentence-basis by filling it up with lexical elements in the 'lexicon' /L/. The surface structure /SS/ comes to existence by using different kinds of transformations in the 'transformational component' /T/. /The situating of the lexicon and the transformational component in Figure 1 is meant to demonstrate that we may need to turn to the lexicon even while dispatching the transformations./ The proper interpreting components /IDS and ISS/ provide the deep and surface structure with semantic and phonetic interpretations. These interpretations, to a certain extent, relate to the extralinguistic reality already. The phonetic interpretation contains, beside the phonological interpretation of the surface structure, the basic elements necessary for the real pronouncing of the sentence as well /PHTR = phonetic representation/.
/The elements originating in the speaker's subjective interpretation either settle on or colour this./ The semantic interpretation -- beside establishing the semantic character of the immediate constituents -- tends towards the discovery of logical connections independent of language, hidden behind the given verbal representation /LOSR = logical semantic representation versus LISR = linguistic semantic representation/.

A part of the investigations included in the generative linguistic theory, like other examinations of other kinds, raises the possibility of generating sentences in another way, too. This generative process /G'/ starts from the universal logical construction /LOSR/. /The universal logical construction is a net of connections containing 'notion-words'./ We have to order a linguistic structure /DS/ -- reflecting the characteristics of the given language -- to this construction, and make a concrete sentence /SS/ of it. The way of the sentence analysis /A/ is of the opposite direction to both. At the same time, our sketched analysis is built on the theoretical basis which has developed during the investigations concerning the problems of the generative processes /G and G'/ summarized above. But, as the text analysis goes beyond the boundaries of the sentence, we have to widen the sentence-centered theoretical basis as well.

When analysing the notion 'text-structure' we can make a difference between the linguistic and the sound-textural components of the text. Both can be linearly and hierarchically patterned. The 'linear patterning' is a net of the recurrences of elements that interweaves the whole text. The 'hierarchical patterning' means the way the text as a whole is built up from the basic units of the structure through the levels of composition units of different complexity. Of course there are texts where the sound-textural component is the organizer of the structure, but as it is not my intention to deal with this now, I shall ignore their problems at present. A great part of the essays dealing with text analysis considers making lists of the structural elements of similar construction. /This mostly remains within sentence boundaries, so the use of a sentence-centered theory is enough./ These lists are undoubtedly essential characteristics of a text, but they reflect only one aspect of the text structure. This is what I called 'linear patterning'. The other -- and, from a certain point of view, more important -- aspect is the 'hierarchical patterning' of the text. Its analysis raises many more problems. The problems are different according to the types of texts. Even if we consider only the most homogeneous texts -- the scientific didactic prose, or the prose told only in the third person and not containing indirect speech -- even then we meet the following basic problem: grammatical /syntactic, semantic/ connections can be discovered only among the sentences constructing the so-called 'paragraphs'. /At the same time, no one has yet established the types of these connections, neither has anyone described their rules as a system./ Among text-units larger than 'paragraphs' only such connections can be shown as are carried by the 'content structure'. Thus, beside the examination of the grammatical construction of the 'paragraphs', we also have to look for the means that help us discover the connections of this kind. In this field we can get the greatest help from documentation theory.
During the last ten years the documentalists have worked out means and methods that will probably prove useful in the analysis of non-scientific texts as well. I should like to mention the thesaurus as the most significant of these means. The thesaurus -- as is known -- is a notion-dictionary serving for the normalization of the indexing and searching language, which provides the different connections among the notions. It is this feature that, I think, makes this means very useful to linguistics for its own purposes, too. Of the methods, those of abstracting and automatic indexing may become significant in our analysis. In text analysis there is a 'great obstacle': the texts are too extensive. For the sake of making the analysis easier and the hierarchical structure clear-cut, it is both useful and necessary to replace the larger structure-units by their abstracts. When speaking of abstracting -- though abstracts made by statistical methods may be of good help temporarily -- I do not think of statistical abstracts alone. In broad outline, these aspects of documentation theory constitute the wider frame in which certain questions of the following analysing method can be examined.
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.006015
null
null
null
null
null
null
null
null
2031b19437187acf9a5efee57b359b363dd3727a
13510585
null
Structure, Effectiveness, and Uses of the Citation Identifier
A computer program for automatic identification of "full-form" case citations in legal literature (e.g., Rutherford v. Geddes, 4 Wall. 220, 18 L. Ed. 343; Southland Industries, Incorporated v.
{ "name": [ "Borkowski, Casimir" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 46
1969-09-01
22
3
null
null
null
null
null
The Citation Identifier operates rather rapidly. In a recent test run, the total time required to process some 400,000 running words of text was approximately fifteen and a half minutes. This speed could be further increased by suitable changes in the computer program. An extension of the Citation Identifier to reduced-form citations (e.g., "The Geddes decision," "the Southland Industries case") is now in preparation.

(A) Words and phrases such as: "affirmed", "ante", "at page", "certiorari denied", "certiorari granted", "Docket", "infra", "super", "supra", etc. (B) Abbreviations such as: "aff'd", "A.L.R.", "app.", "Atl.", "A. 2d", "Cranch.", "Cir.", "C. C.", "F. Supp.", etc., and the names of states referred to in 6. above.
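To give a rough sense of how full-form citations of the shape "X v. Y, volume reporter page" can be matched, a minimal regular-expression sketch follows. It is an editorial illustration, not Borkowski's actual program: the reporter list is a tiny assumed subset of the abbreviations in (B) above, and the party-name pattern is deliberately naive (it can also swallow capitalised words preceding the first party name, which the real system's word and abbreviation lists help to prevent).

import re

# Illustrative subset of reporter abbreviations, taken from list (B) above.
REPORTERS = r"(?:Wall\.|L\. Ed\.|Atl\.|A\. 2d|Cranch\.|F\. Supp\.)"

CITATION = re.compile(
    r"[A-Z][\w.'&\- ]+? v\. [A-Z][\w.'&\- ]+?"   # party names (naive approximation)
    r"(?:, \d+ " + REPORTERS + r" \d+)+"         # one or more volume/reporter/page groups
)

text = "Rutherford v. Geddes, 4 Wall. 220, 18 L. Ed. 343, settled the point."
for match in CITATION.finditer(text):
    print(match.group(0))   # -> Rutherford v. Geddes, 4 Wall. 220, 18 L. Ed. 343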
null
null
null
null
{ "paperhash": [ "borkowski|an_experimental_system_for_automatic_recognition_of_personal_titles_and_personal_names_in_newspaper_texts", "borkowski|an_experimental_system_for_automatic_identification_of_personal_names_and_personal_titles_in_newspaper_texts" ], "title": [ "An Experimental System for Automatic Recognition of Personal Titles and Personal Names in Newspaper Texts", "An experimental system for automatic identification of personal names and personal titles in newspaper texts" ], "abstract": [ "This paper (i) describes some of the main problems involved in automatic recognition of personal titles and names in newspaper texts, (2) outlines some rules of an algorithm designed to perform this task, (3) presents statistics concerning the algorithm's accuracy and exhaustiveness obtained in manual application of the algorithm to texts, (4) discusses and interprets some of the results, and (5) suggests some applications for computer programs capable of recognizing personal titles and names.", "Natural language seems to contain various special‐purpose subsystems, e.g., personal titles, personal names, dates, street addresses, place names—each with its own structure which relative to the total structure of language is rather simple. An ability to identify automatically words and word strings belonging to various special‐purpose linguistic subsystems (akin to some thesaurus classes) may prove to be very useful since they play an important role in the making of indexes and in various systems for extracting and distributing information. This article describes some of the main problems involved in automatic identification in newspaper texts of words and word strings belonging to two important linguistic subsystems, viz., personal titles and names; lists some of the major rules of an algorithm designed to perform this task; presents statistics concerning the algorithm's accuracy and exhaustiveness obtained in manual application of the algorithm to texts; and suggests some applications for computer programs capable of recognizing personal titles and names. The results obtained indicate that an automatic system capable of accurate and exhaustive identification of personal titles and names in texts requires recognition procedures which are rather complex. It is therefore suggested that along with researching and developing methods for high‐quality automatic classification of words in texts, it may be advisable to set up efficient procedures for manual classification and tagging of words in texts, and automatic extraction of data from texts which were recognized either manually or automatically. Such action seems appropriate since automatic extraction of information from manually recognized texts would probably constitute a valuable service, and, when automatic procedures for identifying dates, personal names, personal titles, trade names, company names, chemical formulas, numbers and measure words, and so forth become competitive with manual ones, the data‐processing profession will be already in possession of operational computer programs capable of extracting data from recognized exts." ], "authors": [ { "name": [ "C. Borkowski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Borkowski" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "2810870", "62216436" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
- Problem: Traditional methods of legal information handling are slow and inefficient; access to the necessary legal data is often ineffective, which hinders the administration of justice, the preparation of legal actions, defense and offense, and the making of law. - Solution: The paper proposes a computer program, the Citation Identifier, for automatic identification of full-form case citations in legal literature, to relieve the crisis in legal documentation and improve the efficiency of legal information processing. The program automates the identification of legal precedents by recognizing case citations, starting with full-form citations before moving on to reduced-form ones.
665
0.004511
null
null
null
null
null
null
null
null
f4c7c2e487347dc20351d6e8cc8578d2c8e368ef
2788301
null
Stylistic Analysis of Poetry
The Tricon program has been adapted as an aid to the analysis of syntax in poetry with the hope of identifying the dominant patterns that characterize a poet's style. The Tricon program (described in Concordances from Computers, S. M. Lamb and L. Gould, University of California, Berkeley, 1964) has three active lines, which this researcher has used for text, syntactic analysis (following a phrase-structure grammar) and a deviance indicator. The analysis for the three lines is charted and then transferred to punched cards. Abbreviated alphabetical symbols are used for the syntactic analysis (AP = adjective phrase) because of the program's 24-unit search limitation.
{ "name": [ "Fairley, Irene R." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
1
null
The Tricon program has been adapted as an aid to the analysis of syntax in poetry with the hope of identifying the dominant patterns that characterize a poet's style. The Tricon program (described in Concordances from Computers, S. M. Lamb and L. Gould, University of California, Berkeley, 1964) has three active lines, which this researcher has used for text, syntactic analysis (following a phrase-structure grammar) and a deviance indicator. The analysis for the three lines is charted and then transferred to punched cards. Abbreviated alphabetical symbols are used for the syntactic analysis (AP = adjective phrase) because of the program's 24-unit search limitation.

The program provides the usual textual concordances for vocabulary, punctuation, etc., and a concordance of syntactic patterns and deviances. Syntactic and deviance lines may be concorded for identical units (wholes: ~-I~P-PP-P) or for partials (all tokens for S or P). The program has a wide range of subsort possibilities, including multi-dimensional cross-concordance: subsorts are possible by left or right context on all three lines. It also has the advantage of a neat multi-line centered print-out, so that the searched token appears with left and right context; this is especially helpful in the analysis of poetry. The concordances provide totals of the number of tokens for each type as well as totals of the number of types and tokens searched.

An extensive analysis of a poet's poems with the aid of a program like Tricon could provide the basis for a formulaic expression of the poet's syntactic habits, as a set of patterns and deviances. This kind of analysis might aid in the comparison of styles by identifying differences in their dominant and minor syntactic patterns and deviations (qualitative distinctions) and the variations in their frequencies of occurrence (quantitative distinctions). Such a formal treatment, of course, looks far into the future, and a program like Tricon must be regarded as at best a breaking of ground. This researcher is exploring Tricon's possibilities in the analysis of e. e. cummings' poetry. A substantial analysis of his poems using Tricon would help determine what syntactic patterns create the "cummings style" and to what extent and in what manner they are deviant from the standard grammar of English.
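For concreteness, a centered keyword-in-context print-out with totals -- the kind of output attributed to Tricon above -- can be imitated in a few lines. This is a sketch in the spirit of the description, not Tricon itself; the line width, tokenisation and punctuation stripping are assumptions, and the three-line alignment of text, syntactic analysis and deviance indicator is not reproduced.

def kwic(lines, keyword, width=28):
    # Print each occurrence of `keyword` centered, with left and right context,
    # and report the token total for the searched type.
    hits = 0
    for line in lines:
        tokens = line.split()
        for i, token in enumerate(tokens):
            if token.lower().strip('.,;:!?()"') == keyword:
                hits += 1
                left = " ".join(tokens[:i])[-width:]
                right = " ".join(tokens[i + 1:])[:width]
                print(f"{left:>{width}}  {token}  {right:<{width}}")
    print(f"{hits} token(s) of type '{keyword}'")

poem = [
    "anyone lived in a pretty how town",
    "(with up so floating many bells down)",
]
kwic(poem, "how")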
null
null
null
null
Main paper: • stylistic analysis of poetry: The Tricon program has been adapted as an aid to the analysis of syntax in poetry with the hope of identifying the dominant patterns that characterize a poet's style. The Tricon program (described in Concordances from Computers, S. M. Lamb and L. Gould, University of California, Berkeley, 1964) has three active lines, which this researcher has used for text, syntactic analysis (following a phrase-structure grammar), and a deviance indicator. The analysis for the three lines is charted and then transferred to punched cards. Abbreviated alphabetical symbols are used for the syntactic analysis (AP = adjective phrase) because of the program's 24-unit search limitation. The program provides the usual textual concordances for vocabulary, punctuation, etc., and a concordance of syntactic patterns and deviances. Syntactic and deviance lines may be concorded for identical units (wholes: S-NP-PP-P) or for partials (all tokens for S or P). The program has a wide range of subsort possibilities, including multi-dimensional cross-concordance: subsorts are possible by left or right context on all three lines. It also has the advantage of a neat multi-line centered print-out, so that the searched token appears with left and right context; this is especially helpful in the analysis of poetry. The concordances provide totals of the number of tokens for each type as well as totals of the number of types and tokens searched. An extensive analysis of a poet's poems with the aid of a program like Tricon could provide the basis for a formulaic expression of the poet's syntactic habits, as a set of patterns and deviances. This kind of analysis might aid in the comparison of styles by identifying differences in their dominant and minor syntactic patterns and deviations (qualitative distinctions) and the variations in their frequencies of occurrence (quantitative distinctions). Such a formal treatment, of course, looks far into the future, and a program like Tricon must be regarded as at best a breaking of ground. This researcher is exploring Tricon's possibilities in the analysis of e. e. cummings' poetry. A substantial analysis of his poems using Tricon would help determine what syntactic patterns create the "cummings style" and to what extent and in what manner they are deviant from the standard grammar of English. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.001504
null
null
null
null
null
null
null
null
46823f209ef3833ab79bb8286b4181499117aa6d
174383
null
The Measurement of Phonetic Similarity
There are many reasons for wanting to measure the degree of phonetic similarity between members of a group of languages or dialects. The present study grew out of a research project which was designed to get data that might have a bearing on some of the practical problems which exist in Uganda. In the southern part of Uganda, where two thirds of the nine million people live, there are numerous closely related Bantu languages or dialects. The official Ugandan census data lists 15 Bantu languages. The current study uses data on these and six others. We wanted to assess their phonetic similarity so that there would be data on which to base decisions on which languages to use for broadcasting (the government currently broadcasts in 8 or 9 of these languages, as well as in 10 non-Bantu languages), which to use in schools (3 are used officially and a further 5 unofficially, but with the connivance of the local education authorities), and which for other purposes. One method of obtaining a measure might have been by devising a metric that could be applied to formal comparisons of phonological descriptions of each of these languages. This method was not attempted, largely because of time limitations. The data had to be collected and first analyses made within a period of one year. Furthermore, it soon appeared that the sound patterns of nearly all of these languages were very similar, and the phonological descriptions would have to be extremely detailed before systematic differences became apparent. Finally,
{ "name": [ "Ladefoged, Peter" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 57
1969-09-01
19
9
null
before we could quantify, in practical terms, the overall degree of phonetic similarity between a pair of languages, the phonological descriptions would have to be supported by counts of the frequency of occurrence of each rule. A difference between two languages due to, say, the addition of a rule in one but not the other would be more or less important depending on the number of times in which the rule was involved in ordinary utterances. The technique which we chose to use instead was to measure the degree of phonetic similarity in a list of 30 common words in each language, all of which were historically cognate forms in at least 16 out of the 20 languages. The list was a subset of a list of 100 words which had been recorded so that lexico-statistical comparisons might be made. The complete lists had been recorded in a narrow phonetic transcription by the author, using IPA symbols except for the voiced and voiceless palatal affricates, which were transcribed j and c in accordance with the conventions of Ugandan orthographies. Long vowels and long consonants (both of which are phonemic in some of these languages) were transcribed with double letters. Tones were transcribed by acute accents (high), grave accents (low), and circumflex accents (falling); as far as is known these possibilities will account for nearly all the tonal contrasts that occur in these languages. Table 1 exemplifies the data for two words in each of the 20 languages. The fundamental problem in making phonetic comparisons is how to line up two words, one in one dialect and one in another, in such a way that we can make a valid point by point comparison of all the things which affect phonetic similarity. In the Bantu languages with which we were concerned, each noun consists of a stem and a prefix indicating the noun class. Only the stems were used in these phonetic comparisons. In general, a stem begins with a consonant, C, followed by a vowel, V, and may contain additional alternations of consonants and vowels. The commonest form is CVCV. Some problems in lining up segments will be considered after we have considered how they may be compared. There have been a number of attempts to devise measures of the degree of phonetic similarity of isolated segments. Some of these have been based on experimental studies showing, for instance, the degree of confusability of different segments (Miller and Nicely 1955, Peters 1963, Wickelgren 1965, 1966, Klatt 1968, Greenberg and Jenkins 1962, Mohr and Wang 1968); others have been based on more theoretical arguments (Austin 1957, Peterson and Harary 1961). All of these are of interest here, in that the knowledge of the degree of phonetic similarity between segments is a necessary prerequisite to a statement about the degree of phonetic similarity of languages as a whole. Some of the studies cited above have discussed the possibility of quantifying the degree of difference between segments by counting the number of differences in their specifications in terms of features. Various ways of specifying segments in terms of features have been suggested, the most important being the early distinctive feature system of Jakobson, Fant, and Halle (1951), its revision by Jakobson and Halle (1956), and the system proposed by Chomsky and Halle (1968). All these feature sets are intended for classifying the segments which occur in phonemic or phonological contrasts within a language.
But it is by no means obvious that the specification of the phonetic level in the way suggested by Chomsky and Halle, for instance, is directly related to the specification of the kind of phonetic similarity measure which is useful in cross language studies. Chomsky and Halle were certainly not trying to produce a phonetic specification of this kind. Accordingly, for the purposes of the present study an ad hoc set of phonetic features was used. For the sake of computational simplicity, the phonetic features were considered to be independent binary categories. This is obviously an invalid assumption which will be discussed further towards the end of this paper. Because vowels were being compared only with vowels, and consonants only with consonants, there was no need for features such as consonantal and vocalic; they would never have contributed anything to the cross language comparisons. Furthermore, there was no need to use the same features for both consonants and vowels. The feature system which was set up was adequate for specifying all the phonetic differences which had been observed among Ugandan Bantu languages and seemed, on the basis of the experimental studies cited above, likely to be the best possible measure of segment similarity within the constraints previously noted. Each consonant segment in a Ugandan Bantu language was described as being, or not being: (1) a stop; (2) a nasal; (3) a fricative; (4) anterior -- made in the front of the mouth; (5) alveolar -- made near the teeth ridge; (6) coronal -- made in the centre of the mouth; (7) voiced; (8) long; (9) followed by a w-glide; (10) followed by a y-glide. The easiest way of appreciating the way in which these terms were used is through the examples showing the partial characterization of some consonants given in Tables 2 and 3. A plus sign indicates the presence of a feature, and a minus sign shows its absence. The degree of similarity between segments is exemplified in Table 4. Thus b and d have nine out of the ten points in common; and b and ɲ differ in seven points, and have only three points in common. In one or two details this measure is not entirely satisfactory. In specifying the vowels we stated whether each one was, or was not ... A number of problems arose in the comparison of specific segments, two of which will be considered here. Both are due to the constraint of having to compare words segment by segment, a constraint which is necessary only because of the difficulties of formalizing the comparisons in any other way. The first was that not all the stems to be compared were the same length. For example, the stem in the word for 'ear' is monosyllabic in many of these languages; but in two languages it is disyllabic. One might guess that these are the older forms, and there has been some kind of shortening process in all the other languages. The solution that was adopted was to add dummy segments with entirely negative feature values to all the languages having a monosyllabic form. This did not affect the similarity measure within the monosyllabic group of languages; and it made the two languages having disyllabic forms more similar to the monosyllabic group than they would have been to another language which had a different second syllable. The second problem arose when a phonetic feature such as palatalization was realized in one language in a consonant and in another in a vowel.
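Before turning to the alignment example that follows, the ten-feature binary comparison just described can be sketched in a few lines. The plus/minus assignments below are illustrative stand-ins for the paper's Tables 2 and 3, chosen only so that b and d come out nine points alike; they are not copied from the paper.

```python
# A minimal sketch of the ten-feature binary comparison: similarity is
# the number of feature values two consonants share, out of ten. The
# specific 1/0 assignments are illustrative assumptions.
FEATURES = ["stop", "nasal", "fricative", "anterior", "alveolar",
            "coronal", "voiced", "long", "w-glide", "y-glide"]

b = dict(zip(FEATURES, [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]))
d = dict(zip(FEATURES, [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]))  # differs only in alveolar

def similarity(seg1, seg2):
    """Count the features (out of ten) on which two segments agree."""
    return sum(seg1[f] == seg2[f] for f in FEATURES)

print(similarity(b, d))  # 9 points in common out of 10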
The word for 'crocodile', for example, often has a stem of the form -góóɲa, but sometimes, instead of the palatal nasal, the form is -góína. Note that if these two forms were lined up so that the consonants were compared only with the consonants and the vowels only with the vowels, then there would be differences in both the last vowel and the last consonant. Consequently this pair would be counted as less similar than a pair such as -góóna and -góóɲa. This is not a desirable result. It was avoided by an ad hoc solution in which -ín was arbitrarily specified as a consonant differing in one feature from the palatal nasal ɲ. Note also that the problem is not avoided by using the same features for consonants and vowels; it is simply a matter of the lining up of the segments to be compared. The results for this particular group of 20 Ugandan languages are not particularly relevant here; they are given in detail elsewhere (Criper, Glick, and Ladefoged, forthcoming). It is sufficient to note that the relationships revealed suggested plausible and interesting groupings into dialect clusters. What is of more interest here is the validation of the claim that this technique measures phonetic similarity between languages. We attempted to do this in two ways, first by assessing local opinion concerning the degree of similarity between one language and another, and secondly by testing the extent to which people actually understand other languages. The first of these two methods did not produce reliable data; different local experts gave different figures, and even the same man gave different estimates when the questions were put to him in a slightly different way on different occasions. The second method produced limited but valid data. The procedures are described in full elsewhere (Criper, Glick, and Ladefoged, forthcoming). We conducted tests with speakers of two different languages. For each of these languages we used five groups of speakers, and played them recordings of stories in their own and four other languages, rotating stories, languages, and groups in a Latin square design. The group scores in answering questions about these stories were subjected to an analysis of variance, which showed that there were no significant differences between any of the listening groups, or between any of the stories; but there were very significant differences in the comprehension of the different languages. We therefore had valid scores on the comprehension of two languages relative to four other languages. These eight scores were compared with the degrees of phonetic similarity of the corresponding pairs of languages and, provided one score was left out for reasons discussed below, a high correlation was found (r = 0.98). It is virtually impossible to test the relative comprehension of all possible pairs of a large number of languages, because of the complexities in the experimental design which are necessary. But it would appear that, at least in the case of these Ugandan Bantu languages, valid predictions may be made on the basis of the phonetic similarity measure described above. There are, however, circumstances in which our predictions would be wrong. The degree of comprehension of one language to another is not always a reversible relationship: speakers of a prestige language do not understand a minor language as well as speakers of the minor language understand the prestige language.
It is this discrepancy which accounts for our having to leave out one score in order to get a high correlation as described above. Phonetic similarity is a good predictor of intelligibility only if questions of prestige are not involved. Finally we must consider ways in which we could improve the metric used for comparing the phonetic similarity of segments. Perhaps the most obvious improvement is to allow for variations in the importance of different features. The experimental studies cited above generally agree in finding that differences in manner of articulation contribute more to perceptual distance than differences in voicing, and both contribute more than differences in place of articulation. Accordingly features must be assigned different weights. The situation is, however, more complicated. We must also allow for the interaction of features. For example, the experimental studies cited above have shown that there is a greater difference between the members of the set pa - ta - ka than there is between the members of the set ba - da - ga; and the members of the set ma - na - ŋa are even less different from one another. Consequently differences in place of articulation, however coded, must be made to have less effect when the feature voiced is also present; and even less effect when the feature nasal is also present. It seems that it would also be advisable to allow for non-binary specifications of features. Multivalued feature specifications can be treated in either of two ways. In one way, each value is regarded as being equally different from all others. Thus if the consonants p, t, c, k are assigned the values 1, 2, 3, 4 on a feature of articulatory place, they will each be regarded as being one point different from each other with respect to this feature, assuming it has been given a weight of 1. Alternatively, multivalued specifications can be treated as scalar quantities. If this is done and, for example, the vowels i, e, a are specified as having the values 1, 2, 3 on a feature of vowel height, then e would be counted as one point different from i and a, but i and a would be two points different from each other (assuming this feature has a weight of 1). If they had been specified as 1, 4, 7 then e would have been three points from i and a, and they would have been six points different from each other. The use of independent multivalued feature specifications allows us to correct an anomaly which was mentioned above. It will be remembered that using the previous system it was impossible to specify h in a way such that it was equally different from all stop consonants. But if place of articulation is an independent multivalued feature, and if h is assigned a value different from any of the stop consonants, then it can be made equally different from all of them. In other words, this type of specification allows us to formalize within the metric the notion of an irrelevant feature. A computer program has now been written which compares segments which may be specified in terms of weighted, interacting, multivalued, independent or scalar, features. It is hoped that results of experiments using this program will be available for reporting to the conference.
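The improved metric outlined in this closing passage (weighted features, scalar values) might look as follows in outline. The weights, feature names, and place values below are assumptions for illustration, and feature interaction (e.g. reducing the place weight for nasals) is deliberately omitted.

```python
# A sketch of a weighted, partly scalar segment-distance metric in the
# spirit of the improvements proposed above. Weights and values are
# illustrative assumptions, not the paper's.
WEIGHTS = {"manner": 3.0, "voicing": 2.0, "place": 1.0}
SCALAR = {"place"}  # treated as a scale; the others as categories

def distance(seg1, seg2):
    total = 0.0
    for feature, weight in WEIGHTS.items():
        v1, v2 = seg1[feature], seg2[feature]
        if feature in SCALAR:
            total += weight * abs(v1 - v2)   # scalar: distance along the scale
        else:
            total += weight * (v1 != v2)     # categorical: any difference counts once
    return total

p = {"manner": "stop", "voicing": 0, "place": 1}
t = {"manner": "stop", "voicing": 0, "place": 2}
k = {"manner": "stop", "voicing": 0, "place": 4}
print(distance(p, t), distance(p, k))  # k comes out further from p than t does
```

Making place scalar rather than categorical is exactly what lets h be assigned an off-scale value equally far from every stop, formalizing the "irrelevant feature" the passage describes.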
null
null
null
null
Main paper: before we could quantify, in practical terms, the overall degree of phonetic similarity between a pair of languages, the phonological descriptions would have to be supported by counts of the frequency of occurrence of each rule. A difference between two languages due to, say, the addition of a rule in one but not the other would be more or less important depending on the number of times in which the rule was involved in ordinary utterances. The technique which we chose to use instead was to measure the degree of phonetic similarity in a list of 30 common words in each language, all of which were historically cognate forms in at least 16 out of the 20 languages. The list was a subset of a list of 100 words which had been recorded so that lexico-statistical comparisons might be made. The complete lists had been recorded in a narrow phonetic transcription by the author, using IPA symbols except for the voiced and voiceless palatal affricates, which were transcribed j and c in accordance with the conventions of Ugandan orthographies. Long vowels and long consonants (both of which are phonemic in some of these languages) were transcribed with double letters. Tones were transcribed by acute accents (high), grave accents (low), and circumflex accents (falling); as far as is known these possibilities will account for nearly all the tonal contrasts that occur in these languages. Table 1 exemplifies the data for two words in each of the 20 languages. The fundamental problem in making phonetic comparisons is how to line up two words, one in one dialect and one in another, in such a way that we can make a valid point by point comparison of all the things which affect phonetic similarity. In the Bantu languages with which we were concerned, each noun consists of a stem and a prefix indicating the noun class. Only the stems were used in these phonetic comparisons. In general, a stem begins with a consonant, C, followed by a vowel, V, and may contain additional alternations of consonants and vowels. The commonest form is CVCV. Some problems in lining up segments will be considered after we have considered how they may be compared. There have been a number of attempts to devise measures of the degree of phonetic similarity of isolated segments. Some of these have been based on experimental studies showing, for instance, the degree of confusability of different segments (Miller and Nicely 1955, Peters 1963, Wickelgren 1965, 1966, Klatt 1968, Greenberg and Jenkins 1962, Mohr and Wang 1968); others have been based on more theoretical arguments (Austin 1957, Peterson and Harary 1961). All of these are of interest here, in that the knowledge of the degree of phonetic similarity between segments is a necessary prerequisite to a statement about the degree of phonetic similarity of languages as a whole. Some of the studies cited above have discussed the possibility of quantifying the degree of difference between segments by counting the number of differences in their specifications in terms of features. Various ways of specifying segments in terms of features have been suggested, the most important being the early distinctive feature system of Jakobson, Fant, and Halle (1951), its revision by Jakobson and Halle (1956), and the system proposed by Chomsky and Halle (1968). All these feature sets are intended for classifying the segments which occur in phonemic or phonological contrasts within a language.
But it is by no means obvious that the specification of the phonetic level in the way suggested by Chomsky and Halle, for instance, is directly related to the specification of the kind of phonetic similarity measure which is useful in cross language studies. Chomsky and Halle were certainly not trying to produce a phonetic specification of this kind. Accordingly, for the purposes of the present study an ad hoc set of phonetic features was used. For the sake of computational simplicity, the phonetic features were considered to be independent binary categories. This is obviously an invalid assumption which will be discussed further towards the end of this paper. Because vowels were being compared only with vowels, and consonants only with consonants, there was no need for features such as consonantal and vocalic; they would never have contributed anything to the cross language comparisons. Furthermore, there was no need to use the same features for both consonants and vowels. The feature system which was set up was adequate for specifying all the phonetic differences which had been observed among Ugandan Bantu languages and seemed, on the basis of the experimental studies cited above, likely to be the best possible measure of segment similarity within the constraints previously noted. Each consonant segment in a Ugandan Bantu language was described as being, or not being: (1) a stop; (2) a nasal; (3) a fricative; (4) anterior -- made in the front of the mouth; (5) alveolar -- made near the teeth ridge; (6) coronal -- made in the centre of the mouth; (7) voiced; (8) long; (9) followed by a w-glide; (10) followed by a y-glide. The easiest way of appreciating the way in which these terms were used is through the examples showing the partial characterization of some consonants given in Tables 2 and 3. A plus sign indicates the presence of a feature, and a minus sign shows its absence. The degree of similarity between segments is exemplified in Table 4. Thus b and d have nine out of the ten points in common; and b and ɲ differ in seven points, and have only three points in common. In one or two details this measure is not entirely satisfactory. In specifying the vowels we stated whether each one was, or was not ... A number of problems arose in the comparison of specific segments, two of which will be considered here. Both are due to the constraint of having to compare words segment by segment, a constraint which is necessary only because of the difficulties of formalizing the comparisons in any other way. The first was that not all the stems to be compared were the same length. For example, the stem in the word for 'ear' is monosyllabic in many of these languages; but in two languages it is disyllabic. One might guess that these are the older forms, and there has been some kind of shortening process in all the other languages. The solution that was adopted was to add dummy segments with entirely negative feature values to all the languages having a monosyllabic form. This did not affect the similarity measure within the monosyllabic group of languages; and it made the two languages having disyllabic forms more similar to the monosyllabic group than they would have been to another language which had a different second syllable. The second problem arose when a phonetic feature such as palatalization was realized in one language in a consonant and in another in a vowel.
The word for 'crocodile', for example, often has a stem of the form -góóɲa, but sometimes, instead of the palatal nasal, the form is -góína. Note that if these two forms were lined up so that the consonants were compared only with the consonants and the vowels only with the vowels, then there would be differences in both the last vowel and the last consonant. Consequently this pair would be counted as less similar than a pair such as -góóna and -góóɲa. This is not a desirable result. It was avoided by an ad hoc solution in which -ín was arbitrarily specified as a consonant differing in one feature from the palatal nasal ɲ. Note also that the problem is not avoided by using the same features for consonants and vowels; it is simply a matter of the lining up of the segments to be compared. The results for this particular group of 20 Ugandan languages are not particularly relevant here; they are given in detail elsewhere (Criper, Glick, and Ladefoged, forthcoming). It is sufficient to note that the relationships revealed suggested plausible and interesting groupings into dialect clusters. What is of more interest here is the validation of the claim that this technique measures phonetic similarity between languages. We attempted to do this in two ways, first by assessing local opinion concerning the degree of similarity between one language and another, and secondly by testing the extent to which people actually understand other languages. The first of these two methods did not produce reliable data; different local experts gave different figures, and even the same man gave different estimates when the questions were put to him in a slightly different way on different occasions. The second method produced limited but valid data. The procedures are described in full elsewhere (Criper, Glick, and Ladefoged, forthcoming). We conducted tests with speakers of two different languages. For each of these languages we used five groups of speakers, and played them recordings of stories in their own and four other languages, rotating stories, languages, and groups in a Latin square design. The group scores in answering questions about these stories were subjected to an analysis of variance, which showed that there were no significant differences between any of the listening groups, or between any of the stories; but there were very significant differences in the comprehension of the different languages. We therefore had valid scores on the comprehension of two languages relative to four other languages. These eight scores were compared with the degrees of phonetic similarity of the corresponding pairs of languages and, provided one score was left out for reasons discussed below, a high correlation was found (r = 0.98). It is virtually impossible to test the relative comprehension of all possible pairs of a large number of languages, because of the complexities in the experimental design which are necessary. But it would appear that, at least in the case of these Ugandan Bantu languages, valid predictions may be made on the basis of the phonetic similarity measure described above. There are, however, circumstances in which our predictions would be wrong. The degree of comprehension of one language to another is not always a reversible relationship: speakers of a prestige language do not understand a minor language as well as speakers of the minor language understand the prestige language.
It is this discrepancy which accounts for our having to leave out one score in order to get a high correlation as described above. Phonetic similarity is a good predictor of intelligibility only if questions of prestige are not involved. Finally we must consider ways in which we could improve the metric used for comparing the phonetic similarity of segments. Perhaps the most obvious improvement is to allow for variations in the importance of different features. The experimental studies cited above generally agree in finding that differences in manner of articulation contribute more to perceptual distance than differences in voicing, and both contribute more than differences in place of articulation. Accordingly features must be assigned different weights. The situation is, however, more complicated. We must also allow for the interaction of features. For example, the experimental studies cited above have shown that there is a greater difference between the members of the set pa - ta - ka than there is between the members of the set ba - da - ga; and the members of the set ma - na - ŋa are even less different from one another. Consequently differences in place of articulation, however coded, must be made to have less effect when the feature voiced is also present; and even less effect when the feature nasal is also present. It seems that it would also be advisable to allow for non-binary specifications of features. Multivalued feature specifications can be treated in either of two ways. In one way, each value is regarded as being equally different from all others. Thus if the consonants p, t, c, k are assigned the values 1, 2, 3, 4 on a feature of articulatory place, they will each be regarded as being one point different from each other with respect to this feature, assuming it has been given a weight of 1. Alternatively, multivalued specifications can be treated as scalar quantities. If this is done and, for example, the vowels i, e, a are specified as having the values 1, 2, 3 on a feature of vowel height, then e would be counted as one point different from i and a, but i and a would be two points different from each other (assuming this feature has a weight of 1). If they had been specified as 1, 4, 7 then e would have been three points from i and a, and they would have been six points different from each other. The use of independent multivalued feature specifications allows us to correct an anomaly which was mentioned above. It will be remembered that using the previous system it was impossible to specify h in a way such that it was equally different from all stop consonants. But if place of articulation is an independent multivalued feature, and if h is assigned a value different from any of the stop consonants, then it can be made equally different from all of them. In other words, this type of specification allows us to formalize within the metric the notion of an irrelevant feature. A computer program has now been written which compares segments which may be specified in terms of weighted, interacting, multivalued, independent or scalar, features. It is hoped that results of experiments using this program will be available for reporting to the conference. Appendix:
null
null
null
null
{ "paperhash": [ "klatt|structure_of_confusions_in_short-term_memory_between_english_consonants.", "wickelgren|distinctive_features_and_errors_in_short-term_memory_for_english_consonants.", "wickelgren|distinctive_features_and_errors_in_short-term_memory_for_english_vowels.", "peters|dimensions_of_perception_for_consonants", "miller|an_analysis_of_perceptual_confusions_among_some_english_consonants", "chomsky|the_sound_pattern_of_english" ], "title": [ "Structure of confusions in short-term memory between English consonants.", "Distinctive features and errors in short-term memory for English consonants.", "Distinctive features and errors in short-term memory for English vowels.", "Dimensions of Perception for Consonants", "An Analysis of Perceptual Confusions Among Some English Consonants", "The Sound Pattern of English" ], "abstract": [ "Data on confusions in short‐term memory between English consonants [W. A. Wickelgren, J. Acoust. Soc. Am. 39, 388–398 (1966)] have been reanalyzed within the framework of binary features. A new method of analysis is described that involves a similarity metric. The method produces an evaluation of individual features and associates a confidence level with each statement made about the data. With a confidence level greater than 0.99, it is shown that long frication, continuant, sonorant, sibilant, and voiced are features present in the data of Wickelgren. The issues of feature independence, completeness of a feature system, and the derivation of optimal features are treated. The advantages of the similarity metric over rank‐order statistics, multidimensional scaling, and information transfer are discussed.", "Errors in short‐term recall of 23 English consonants were tabulated and related to three distinctive‐feature systems. The consonants were always presented in initial position in a consonant‐vowel diagram, and the vowel was always /a/. Subjects were instructed to copy a list of consonants as it was being presented, followed by recall of the list. Perceptual errors were excluded from the recall‐error matrix by scoring for recall only correctly copied consonants. The data were also analyzed in such a way as to eliminate differences in response bias for different consonants. Having controlled for response bias, each feature system makes predictions about the rank order of different intrusion errors in recall. Each of the three feature systems was significantly more accurate than chance in these predictions, but the most accurate system was one developed in the present study. This system is a slightly modified version of the conventional phonetic analysis of consonants in terms of voicing, nasality, openness o...", "Errors in short‐term recall of six English vowels (I, e, ae, U, ʌ, ɑ) were tabulated and related to several distinctive‐feature systems. Vowels were embedded in two contexts: /l[ ]k/ and /z[ ]k/. Subjects were instructed to copy items as they were presented, followed by recall of the entire list of (six) items. Perceptual errors were excluded from the recall error matrix by scoring for recall only correctly copied items. The rank‐order frequency of different intrusions in recall of each presented vowel was almost perfectly predicted by a conventional phonetic analysis in two dimensions: place of articulation (front, back) and openness of the vocal tract (narrow, medium, and wide). 
The error matrix also supported the assumptions that the values of openness are ordered in short‐term memory and that the correct value on the openness dimension is more likely to be forgotten than the correct value on the place dimension. The study suggests that a vowel is coded in short‐term memory, not as a unit, but as a set ...", "Multidimensional analysis of similarity data was applied to the domain of spoken consonants. Listeners responded to productions of their own consonants in terms of the similarity of the consonants to each other. Estimates of psychological distances between consonants were derived from the similarity responses. The distances were used to construct an auditory space, Euclidean in nature, for the consonants. Coordinates of the consonants in the space were of the appropriate dimensionality necessary to account for distances between consonants. Articulatory data were utilized to identify the dimensions. Manner of articulation was the first and most important auditory dimension. Voicing and place of articulation served to identify subsequent dimensions.", "Sixteen English consonants were spoken over voice communication systems with frequency distortion and with random masking noise. The listeners were forced to guess at every sound and a count was made of all the different errors that resulted when one sound was confused with another. With noise or low‐pass filtering the confusions fall into consistent patterns, but with high‐pass filtering the errors are scattered quite randomly. An articulatory analysis of these 16 consonants provides a system of five articulatory features or “dimensions” that serve to characterize and distinguish the different phonemes: voicing, nasality, affrication, duration, and place of articulation. The data indicate that voicing and nasality are little affected and that place is severely affected by low‐pass and noisy systems. The indications are that the perception of any one of these five features is relatively independent of the perception of the others, so that it is as if five separate, simple channels were involved rather tha...", "Since this classic work in phonology was published in 1968, there has been no other book that gives as broad a view of the subject, combining generally applicable theoretical contributions with analysis of the details of a single language. The theoretical issues raised in The Sound Pattern of English continue to be critical to current phonology, and in many instances the solutions proposed by Chomsky and Halle have yet to be improved upon.Noam Chomsky and Morris Halle are Institute Professors of Linguistics and Philosophy at MIT." ], "authors": [ { "name": [ "D. Klatt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. A. Wickelgren" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. A. Wickelgren" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Peters" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. A. Miller", "P. E. Nicely" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Noam Chomsky", "M. 
Halle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "32073836", "41043984", "10438208", "121446353", "119769753", "60457972" ], "intents": [ [ "background" ], [ "background" ], [ "background" ], [ "background" ], [ "background" ], [] ], "isInfluential": [ false, false, false, false, false, false ] }
null
665
0.013534
null
null
null
null
null
null
null
null
ba528fcdd157e7c1aa41bd9504038fc0ec0a4269
28708574
null
A Rapidly Extensible Language System (The {REL} Language Processor)
REL, a Rapidly Extensible Language System, permits a variety of languages to coexist within a single computer system. Here the term "language" is understood to include a particular data base. New languages may be defined by constructing a new base language with its syntax and semantics, by extending the terminology from a given base level in order to reflect specific concepts, or by associating a given base language with a certain data base. REL consists of an operating environment, a language processor, and the set of currently defined languages. The structural properties of these languages which determine the characterization and organization of the language processor are described. In particular, representation and manipulation of syntax and semantics are discussed, the mechanism of language extension is outlined, and the concept of a generator is introduced.
{ "name": [ "Lockemann, Peter C. and", "Thompson, Frederick B." ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 34
1969-09-01
8
3
null
Language plays a twofold role. For an individual, or a group of individuals with some common interest, it establishes a framework within which to express the structuration of their experience and conceptualization of their environment. In a social organization it provides the conventions through which these individuals or groups exchange and relate their views. In this second role, language facilitates communication between communities with divergent interests. In its first role language supports the creative process within a given community. It becomes highly idiosyncratic and dynamic in nature as the community, or individual, develops distinctive and specific concepts, and continuously reconciles them with further observations of its environment. In such a community, the computer functions as an external memory which allows efficient and rapid presentation and organization of its stored information according to the various concepts developed. Since these concepts are expressed in a highly specific language, one must be able to converse with the computer in that very language. REL, a Rapidly Extensible Language System, is a conversational computer system designed for these purposes [1]. REL provides a community with a base language suitable to its own interests. As the community develops the conceptual structure which deals most efficiently with its environment, it constructs recursively from the base level a hierarchy of new terms, or adjusts them. Since the conceptual structure is determined by observations of the environment (the "data"), so is the language. Language and data thus become closely interrelated. If chosen appropriately, the base language will remain invariant and all conceptual changes will be reflected in its extensions. REL is designed to support a large number of diverse groups. As a consequence, it must be able to handle a large variety of languages. Efficiency considerations, as well as the necessity for easy formation and extension of a particular language, suggest that a single processor be provided which deals with all the implemented languages. In order to determine the precise nature of the language processor we must develop a structural description of language. This description, in turn, will spell out the detailed organization of the language processor. It is these questions that the present paper will concern itself with. We shall base our structural description of a language on the formalism presented earlier by F. B. Thompson [2, 3]. It postulates a one-to-one correspondence between the syntactic and semantic aspects. A language refers to some domain of discourse consisting of objects and relationships among them. One can order the objects and relationships into a finite number of sets, or "semantic categories", according to their structural properties. As a practical example, the ordering may be with respect to representation within the computer memory. There exist certain "transformations" mapping categories to categories; these deal with the structural properties of the sets and apply to any of their elements. On the syntactic level, the equivalent of categories and transformations are the syntactic classes ("parts of speech") and rewrite rules of the grammar. A particular composition of rules in the grammar (a parsing tree) corresponds to a particular composition of underlying transformations.
The meaning of a sentence is the effect of a given sequence of transformations on the domain of discourse. The language processor is designed to handle these "formal languages". Even though the majority of the languages in the system can be expected to evolve from a relatively small set of base languages, the language processor must provide for languages with diverse characteristics. Our definition of formal language spans a large variety of grammars, ranging from those that are easy to describe to others that are difficult to characterize in a concise fashion. How much of this spectrum should be covered by the language processor? In other words, how complex should its architecture be? If we push its design towards accommodating the entire spectrum, the language processor will be very inefficient in dealing with formally simple languages, because it would constantly have to treat aspects pertinent to only a few complex languages. If we were to tailor the processor to efficient manipulation of languages of little complexity, we would limit the expressiveness which any language within the system could attain. We chose a compromise -- a solution in which the language processor deals with those structural properties that are common to the majority of what we consider interesting languages, and which are simple to formalize in terms of the demands on computer memory and complexity of programs. The remainder of this paper specifies and discusses these properties. On the other hand, all information regarding the present state and history of the sentence analysis is made available to any language. Languages with specific characteristics are thus allowed to perform certain steps in the analysis, and change the status of the analysis, on their own. The composite of syntactic rules and underlying transformations is a "language structure". Language is the combination of the language structure and a particular data base with objects and relationships. The language processor deals with a language only in terms of its structure and is entirely divorced from the data. The language itself, through its transformations, is responsible for carrying out all the manipulations of its data. If the subrule assumes a more complicated form, the language provides an explicit program, the "syntax completion" routine, to accomplish the analysis necessary. Such a program may also be needed to perform aspects of the syntactic analysis not covered by the language processor.
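The one-to-one pairing of rewrite rules with semantic transformations can be miniaturized as follows. The toy arithmetic grammar, the leftmost-reduction strategy, and all names below are illustrative assumptions, not REL's actual rule format.

```python
# A sketch of rules that carry both a syntactic rewrite and a semantic
# transformation: applying a rule rewrites a sequence of parts of speech
# and simultaneously combines the interpretations of its constituents.
RULES = [
    # (left-hand side, right-hand side, semantic transformation)
    ("NUM", ("NUM", "+", "NUM"), lambda a, _op, b: a + b),
    ("NUM", ("NUM", "*", "NUM"), lambda a, _op, b: a * b),
]

def reduce_once(phrases):
    """phrases: list of (part_of_speech, interpretation) pairs.
    Apply the first rule whose right-hand side matches a subsequence;
    the paired transformation computes the new phrase's meaning."""
    for lhs, rhs, fn in RULES:
        for i in range(len(phrases) - len(rhs) + 1):
            window = phrases[i:i + len(rhs)]
            if tuple(p for p, _ in window) == rhs:
                value = fn(*(v for _, v in window))
                return phrases[:i] + [(lhs, value)] + phrases[i + len(rhs):]
    return phrases

sentence = [("NUM", 2), ("+", None), ("NUM", 3), ("*", None), ("NUM", 4)]
while len(sentence) > 1:
    reduced = reduce_once(sentence)
    if reduced == sentence:
        break  # no rule applies; the sentence is not in the language
    sentence = reduced
print(sentence)  # [('NUM', 20)]: the leftmost reduction applies + first
```

The point of the pairing is exactly what the passage states: the "meaning" of the input is nothing but the composite effect of the transformations selected by the parse.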
null
null
Indeed, each rule has its syntax completion part to determine the syntactic portion of the result, possibly on the basis of its arguments. A node in the phrase marker denotes either a "phrase" or a function symbol. A language may also employ the extension mechanism if it wishes to avoid the use of a lexicon, and instead enter the referent words identifying objects in its universe of discourse in the form of a grammar rule. In this case each character must be considered a function symbol.
We notice that general rewrite rules and definition expansion have a property in common. In each case a list of functions is given. Each function is exercised in turn, and the result of each step is utilized in a manner which depends only on the criterion governing the list. Among the temporary configurations guiding the language processor are the parsing graph, and the syntax and interpretations of its phrases.
Main paper: languages and language processor: We shall base our structural description of a language on the formalism presented earlier by F. B. Thompson [2, 3]. It postulates a one-to-one correspondence between the syntactic and semantic aspects. A language refers to some domain of discourse consisting of objects and relationships among them. One can order the objects and relationships into a finite number of sets, or "semantic categories", according to their structural properties. As a practical example, the ordering may be with respect to representation within the computer memory. There exist certain "transformations" mapping categories to categories; these deal with the structural properties of the sets and apply to any of their elements. On the syntactic level, the equivalent of categories and transformations are the syntactic classes ("parts of speech") and rewrite rules of the grammar. A particular composition of rules in the grammar (a parsing tree) corresponds to a particular composition of underlying transformations. The meaning of a sentence is the effect of a given sequence of transformations on the domain of discourse. The language processor is designed to handle these "formal languages". Even though the majority of the languages in the system can be expected to evolve from a relatively small set of base languages, the language processor must provide for languages with diverse characteristics. Our definition of formal language spans a large variety of grammars, ranging from those that are easy to describe to others that are difficult to characterize in a concise fashion. How much of this spectrum should be covered by the language processor? In other words, how complex should its architecture be? If we push its design towards accommodating the entire spectrum, the language processor will be very inefficient in dealing with formally simple languages, because it would constantly have to treat aspects pertinent to only a few complex languages. If we were to tailor the processor to efficient manipulation of languages of little complexity, we would limit the expressiveness which any language within the system could attain. We chose a compromise -- a solution in which the language processor deals with those structural properties that are common to the majority of what we consider interesting languages, and which are simple to formalize in terms of the demands on computer memory and complexity of programs. The remainder of this paper specifies and discusses these properties. On the other hand, all information regarding the present state and history of the sentence analysis is made available to any language. Languages with specific characteristics are thus allowed to perform certain steps in the analysis, and change the status of the analysis, on their own. The composite of syntactic rules and underlying transformations is a "language structure". Language is the combination of the language structure and a particular data base with objects and relationships. The language processor deals with a language only in terms of its structure and is entirely divorced from the data. The language itself, through its transformations, is responsible for carrying out all the manipulations of its data. If the subrule assumes a more complicated form, the language provides an explicit program, the "syntax completion" routine, to accomplish the analysis necessary. Such a program may also be needed to perform aspects of the syntactic analysis not covered by the language processor.
analysis of a sentence: Indeed, each rule has its syntax completion part to determine the syntactic portion of the result, possibly on the basis of its arguments. A node in the phrase marker denotes either a "phrase" or a function symbol. A language may also employ the extension mechanism if it wishes to avoid the use of a lexicon, and instead enter the referent words identifying objects in its universe of discourse in the form of a grammar rule. In this case each character must be considered a function symbol. generators: We notice that general rewrite rules and definition expansion have a property in common. In each case a list of functions is given. Each function is exercised in turn, and the result of each step is utilized in a manner which depends only on the criterion governing the list. Among the temporary configurations guiding the language processor are the parsing graph, and the syntax and interpretations of its phrases. i. introduction: Language plays a twofold role. For an individual, or a group of individuals with some common interest, it establishes a framework within which to express the structuration of their experience and conceptualization of their environment. In a social organization it provides the conventions through which these individuals or groups exchange and relate their views. In this second role, language facilitates communication between communities with divergent interests. In its first role language supports the creative process within a given community. It becomes highly idiosyncratic and dynamic in nature as the community, or individual, develops distinctive and specific concepts, and continuously reconciles them with further observations of its environment. In such a community, the computer functions as an external memory which allows efficient and rapid presentation and organization of its stored information according to the various concepts developed. Since these concepts are expressed in a highly specific language, one must be able to converse with the computer in that very language. REL, a Rapidly Extensible Language System, is a conversational computer system designed for these purposes [1]. REL provides a community with a base language suitable to its own interests. As the community develops the conceptual structure which deals most efficiently with its environment, it constructs recursively from the base level a hierarchy of new terms, or adjusts them. Since the conceptual structure is determined by observations of the environment (the "data"), so is the language. Language and data thus become closely interrelated. If chosen appropriately, the base language will remain invariant and all conceptual changes will be reflected in its extensions. REL is designed to support a large number of diverse groups. As a consequence, it must be able to handle a large variety of languages. Efficiency considerations, as well as the necessity for easy formation and extension of a particular language, suggest that a single processor be provided which deals with all the implemented languages. In order to determine the precise nature of the language processor we must develop a structural description of language. This description, in turn, will spell out the detailed organization of the language processor. It is these questions that the present paper will concern itself with. Appendix:
null
null
null
null
{ "paperhash": [ "thompson|english_for_the_computer" ], "title": [ "English for the computer" ], "abstract": [ "What about English as a programming language? Few would question that this is a desirable goal. On the other hand, I dare say every one of us has rather deep reservations both about its feasibility and about a number of problems that it entails. This paper presents a point of view which gives some clarity to the relationship between English and programming languages. This point of view has found substance in an experimental system called DEACON. The second paper in this session will describe the specific DEACON system and its capabilities." ], "authors": [ { "name": [ "F. B. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "16173809" ], "intents": [ [ "background" ] ], "isInfluential": [ true ] }
null
665
0.004511
null
null
null
null
null
null
null
null
ec05ef0cbfa893b6d3c81df0beac2402cc3f7a5a
12758647
null
Linguistics and Automated Language Processing
This paper is concerned with natural language, computers, and two groups of people interested in natural language: linguists, and persons engaged in computer processing of natural language data.
{ "name": [ "Montgomery, Christine A." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 41
1969-09-01
42
3
null
There is some intersection of the latter sets, but the intersection is quite small relative to the size of the sets themselves and is thus inadequate to provide linguists with a proper perspective on automated language processing, or computer scientists with a proper perspective on linguistics. Although both groups of persons have a mutual interest in natural language, their conceptualizations of the nature of language and their approaches to processing language data are very different. To present a somewhat oversimplified view of these differences: linguists tend to be theory-oriented--they are concerned with interesting but sometimes quite esoteric problems, counter-examples, and the infinite set of sentences of competence; on the other hand, persons engaged in automated language processing tend to be data-oriented, and are concerned with statistical significance and with some finite subset of the sentences of performance. The question therefore arises as to whether these different perspectives are to be interpreted as incompatible or complementary, and if complementary, whether some research concept might provide the means for a unified approach to analysis of natural language. In this paper, Section 1 deals with the perspective of linguists on automated language processing and computer scientists on linguistics; Section 2 discusses their respective concepts of natural language and their approaches to analysis of natural language, and explores the questions raised above; Section 3 presents some concluding remarks. 1 I am indebted to Paul Garvin for his valuable comments on this paper. It is appropriate to begin this discussion with a brief inquiry into why linguists have largely not participated in automated language processing. A further--and not unrelated--reason for the non-participation of linguists in automated language processing is a basic lack of knowledge about computers, in the sense of realizing when a computer is a handy tool and when it isn't so handy. By this I don't mean a lack of knowledge about hexadecimal systems, bits and bytes, or serial and parallel processors, but very simply knowledge of what a computer is good for. The fact is that for many of the operations characteristically performed in linguistic research, the computer is an invaluable--if not an indispensable--tool. This is a very strong claim for the utility of the computer in linguistics; therefore, the grounds on which it is based are worth examining in some detail. The operations which the linguist performs in carrying out research on a language or languages are essentially the following: he collects data, organizes and analyzes them, formulates hypotheses, and verifies them. There is, of course, a great deal of feedback and recycling through all these operations, which are highly interdependent. It is therefore impractical to examine the applicability of computer processing individually to each of the operations listed above. Since the important concept of "organization" applies equally to data and hypotheses, in the following the linguistic operations for which computers can be used will be grouped into these two categories. Where these operations are differently interpreted or valued by linguists of different schools, divergent points of view will be noted. Data collection and organization, operations on the data base. There are two senses in which data is collected in linguistic analysis.
There are two major problems inherent in these traditional data-handling methods, which may provide at least a partial explanation for the well-known inadequacies in the descriptions of the so-called "exotic" languages (Uhlenbeck 1960). In the first place, the operations involved in the creation of these files, retrieval of relevant data from them, and replacement of the data in the files require a great deal of the linguist's time, which might be more profitably spent in analysis and in hypothesis formulation and verification. Secondly, because in a taxonomic approach the classifications contained in the files in effect form the basis of the grammar, and because syntactic and semantic analysis requires a highly sophisticated and extensive organization of the data, these aspects of linguistic research inevitably suffer when data handling is limited to traditional manual techniques.

In addition to the stringent requirement for explicitness, use of the computer necessitates a logical organization of hypotheses in order to provide for systematic testing and clear tracing. Such requirements apply equally to formal grammars and the somewhat more loosely organized descriptive grammars. Transformational grammars, however, present a particularly convincing case for the necessity of computer testing. It is difficult to envision how the linguistic researcher can possibly keep track of ...

2. Although some difficulties are inevitable in converting linguistic materials to machine-readable form, the initial investment of time, energy, and funds is well worth the effort. At present, whether a keypunch or an optical character reader is used as a conversion device, linguistic diacritics and special characters must be recoded in terms of the available character set. However, fully automatic conversion by means of an optical character reader is a development which can be expected within the next few years. Some existing models can read a variety of type styles with the combined error rate of the reader and the typist being lower than that of keypunched material, and the recognition of handprinted characters with an acceptable error rate is not far off.

... the UCLA English Syntax Project, and a system constructed by IBM to test the grammar of English II (Rosenbaum 1967). Although these programs all operate through a synthesis procedure, the on-line system described in Gross and Walker also has an analytic capability through the MITRE Syntactic Analysis Procedure.

In addition to these largely synthetic test devices, many analytical algorithms exist. These include algorithms for morphological as well as syntactic analysis. A top-to-bottom predictive equivalent of the Cocke algorithm is the Kuno-Oettinger Syntactic Analyzer (Kuno 1965). Both these algorithms, however, suffer from the disadvantage of producing multiple analyses. More effective procedures for syntactic analysis incorporate transformational rules; these include the MITRE Syntactic Analysis Procedure and that described by Martin Kay (1967). Another approach to syntactic analysis is the "fulcrum" method, reported in Garvin (1968), in which the grammar and the parsing logic are both incorporated into the analysis algorithm. At present, the most versatile device for testing a descriptive grammar is probably some version of the Cocke algorithm, which could be used as a morphological analyzer with a set of morphological rules, and as a syntactic analyzer with a set of syntactic rules.
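As an illustration of the kind of procedure meant here, the following is a minimal sketch of a bottom-up chart parser in the spirit of the Cocke algorithm (now usually called CKY). The grammar, the category names, and the sentence are invented for illustration; the paper itself describes no implementation.

    # A minimal sketch of a CKY-style (Cocke) parser over a toy grammar.
    # Grammar, category names, and the sentence are invented for illustration.
    from itertools import product

    lexical = {"the": {"D"}, "codes": {"N"}, "word": {"N"}, "form": {"V"}}
    binary = {("D", "N"): {"NP"}, ("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}

    def cky(words):
        n = len(words)
        chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):                  # lexical level
            chart[i][i + 1] = set(lexical.get(w, ()))
        for span in range(2, n + 1):                   # longer spans, bottom-up
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for a, b in product(chart[i][k], chart[k][j]):
                        chart[i][j] |= binary.get((a, b), set())
        return chart[0][n]   # all categories covering the whole input

    print(cky("the codes form the word".split()))   # {'S'}
    # Multiple entries in a single chart cell correspond to the multiple
    # analyses the paper mentions as a drawback of such algorithms.

To use the same machinery as a morphological analyzer, in the spirit of the text, the "words" would instead be codes for morphs in successive position classes, and the rules would be morphological rather than syntactic.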
In morphological analysis, the input string would consist of codes representing the morphs occupying the successive position classes which form the particular word. In syntactic analysis, the input string would of course consist of codes constituting the grammatical labels of the words which form the particular sentence.

Finally, it is appropriate to discuss a computer application which is noteworthy not only by virtue of the fact that it is in the descriptive tradition, but also because it constitutes a substantial departure from the above-mentioned algorithms in several important respects (Garvin 1969).

Characteristically, the text is segmented into "chunks" or "fragments" by an ad hoc recognition procedure based on lists of prepositions, conjunctions, introductory adverbs, and the like (e.g., Kochen 1969, Bohnert 1966, Briner 1968, Wilks 1968).

In answer to the first question, it is reasonable to consider the two approaches as complementary, since the specific weaknesses of the data-oriented position are offset by corresponding strengths in the theoretical orientation, and conversely. In the following discussion, the respective deficiencies of the two approaches will be examined and potential unifying concepts will be explored.

The data-oriented view of natural language is generally characterized by a bias toward the data, a reliance on statistics, an interest in subsets of natural language, and thus a concern with some particular inventory of sentences of performance exclusive of any notion of the infinite inventory of sentences of competence. There are two general directions in which this weakness is exhibited, depending on the size of the natural language subset that is involved.

3. In this context, a large subset is defined as the entire information store in a particular system for processing--say, scientific materials--where the data base consists of over 100,000 documents.

With extremely large subsets, data orientation is mainly due to data inundation, and computer processing substitutes for theory. In a system of this type, it is possible to perform a great deal of computer processing without knowing quite what it all means. Content analysis may be attempted by statistical techniques, but if the definition of the statistical word is not correlated with an actual word stem--or more relevantly, with a concept which may be represented in natural language text by a number of different words and phrases--then all that has really been performed is a frequency count of unique character strings. The actual process of content analysis remains to be performed.

Another variety of data-orientation weakness involves extremely small subsets of natural language. In this case, the defect consists in the testing of theories on very limited amounts of data--often only on the very sample from which the theory was originally derived. Claims for the generality of techniques derived by such means must thus be viewed with a certain amount of skepticism. Unfortunately, many of the more interesting activities in automated language processing--e.g., question-answering systems--suffer from this defect.

4. It is interesting to note in passing that a similar criticism has frequently been leveled at traditional linguistic descriptions by linguists espousing the generative approach. According to this criticism, the descriptive linguist suffers from an exaggerated dependence on his "corpus"--the body of linguistic material constituting his data base. His description of the language--in formal terms, his theory of the language--is thus a description of the corpus, and its validity is a function of the adequacy of the corpus as a representative sample of the language.
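To make the earlier point about frequency counts concrete, here is a small sketch (ours, not from the paper) contrasting a raw count of unique character strings with a count that first maps word forms onto concept classes; the tiny mapping table and the words are invented.

    # Hypothetical illustration: counting strings vs. counting concepts.
    from collections import Counter

    text = "rockets rocket missile missiles launch launches"

    # Raw count: every distinct character string is a separate "word".
    raw = Counter(text.split())

    # Concept count: map inflected forms and near-synonyms to one class first.
    concept_of = {
        "rocket": "MISSILE", "rockets": "MISSILE",
        "missile": "MISSILE", "missiles": "MISSILE",
        "launch": "LAUNCH", "launches": "LAUNCH",
    }
    concepts = Counter(concept_of.get(w, w.upper()) for w in text.split())

    print(raw)       # six strings, each with frequency 1
    print(concepts)  # MISSILE: 4, LAUNCH: 2

Only the second count begins to approach what the paper calls content analysis; the first is merely a tally of unique character strings.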
"... interpreted underlying phrase marker (generated by the optimal grammar for the language) is such that every semantic marking in the reading for its predicate also occurs in the reading for its subject" (Katz 1967). Katz thus defines "S is analytic for L" in terms of theoretical constructs for which he claims universality; he further states that for each language, "for each L1 that is a possible value of 'L', it is possible to differentiate the analytic from the nonanalytic sentences in L1 on the basis of predictions that follow from this definition in conjunction with the semantic descriptions of the sentences in L1 provided by the grammar of L1" (1968).

Unfortunately, the impact of Katz's arguments is substantially reduced by the fact that--although there exists a definition of analyticity which has been postulated by Katz in terms of the theoretical constructs "underlying phrase marker," "semantic interpretation," "reading," "subject of," etc.--there exists no grammar for any L1 to provide the semantic descriptions of L1 which must be conjoined with Katz's definition to provide for the differentiation of analytic from non-analytic sentences. Moreover, if Katz were to state that he had actually produced a grammar of some L1 complete with semantic descriptions and presumably capable of generating the set of sentences of a speaker's competence in L1, no one could prove that this was or was not an empty claim. Katz himself has affirmed the necessity of behavioral tests as a means of validating the empirical adequacy of his theoretical formulations (Katz 1967, 1968). However, previous attempts to investigate various syntactic phenomena through behavioral experiments have not been spectacularly successful, and since the investigation of semantic phenomena is inestimably more complex, behavioral verification of a grammar of L1 appears impossible.

This difficulty derives from two sources, one of which involves the nature of meaning, and the other, the present state of knowledge about linguistic performance, or speech behavior. The semantic problem lies in the fact that a great deal of meaning is situationally derived; the physical and sociocultural situation to a considerable extent controls the semantic interpretation of sentences. In the narrow sense, the concept of a physical and sociocultural context can be limited to those situations which are participated in by a majority of the speakers of the language: say, a school, a city, an airport. In the broader sense, however, physical and sociocultural context includes such factors as the entire history of an interaction between two persons--in other words, all the occasions on which they have interacted and the content of those instances of interaction. Without such information, a proper interpretation of innuendoes, jokes, allusions, and so forth, would not be possible. Also, in the sense of an interaction between persons, the context is dynamic; it grows from the inception of the interaction to its conclusion. Thus, the speech event is actually performed in an environment consisting of the entire range of physical and sociocultural phenomena which are relevant to its interpretation.
For this reason, semantic interpretation presents problems of considerable magnitude, some of which may be inherently insoluble.

Setting the semantic problem aside for the moment, we consider the second source of difficulty encountered in attempting to validate a grammar of L1 through behavioral testing: the present lack of an adequate theory of performance, or speech behavior. A grammar is a model of a speaker's innate capacity, and not of the ways he uses this capacity to produce and understand sentences. Although experiments suggest the psychological reality of some features of the structural descriptions generated by the competence model or grammar (Fodor and Garrett 1967), a speaker demonstrates his competence through his performance, and the relation between a speaker's competence and his performance has yet to be explicated. Assuming that a speaker of L1 will produce and understand only sentences for which the grammar of L1 can supply structural descriptions, the problem is reduced to determining how the speaker behaves in terms of the structural description, which is not trivial to begin with. However, reintroducing the semantic problem discussed above, it is clear that the explication of performance involves specification of the speaker's behavior in composing and interpreting sentences with respect not only to structural descriptions, but also to the total environment of the speech event. Thus speakers can and do process sentences which the grammar is not capable of generating; in other words, the relation between the sentences of competence and those of performance is not one of simple inclusion, as noted by Kasher (1967) and developed in detail by Watt (1968).

Successive versions of the model would be capable of processing materials of increasing complexity with respect to contextual variables--e.g., the various subsets of "present-day American English" represented in Kučera and Francis (1967). Assuming a restricted automatic thesaurus and a data base in machine-readable form, a first cut at equivalence sets could be provided by separate lists (sorted internally by number of thesaurus group assignments) of sentences containing words or phrases from the same thesaurus groups, and words and phrases from the same group as well as more general or more specific groups. These lists could then be studied in detail to isolate potential equivalence sets. The elements of the basic member or definiens of each set would be identified in the course of this study, and the set membership validated by behavioral tests, which would also serve as a means of eliciting additional members of the set not represented in the data base.

The final step in construction of the model consists in representing the definiens in the notation of formal logic, and representing the other members of the set in terms of the definiens. Analysis of a sentence presented to the model is thus accomplished through a decision procedure for membership in a particular equivalence set, by association with a particular definiens or its converse.

The proposed model is presented as an approximate solution to problems of theory and data orientation. It overcomes the respective weaknesses of the two approaches (see Sections 2.1 and 2.2) by providing a means of arriving at theories of meaning and speech behavior through exploitation of data bases which are subsets of a natural language containing instances of speech behavior used in particular physical and sociocultural environments.
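A rough sketch of the first-cut grouping step described above, under invented names: a toy thesaurus assigns words to groups, and sentences sharing a group are listed together as candidate members of an equivalence set. The thesaurus entries, the sentences, and the group names are all hypothetical, not taken from the paper.

    # Hypothetical first cut at equivalence sets via shared thesaurus groups.
    from collections import defaultdict

    thesaurus = {  # word -> thesaurus groups (invented)
        "bought": {"ACQUIRE"}, "purchased": {"ACQUIRE"},
        "sold": {"TRANSFER"}, "gave": {"TRANSFER"},
    }
    sentences = [
        "John bought a car",
        "John purchased a car",
        "Mary sold a car",
    ]

    candidates = defaultdict(list)  # group -> sentences mentioning it
    for s in sentences:
        for w in s.split():
            for g in thesaurus.get(w, ()):
                candidates[g].append(s)

    for group, members in candidates.items():
        print(group, "->", members)
    # ACQUIRE -> ['John bought a car', 'John purchased a car']
    # Such a list is only a candidate set; per the text, membership would
    # then be validated by behavioral tests.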
Moreover, the concept of equivalence set provides a data-defined approximation of the theoretical notion of a relation, in the sense of symbolic logic. This is of particular interest because symbolic logic has been used as a system of semantic representation both in computer processing of natural language data (Montgomery 1969, especially question-answering systems) and in linguistics (McCawley 1969). Some convergence of linguistic and computational viewpoints is thus already in evidence. If progress toward the explication of natural language and the operations involved in processing it (whether by men or machines) is to continue, linguistic science and automated language processing must increasingly share theories and data, objectives and methods.
null
null
null
null
null
null
null
null
{ "paperhash": [ "kay|the_computer_system_to_aid_the_linguistic_field_worker", "bobrow|a_phonological_rule_tester", "friedman|a_computer_system_for_writing_and_testing_transformational_grammars:_final_report", "londe|tgt:_transformational_grammar_tester", "chapin|on_the_syntax_of_word-derivation_in_english", "kay|experiments_with_a_powerful_parser", "rosenbaum|specification_and_utilization_of_a_transformational_grammar.", "kuno|the_predictive_analyzer_and_a_path_elimination_technique", "stevens|automatic_indexing_:_a_state-of-the_art_report", "luhn|key_word‐in‐context_index_for_technical_literature_(kwic_index)", "kochen|automatic_question-answering_of_english-like_questions_about_simple_diagrams", "earl|automatic_determination_of_parts_of_speech_of_english_words", "kasher|data_retrieval_by_computer._a_critical_survey." ], "title": [ "THE COMPUTER SYSTEM TO AID THE LINGUISTIC FIELD WORKER", "A phonological rule tester", "A computer system for writing and testing transformational grammars: final report", "TGT: transformational grammar tester", "On the syntax of word-derivation in English", "Experiments With a Powerful Parser", "SPECIFICATION AND UTILIZATION OF A TRANSFORMATIONAL GRAMMAR.", "The predictive analyzer and a path elimination technique", "Automatic indexing : a state-of-the art report", "Key word‐in‐context index for technical literature (kwic index)", "Automatic Question-Answering of English-Like Questions About Simple Diagrams", "Automatic determination of parts of speech of English words", "DATA RETRIEVAL BY COMPUTER. A CRITICAL SURVEY." ], "abstract": [ "Abstract : A general discussion is presented of the capabilities and limitations of computers in linguistic research.", "Theoretical and practical values of error coefficients useful in bounding the error in integrating periodic analytic functions with the trapezoidal rule are tabulated for various ranges of the parameters.", "A comprehensive system for transformational grammar has been designed and is being implemented on the IBM 360/67 computer. The system deals with the transformational model of syntax, along the lines of Chomsky''s \"Aspects of the Theory of Syntax.\" The major innovations include a full and formal description of the syntax of a transformational grammar, a directed random phrase structure generator, a lexical insertion algorithm, and a simple problem-oriented programming language in which the algorithm for application of transformations can be expressed. In this paper we present the system as a whole, first discussing the philosophy underlying the development of the system, then outlining the system and discussing its more important special features. References are given to papers which consider particular aspects of the system in detail.", "Chomsky defines a generative grammar as one that; \"attempts to characterize in the most neutral possible terms the knowledge of the language that provides the basis for actual use of language by a speaker-hearer.\" It is \"a system of rules that in some explicit and well-defined way assigns structural descriptions to sentences.\" The syntactic component of such a grammar specifies the well-formed strings of formatives (minimal syntactically functioning elements) in the language and assigns structures to them.", "Massachusetts Institute of Technology. Dept. of Modern Languages and Linguistics. Thesis. 1967. Ph.D.", "Abstract : A description is given of a sophisticated computer program for the syntactic analysis of natural languages. 
The study discusses the notation used to write rules and the extent to which these rules can be made to state the same linguistic facts as a transformational grammar. Whereas most existing programs apply context-free phrase-structure grammars, this new program can analyze sentences with context-sensitive grammars and with grammars of a class very similar to transformational grammars. The program, which is written for the IBM 7040/44 computer, is nondeterministic: The various interpretations of an ambiguous sentence are all worked on simultaneously; at no stage does the program develop one interpretation rather than another. If two interpretations differ only in some small part of a partial syntactic structure, then only one complete structure is stored with two versions of the ambiguous part. The unambiguous portion is worked on only once for both interpretations. Although the current version of the program is written in ALGOL, with very little regard for efficiency, the basic algorithm is inherently much more efficient than any of its competitors. (Author)", "Abstract : The report contains four parts: Part I - The IBM Core Grammar of English. Our current grammar of English is presented in full, and numerous derivations are carried out in detail to illustrate the current generative power of the grammar. Part II - Design of a Grammar Tester. The design considerations on which the present version of the tester was based are discussed, and a set of tentative input, output, and control formats are presented. Part III - Programming for the Grammar Tester. A LISP implementation of the grammar tester is presented. The overall flow of control and the various special functions are described. Part IV - Computer Support for Lexicon Development. A program package (programmed in SNOBOL) to facilitate the compilation, modification, scanning, etc. of the lexicon is described. (Author)", "Some of the characteristic features of a predictive analyzer, a system of syntactic analysis now operational at Harvard on an IBM 7094, are delineated. The advantages and disadvantages of the system are discussed in comparison to those of an immediate constituent analyzer, developed at the RAND Corporation with Robinson's English grammar. In addition, a new technique is described for repetitive path elimination for a predictive analyzer, which can now claim efficiency both in processing time and core storage requirement.", "Report presenting a state-of-the-art survey of automatic indexing systems and experiments. It was conducted by the Research Information Center and Advisory Service on Information Processing, Information Technology Division, Institute for Applied Technology, National Bureau of Standards. Consideration is first given to indexes compiled by or with the aid of machines, including citation indexes. Advantages, disadvantages, and possibilities for modification and improvement are discussed. Experiments in automatic assignment indexing are summarized. Related research efforts in such areas as automatic classification and categorization, computer use of thesauri, statistical association techniques, and linguistic data processing are described. A major question is that of evaluation, particularly in view of evidence of human inter-indexer inconsistency. It is concluded that indexes based on words extracted from text are practical for many purposes today.", "A distinction is made between bibliographical indexes for new and past literature based on the willingness of the user to trade perfection for currency. 
Indexes giving keywords in their context are proposed as suitable for disseminating new information. These can be entirely machine-generated and hence kept up-to-date with the current literature. A compatible coding scheme to identify the indexed documents is also proposed. In it elements are automatically extracted from the usual identifiers of the document so that the coded identifier yields a maximum of information while remaining susceptible to normal methods of ordering.", "Abstract : This paper presents a technique for translating certain English-like questions into procedures for answering them in order to explore how large a class of basic question types can be so processed. The English-like questions all pertain to simple diagrams built of elementary figures with relations like 'above' and 'larger than.' The input to the program into which the algorithm presented here could be implemented are questions such as 'Is it true that in Fig. 1 each triangle is above a circle,' and may include terms like 'how,' 'when,' 'what,' in an interesting variety of interrogative sentence types. The output of the program is a flow diagram for another program to answer the question by inference and search of a structured data base in which representations of diagrams are stored. The English-like source language of questions that the algorithm can process, though restricted and fixed in syntax and domain of discourse, has a potentially wide scope in that it includes some of the fundamental question types.", "The classifying of words according to syntactic usage is basic to language handling; this paper describes an algorithm for automatically classifying words according to thirteen commonly used parts of speech: noun, adjective, verb, past verb, adverb, preposition, conjunction, pronoun, interjection, present participle, past participle, auxiliary verb, and plural or collective noun. The algorithm was derived by a computerized study of the words in The Shorter Oxford English Dictionary. In its operation it utilizes a prepared dictionary of around nine hundred words to assign parts of speech to special or exceptional words. Other words are split into affix and kernel parts and assigned a part of speech on the basis of the part-of-speech implications of the affixes and the length of the remaining kernel. An accuracy of 95 per cent is achieved from the point of view of inclusive part of speech, where inclusive part of speech is defined as that string which contains all the parts of speech attributed to the word by the dictionary but which may also contain one or two more parts of speech.", "Abstract : The report constitutes a critical survey of work in the field of data-retrieval from a theoretical viewpoint. The conditions to be fulfilled, ideally, by a data-retrieval system which distinguish it from other types of data-processing systems are enumerated. Section 1 deals with problems of sentence ambiguity, syntactic and/or semantic, with special reference to context-dependence. Various fallacies and misconceptions are pointed out. Section 2 is devoted to the question of data-based versus text-based systems. Sections 3 and 4 discuss the ambivalent attitude to various theoretical results. Examples: Questions of consistency, decidability, syntactic simplification, inadequate explication. Various attempts to overcome these difficulties are described, and their respective merits and defects discussed. 
Section 5 deals with the fallacious identification of 'conversation' and 'cross -examination', which is traced back to a misunderstanding of a classical paper of Turing. Section 6 digresses, to discuss systems having a two-dimensional input. Section 7 points out the fallacy of indiscriminate application of considerations of efficiency, etc., which are in general not accompanied by careful analysis of the specific systems in question. The conclusion asserts the need to concentrate research on less ambitious lines than hitherto. (Author)" ], "authors": [ { "name": [ "M. Kay" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. G. Bobrow", "J. Bruce Fraser" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Friedman" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Dave L. Londe", "W. J. Schoene" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Chapin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Kay" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Peter S. Rosenbaum", "Fred Blair", "D. Lieberman", "D. Lochak", "P. Postal" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Kuno" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Stevens" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. P. Luhn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Kochen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "L. Earl" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Kasher" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "62545811", "14433444", "60123549", "1616382", "57503576", "26325371", "60851190", "16669681", "57443268", "57853821", "27336172", "16424919", "62477254" ], "intents": [ [], [], [], [], [], [ "background" ], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false, false, false, false, false ] }
Problem: The paper addresses the inadequate intersection between linguists and computer scientists engaged in natural language processing, highlighting their differing perspectives on language nature and data processing approaches. Solution: The hypothesis explores whether the contrasting perspectives of linguists and computer scientists on natural language processing are complementary rather than incompatible, and if so, whether a unified research concept could facilitate a cohesive approach to analyzing natural language.
665
0.004511
null
null
null
null
null
null
null
null
8444ce1af42bd88e2c4c581f67f20efc65b3a1bf
33302434
null
A Rapidly Extensible Language System ({REL} {E}nglish)
REL English in Terms of Modern Linguistics. REL, a Rapidly Extensible Language System, is an integrated information system operating in conversational interaction with the computer. It is intended for work with large or small data bases by means of highly individualized languages. The architecture of REL is based on theoretical assumptions about human information dynamics [1], among them the expanding process of conceptualization in working with data, and the idiosyncratic language use of the individual workers. The result of these assumptions is a system which allows the construction of highly individualized languages which are closely knit with the structure of the data and which can be rapidly extended and augmented with new concepts and structures through a facile definitional capability. The REL language processor is designed to accommodate a variety of languages whose structural characteristics may be considerably divergent. The REL English is one of the languages within the REL system. It is intended to facilitate sophisticated work with computers without the need for mastering programming languages. The structural power of REL English matches the extremely flexible organization of data in ring forms. Extensions of the basic REL English language can be achieved either through ...
{ "name": [ "Dostert, Bozena and", "Thompson, Frederick B." ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 35
1969-09-01
5
3
null
The result of these assumptions is a system which allows the construction of highly individualized languages which are closely knit with the structure of the data and which can be rapidly extended and augmented with new concepts and structures through a facile definitional capability. The REL language processor is designed to accommodate a variety of languages whose structural characteristics may be considerably divergent. The REL English is one of the languages within the REL system. It is intended to facilitate sophisticated work with computers without the need for mastering programming languages.

The REL dialect and idiolects. English is our primary mode of verbal communication, therefore everyone has the right to know what someone else means by it. We use the term "English" in its most ordinary sense, i.e. we bear in mind the fact that there really is no one English language. Rather, the term English refers to as many idiolects as there are speakers, these idiolects being grouped into dialects. The REL English is one such dialect. It shares with natural language also the characteristic of being, in its design and functioning, a conglomerate of idiolects, which we call versions. Thompson's design philosophy of REL [2] defines the theoretical basis for the assumption of an individual, idiolectal approach to the use of information.

The second basic characteristic is that REL English is a formal language. The characteristics of English as a formal language are discussed in an earlier paper [3]. The central thesis of that paper is that English becomes a formal language when the subject matter which it talks about is limited to material whose interrelationships are specifiable in a limited number of precisely structured categories. It is the type of structuration of the subject matter and not the nature of the subject matter itself that produces the necessary limitations. Natural language encompasses a multitude of formal languages, and it is the complexities of the memory structures on which natural language can and does operate that account for the complexities, flexibility and richness of natural language. These latter give rise to the notorious problem of ambiguities in natural language analysis.

What about ambiguities in REL English? The purpose of REL English grammar is to provide a language facilitating work with computers. It is thus assumed that the language is used for a specific purpose in a specific context. Allowance for ambiguities at the phrase level, with subsequent disambiguation through context, is a powerful mechanism in a language. It is this aspect of ambiguity we wish to include. Ambiguities, in the general case and in our case, are due to different semantic interpretations (data structuration) arising from different deep structures. Ambiguous constructions are of two main types: (1) those which are structurally ambiguous, e.g., "Boston ships", which is ambiguous over all relations existing between "Boston" and "ships" (built in Boston, with home port in Boston, etc.); and (2) those which are semantically ambiguous, e.g., "location of King" if "King" can refer both to Captain King and the destroyer King in the data elements. Ambiguities of the first type can be resolved by the specification of the relation, those of the second type by inclusion of larger context. Chomsky's well-known example of an ambiguous sentence, "Flying planes can be dangerous", is of the first type; Katz and Fodor's "bachelor" is of the second type.
The purpose of REL sentence analysis is not to find all possible interpretations of ambiguous sentences irrespective of context. Rather, the purpose is maximal disambiguation where such disambiguation is possible in terms of semantic interpretation, providing for the preservation of ambiguities present in memory structures if the syntactic form of the query is ambiguous.

How does our English compare with English as discussed by modern linguists? On the level of surface structure, they are essentially the same. Some more complex transformationally derived strings, such as certain forms of ellipsis, are not handled as yet. However, most of the common forms are treated in a straightforward manner. Although some constructions which can be formed in natural conversational English are not provided in the basic English package, such deficiencies can to a large extent be overcome by the capability for definitional extension provided by the system.

The level of deep structure presents more problems. As distinct from surface structure, deep structure is that level of syntactic analysis which constitutes the input to semantic analysis, both in Chomsky's [4] terms and ours. What is the nature of this semantic interpretation? In the general case, little is known. In our case, as in most types of computer analysis, interpretation is in terms of the internal forms of organization of the data in memory. To the extent that the constituents of deep structure can be directly correlated with corresponding structures in the data, semantic analysis, and therefore sentence analysis, can be carried to completion.

It is important to distinguish, in this regard, between two quite distinct though related ways in which language use can be restricted. The first is by the ways in which the data is organized, that is, the structural forms used and the interlinkages which are formed for the manipulation of these structures. This type we will call "structural" restrictions. The second is by restrictions of the subject matter, or the universe of discourse; this we will call "discourse" restrictions. When one restricts the universe of discourse to a body of material which is naturally formal or has been formalized ...

We are accustomed to the traditional definition saying that a verb denotes an action or a state: an action is performed by an actor (subject) on an object; a state is a momentarily or permanently frozen action between subjects and objects. Thus, an action, in this inclusive sense, is characterized by (1) the aspect of beginning, ending, duration or momentariness; (2) its situation in time; and (3) its reference to subjects and objects. Two groups of verbs may be distinguished: those referring to a relation between subjects and objects, and those which establish a connection between them. The relation expressed by a verb constitutes (and is here referred to as) its 'predicate'; such verbs are called relation verbs. Relation verbs also express temporal aspects of a relation. Typical relation verbs are "arrive" and "leave"; both refer to the relation of 'location'; "arrive" refers to the beginning of the existence of this relation, and "leave" to its ending. For instance, "John left Boston" means that the relation of 'location' existing between John and Boston came to an end. Verbs which express a connection between subjects and objects are referred to as copulas, e.g. "is" in "John is a boy". The copula itself constitutes the predicate.
Each of the four following sentences contains, in surface structure, a different verb.

(i) John arrived in Boston.
(ii) John left Boston.
(iii) John lived in Boston.
(iv) John is residing in Boston.

The underlying structure of these sentences is identical: 'location (John, Boston)', except for the temporal aspect of this relation: beginning in (i), ending in (ii), momentariness in (iii) and duration in (iv). In REL English, the temporal aspect is denoted by the 'tense character' feature.

Verbs are introduced through language extension. They are defined in terms of a relation and a tense character. The relation must denote an already existing ring structure. Thus, given the relation of "location", the verb "arrive" is defined by: def:arrive:verb (location, 1).

A verb is internally represented as a "verb table". A copula is represented with the copula itself as predicate and (at present) no tense character. The copulas are: "is", "are", "was", "were", and their contracted negatives. For example, the rule 'C → wasn't' will result in the creation of a verb table with a copula predicate, time equal to past, and the feature of negation.
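The following is a minimal sketch of what such a verb table might look like as a record; the field names and the rendering of the "arrive" and "wasn't" examples are ours, inferred from the description, not the authors' implementation (which worked in terms of ring structures).

    # Hypothetical rendering of a REL-style verb table as a record.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class VerbTable:
        predicate: str                  # relation name, or "copula"
        tense_character: Optional[int]  # e.g. 1 = beginning of the relation
        t1: object = 0                  # start of the time interval
        t2: object = "now"              # end of the time interval
        features: set = field(default_factory=set)  # e.g. {"negative", "passive", "singular"}
        subject: Optional[str] = None
        obj: Optional[str] = None

    # def:arrive:verb (location, 1) -- "arrive" names the beginning of 'location'
    arrive = VerbTable(predicate="location", tense_character=1)

    # The rule C -> wasn't yields a copula predicate, past time, and negation:
    wasnt = VerbTable(predicate="copula", tense_character=None,
                      t1=0, t2="past", features={"negative"})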
The elements of a verb table correspond to the elements of a kernel clause (i.e. one with a single deep structure Phrase-marker). The function of inflectional morphemes and auxiliary verbs is twofold: modification of the original time in the verb table and setting of syntactic features. For example, the past tense morpheme and the auxiliary "did" modify t1 to be past, thus establishing the time interval '0 to now' (i.e. t1 = 0, t2 = now); the auxiliary "will" modifies t2 to be future, thus establishing t1 = now, t2 = ∞; 3rd person singular and the auxiliaries "has" and "does" set the singular feature.

(i) V → N V (rule putting the subject on)
(ii) V → V N (rule putting the object on)
(iii) V → V by N

These rules also check whether the passive feature is on (set by copulas forming passives), and if it is, rule (i) converts the N into an object; rule (ii) converts the N into a subject; rule (iii) applies only if the passive feature is on and converts the N into a subject. ... is within the time interval '0 to 1950', the answer will be affirmative. If the input is a negative question, we use the time for which the relation does not hold.

The output of rule (ii) is one or more Ns. These are supplied by the clause processing routine as either subjects or objects. In example (ii), subjects are supplied. The output of rule (iii) is a time (or time list) at which the relation indicated in the verb table holds. The output may be ambiguous. For example, "Did Smith live in New York?" would result in both "yes" and "no" as ambiguous output if "Smith" referred to one Smith who did and another who did not.

The data is deleted if the verb table has the negative feature set. There are two restrictions on the above rules: the subject of the verb table in (i) must not be modified; and the predicate of the verb table in (ii) must be a copula.

The structure of the input sentence determines the structural relations to be established between items in the data. Subordinate clauses modify some item in another clause, or another clause as a whole. In REL English, clauses of the first type are always relative clauses; they are introduced by the pronouns "who", "which", "that", "whom" and "whose". Clauses modifying other clauses are temporal clauses introduced by "before", "after" and "when", and result in time modification.

Examples of relative clauses:

(i) N → N who V

In (iii) the N for which the relation R holds is used as subject or object. Parallel to the above rules are rules with a comma preceding the relative pronoun. However, the rules with commas apply only to post-modified NXs, and the comma plays a disambiguating function. For instance, "parents of boys who left Boston" can be ambiguous; in our English, the relative clause refers to "boys", while in "parents of boys, who left Boston" the relative clause refers to "parents".

Examples of temporal clauses:

(i) M → before V

The result is, obviously, a "yes" answer. The output of rule (ii) is a tense modifier with the time interval t1-t2 given by the date specified by the subordinate clause.
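To pull the pieces together, here is a toy sketch (our own, not the authors' code) of how auxiliaries might modify the time interval of the verb table defined earlier, and how the subject and object rules might attach noun phrases, including the passive check.

    # Toy sketch of REL-style clause processing over the VerbTable above.
    import math

    def apply_auxiliary(vt, aux, now=1969):
        # "did" / past morpheme: interval 0..now; "will": now..infinity.
        if aux in ("did", "past"):
            vt.t1, vt.t2 = 0, now
        elif aux == "will":
            vt.t1, vt.t2 = now, math.inf
        elif aux in ("has", "does"):
            vt.features.add("singular")
        return vt

    def attach_subject(vt, n):   # rule (i): V -> N V
        if "passive" in vt.features:
            vt.obj = n           # passive flips the roles
        else:
            vt.subject = n
        return vt

    def attach_object(vt, n):    # rule (ii): V -> V N
        if "passive" in vt.features:
            vt.subject = n
        else:
            vt.obj = n
        return vt

    # "Did John arrive in Boston?"
    vt = apply_auxiliary(arrive, "did")
    vt = attach_subject(vt, "John")
    vt = attach_object(vt, "Boston")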
As each man is generated in turn, say man_i, the sentence "Is Boston the location of man_i?" must be processed. If the answer to all cases is "yes", then the original question is answered affirmatively; otherwise, it is answered negatively. In the case of the "some" generator, the answer would be affirmative if at least one of the men lives in Boston. The relevant rules are: N_GE → N J N; N_GEJ → N, N J; V_GE → V J N; V_GEJ → V, V J, where the GE subscript indicates a generated phrase. If the J-phrase is "and", the generator is an "all" generator; if the J-phrase is "or", the generator is a "some" generator. An example is given in figure 8.
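The all/some generation loop just described can be sketched in a few lines of Python. The location table and all names are invented for illustration; REL's actual ring-structure storage is not modeled:

```python
# Sketch of quantifier generation over a class of objects.

location = {"John": "Boston", "Peter": "Boston", "Max": "Chicago"}

def kernel_answer(man, place):
    """Process the kernel question 'Is <place> the location of <man>?'."""
    return location.get(man) == place

def generate(quantifier, men, place):
    answers = [kernel_answer(m, place) for m in men]  # one kernel question per man
    if quantifier == "all":    # 'and'-type generator
        return all(answers)
    if quantifier == "some":   # 'or'-type generator
        return any(answers)

print(generate("all", location, "Boston"))   # False: Max's location is Chicago
print(generate("some", location, "Boston"))  # True: John's is Boston
```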
null
null
Main paper:
The result of these assumptions is a system which allows the construction of highly individualized languages which are closely knit with the structure of the data and which can be rapidly extended and augmented with new concepts and structures through a facile definitional capability. The REL language processor is designed to accommodate a variety of languages whose structural characteristics may be considerably divergent. The REL English is one of the languages within the REL system. It is intended to facilitate sophisticated work with computers without the need for mastering programming languages. The REL dialect and idiolects: English is our primary mode of verbal communication; therefore everyone has the right to know what someone else means by it. We use the term "English" in its most ordinary sense, i.e. we bear in mind the fact that there really is no one English language. Rather, the term English refers to as many idiolects as there are speakers, these idiolects being grouped into dialects. The REL English is one such dialect. It shares with natural language also the characteristic of being, in its design and functioning, a conglomerate of idiolects, which we call versions. Thompson's design philosophy of REL [2] defines the theoretical basis for the assumption of an individual, idiolectal approach to the use of information. The second basic characteristic is that REL English is a formal language. The characteristics of English as a formal language are discussed in an earlier paper [3].
The central thesis of that paper is that English becomes a formal language when the subject matter which it talks about is limited to material whose interrelationships are specifiable in a limited number of precisely structured categories. It is the type of structuration of the subject matter and not the nature of the subject matter itself that produces the necessary limitations. Natural language encompasses a multitude of formal languages, and it is the complexities of the memory structures on which natural language can and does operate that account for the complexities, flexibility and richness of natural language. These latter give rise to the notorious problem of ambiguities in natural language analysis. What about ambiguities in REL English? The purpose of REL English grammar is to provide a language facilitating work with computers. It is thus assumed that the language is used for a specific purpose in a specific context. Allowance for ambiguities at the phrase level, with subsequent disambiguation through context, is a powerful mechanism in a language. It is this aspect of ambiguity we wish to include. Ambiguities, in the general case and in our case, are due to different semantic interpretations (data structuration) arising from different deep structures. Ambiguous constructions are of two main types: (1) those which are structurally ambiguous, e.g., "Boston ships" is ambiguous over all relations existing between "Boston" and "ships" (built in Boston, with home port in Boston, etc.); and (2) those which are semantically ambiguous, e.g., "location of King" if "King" can refer both to Captain King and the destroyer King in the data elements. Ambiguities of the first type can be resolved by the specification of the relation, those of the second type by inclusion of larger context. Chomsky's well-known example of an ambiguous sentence, "Flying planes can be dangerous", is of the first type; Katz and Fodor's "bachelor" is of the second type. The purpose of REL sentence analysis is not to find all possible interpretations of ambiguous sentences irrespective of context. Rather, the purpose is maximal disambiguation where such disambiguation is possible in terms of semantic interpretation, providing for the preservation of ambiguities present in memory structures if the syntactic form of the query is ambiguous. How does our English compare with English as discussed by modern linguists? On the level of surface structure, they are essentially the same. Some more complex transformationally derived strings, such as certain forms of ellipsis, are not handled as yet. However, most of the common forms are treated in a straightforward manner. Although some constructions which can be formed in natural conversational English are not provided in the basic English package, such deficiencies can to a large extent be overcome by the capability for definitional extension provided by the system. The level of deep structure presents more problems. As distinct from surface structure, deep structure is that level of syntactic analysis which constitutes the input to semantic analysis, both in Chomsky's [4] terms and ours. What is the nature of this semantic interpretation? In the general case, little is known.
In our case, as in most types of computer analysis, interpretation is in terms of the internal forms of organization of the data in memory. To the extent that the constituents of deep structure can be directly correlated with corresponding structures in the data, semantic analysis, and therefore sentence analysis, can be carried to completion. It is important to distinguish, in this regard, between two quite distinct though related ways in which language use can be restricted. The first is by the ways in which the data is organized, that is, the structural forms used and the interlinkages which are formed for the manipulation of these structures. This type we will call "structural" restrictions. The second is by restrictions of the subject matter, or the universe of discourse; this we will call "discourse" restrictions. When one restricts the universe of discourse to a body of material which is naturally formal or has been formalized ... We are accustomed to the traditional definition saying that a verb denotes an action or a state: an action is performed by an actor (subject) on an object; a state is a momentarily or permanently frozen action between subjects and objects. Thus, an action, in this inclusive sense, is characterized by (1) the aspect of beginning, ending, duration or momentariness; (2) by its situation in time, and (3) by referring to subjects and objects. Two groups of verbs may be distinguished: those referring to a relation between subjects and objects, and those which establish a connection between them. The relation expressed by a verb constitutes (and is here referred to as) its 'predicate'; such verbs are called relation verbs. Relation verbs also express temporal aspects of a relation. Typical relation verbs are "arrive" and "leave"; both refer to the relation of 'location'; "arrive" refers to the beginning of the existence of this relation, and "leave" to its ending. For instance, "John left Boston" means that the relation of 'location' existing between John and Boston came to an end. Verbs which express a connection between subjects and objects are referred to as copulas, e.g. "is" in "John is a boy". The copula itself constitutes the predicate. Appendix:
null
null
null
null
{ "paperhash": [ "thompson|rel:_a_rapidly_extensible_language_system", "craig|deacon:_direct_english_access_and_control", "thompson|english_for_the_computer" ], "title": [ "REL: A Rapidly Extensible Language system", "DEACON: direct English access and control", "English for the computer" ], "abstract": [ "In the first two sections of this paper we review the design philosophy which gives rise to these features, and sketch the system architecture which reflects them. Within this framework, we have sought to provide languages which are natural for typical users. The third section of this paper outlines one such application language, REL English.\n The REL system has been implemented at the California Institute of Technology, and will be the conversational system for the Caltech campus this fall. The system hardware consists of an IBM 360/50 computer with 256K bytes of core, a drum, IBM 2314 disks, an IBM 2250 display, 62 IBM 2741 typewriter consoles distributed around the campus, and neighboring colleges. Base languages provided are CITRAN (similar to RAND's JOSS), and REL English. A basic statistical package and a graphics package are also available for building special purpose languages around specific courses and user requirements.", "The extensive syntactic ambiguity inherent in natural language has been convincingly shown by such systems as the Harvard syntactic analyzer. Furthermore, no semantic techniques are in prospect for satisfactory resolution of this ambiguity by computer. In contrast, well-developed semantic techniques exist for formal languages.", "What about English as a programming language? Few would question that this is a desirable goal. On the other hand, I dare say every one of us has rather deep reservations both about its feasibility and about a number of problems that it entails. This paper presents a point of view which gives some clarity to the relationship between English and programming languages. This point of view has found substance in an experimental system called DEACON. The second paper in this session will describe the specific DEACON system and its capabilities." ], "authors": [ { "name": [ "F. B. Thompson", "P. Lockemann", "Bozena Henisz-Dostert", "R. S. Deverill" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. A. Craig", "Susan C. Berezner", "Homer C. Carney", "Christopher R. Longyear" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "F. B. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "14782642", "8315546", "16173809" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
null
665
0.004511
null
null
null
null
null
null
null
null
0bff540652f64548639586ec848f5dccb0f8e846
8874769
null
On Some Common Properties of the Semantic Categories and the Generative Procedures of Three Synthesis Models in the MT Process (summary)
A. Ludskanov (Sofia). This communication aims: I. to present the generation model of the Sofia group (MS); II. to compare the semantic categories (CS) and the generative procedures ...
{ "name": [ "Ludskanov, A." ], "affiliation": [ null ] }
null
null
International Conference on Computational Linguistics COLING 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
0
null
null
null
null
... of the models of semantic synthesis of Moscow (MM) and of CETA (MG), to deduce from them the semantic invariants (IS), and to analyze the logical nature of the IS; III. to propose an improvement in principle of the MS. I. The aim of the MS is to generate synthetic and analytic Bulgarian equivalents at the level of word forms and syntagms, starting from the L and the CS. The mathematical-statistical procedure proposed for determining the starting units (specific lexemes) and the means of derivation ("simple" and "compound" morphemes) is described. Twenty-one CS are introduced (agentis, patientis, actionis, abstractionis, instrumenti, loci, collectionis, qualificationis, feminisationis, deminutivisationis, for N; abundantiae, originis, materiae, originis feminalis, for A; causativum domesticum, causativum barbaricum, intensivum, inchoativum, statuale domesticum, statuale barbaricum, for V). The realization of the CS for each starting unit is represented by a matrix. The generative procedures (GP) are based on these matrices. II. Representing the CS of the MS, the lexical functions of the MM and the semantic types of the MG by sets of two and three elements makes it possible to group the isomorphic units (e.g. nomina actionis) and to deduce from them the IS; despite the different aims and levels of these 3 models, this makes it possible to complete the inventory of CS and to trace the general lines of research for drawing up the list of CS necessary and sufficient for "semantic" generation on all levels. III. Most of the GP in question can be represented as relations. Their common properties (reflexivity, symmetry, transitivity, functionality) are analyzed on the basis of set theory. The analysis of the logical properties of a subset of the GP makes it possible to base the part of the model whose aim is to generate the synthetic equivalences on the following general idea: instead of representing this process as an attachment of suffixes to the root ...
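The set-theoretic properties named at the end of the abstract (reflexivity, symmetry, transitivity, functionality) are straightforward to check for a finite relation. A minimal Python sketch; the example relation is invented, since the abstract gives no data:

```python
# Checks of the four properties the abstract lists for generative
# procedures viewed as relations; the finite relation R is invented.

def reflexive(R, domain):
    return all((x, x) in R for x in domain)

def symmetric(R):
    return all((y, x) in R for (x, y) in R)

def transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def functional(R):
    firsts = [x for (x, _) in R]
    return len(firsts) == len(set(firsts))  # each x related to at most one y

R = {(1, 2), (2, 3), (1, 3)}
print(reflexive(R, {1, 2, 3}), symmetric(R), transitive(R), functional(R))
# -> False False True False
```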
null
Main paper: Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
d6dab3c7f80fd14e0b38adadc69b9338f58443bd
29195329
null
An Application of an Extended Generative Semantic Model of Language to Man-machine Interaction
This paper discusses the feasibility of applying a model of language use based on a modification and extension (to be discussed below) of the generative semantic (transformational) theory of language competence recently developed by Paul Postal, George Lakoff, John Robert Ross, James D. McCawley, and others, to problems of computational linguistics.
{ "name": [ "Binnick, Robert I." ], "affiliation": [ null ] }
null
null
International Conference on Computational Linguistics COLING 1969: Preprint No. 18
1969-09-01
23
4
null
The theory of generative semantics, to be discussed in section II, is an outgrowth of, and reaction to, Chomsky's 1965 theory of transformational linguistics. It is a radical theory which deals with a very great range of problems with very abstract methods. Those working in this paradigm hold that there is a linguistic level reflecting conceptual or semantic structure which is directly convertible into surface syntax by a single set of garden-variety transformations, with no significant intermediary level, that is, no "deep structure". Those of us working in generative semantics believe that methods substantially those long familiar in linguistics can achieve very abstract, very general results which treat semantics in a more serious and enlightening way than ever before. I do not, I think, support this very strong claim very well in section II, but I provide summaries of several studies and a lengthy bibliography of works which when consulted will hopefully give some feeling for what is being attempted, I think not without results. But generative semantics is a model, or rather, a theory, of competence, like most serious theories of language now held to by American linguists. Even if, as might be claimed, our semantic structures are to be merely variants of the structures long familiar from formal logic, so that if our assumptions are correct, we will ultimately be able to directly transform surface structures into underlying semantic structures, the majority of actual sentences, as well as all hypersentential structures, the treatment of which has been swept under the rug of "performance", will remain unhandleable. Accordingly, I propose initially certain extensions and modifications of the theory to make it in some sense a model of performance. But if we are to apply it to the computer, a major component must still be added. The impetus to this application is the possibility of creating an understanding machine, described in section IV below. Since the actual human interpretation of language depends on past knowledge (consider which of these sentences is good and why: As for Albuquerque, the Eiffel Tower is pretty. As for Paris, the Eiffel Tower is pretty. Shirley is a blonde and Susan is Nordic-looking too. Shirley is a linguist and Susan is Nordic-looking too.), the old split between semantics, syntax, and pragmatics must be revised, and our model closely linked with a memory and possibly a logic component as well. Obviously this defines a very difficult task, but insofar as such goals as MT, artificial intelligence, and machine reading of handwritten material or writing of spoken material involve comprehension on the part of the machine, of which there seems to be no doubt, these important goals will continue to elude us until such time as we can devise such an understanding machine as I have described below. I believe that generative semantics lays the foundation for studies relevant to such a development, and it is in this context that my proposals are made. In section II I will discuss generative semantics. In section III I will discuss the body of my proposals here. In section IV I will discuss what should be required of a generalized "understanding" machine. Part II. The theory of Generative Semantics. The theory of generative semantics is an outgrowth of and reaction to the theory of transformational grammar as represented in Chomsky's 1965 book, Aspects of the Theory of Syntax (MIT Press).
To a very great extent, this theory has been the development of a small group of former students of Chomsky's or their close colleagues. John (Haj) Ross has said that the theory is really just an attempt to explicate Paul Postal's work of five years ago to date. If Postal was the founder of this school, if you can call it that, its main workers have been Haj Ross and George Lakoff, who from 1965 on swept aside most of transformational linguistics as it then was. But perhaps best known of the group is James McCawley, who graduated from MIT in 1965 with a Ph.D. based on work in phonology, not syntax or semantics. He promptly amazed Lakoff and Ross by some very substantive work in the latter areas as well as phonology. As a student of McCawley's, I will be emphasizing his contributions here, and those of my colleagues at Chicago, Jerry L. Morgan and Georgia M. Green, but it should be kept in mind that people like Ross, Lakoff, Postal, Arnold Zwicky, David Perlmutter, Emmon Bach, Robin Lakoff, and several others have made the current theory possible, and that many others, such as Robert Wall, Lauri Karttunen, Ronald Langacker, and others, have contributed as well. It should also be kept in mind that the Case Grammar of Fillmore and the work done by Gruber, while differing from generative semantics, have contributed a great deal to it. The basic theory of generative semantics is built upon an attempt to relate the underlying semantic structure of language to the surface, phonetic manifestation of that underlying structure. That is, a phonetic reality is recognized, and a semantic reality is recognized. But unlike other versions of transformational grammar, this theory assigns no special status to syntax; syntax is subsumed in the semantics. McCawley has jokingly referred to his theory as being one of either "semantax" or "synantics". The name generative semantics is not a particularly good one, since it implies that the goal of the theory is, as with the work of Chomsky, to "separate the grammatical sequences" of a language "from the ungrammatical sequences" (Chomsky, Syntactic Structures, 13).
In other words, to generate all and only grammatical sentences of a language. This is not at all the goal of generative semantics. Rather, what we want to do is in some rigorous way specify the correlations of underlying semantic entities and surface phonetic entities: to specify for any underlying semantic structure what its possible phonetic realizations in some language are, and for some phonetic structure what underlying semantic structures it can represent. Naturally, some descriptive ability is predicated as well; that is, we want to be able to define ambiguity in some algorithmic fashion, we want to be able to define levels or classes of ill-correlation between structures on different levels, etc. Chomsky would say that a sentence like "Golf plays John" is eminently deserving of a star; we would say (1) if it's supposed to mean 'John plays golf', it doesn't succeed in conveying the message; (2) if it's supposed to mean 'John loves Marsha', then it's really bad; and (3) if Golf is a man's name and John the name of a game or role, it's a good sentence --- indeed, one can very well imagine arcane circumstances under which one might utter that sentence with the intent of saying that the game plays John, that the tail wags the dog, as it were. Suppose, for example, that John's wife were tired of him spending all his free time playing golf and she grumbled to a neighbor about it, and the neighbor rather unfeelingly replied, "Oh well, John plays golf." I can very well imagine John's wife complaining bitterly, "Oh no, golf plays John." In any case, it is for his unimaginative approach to language that Chomsky has been jokingly called a "bourgeois formalist". Even when we use stars, we try to keep in mind that just about any valid phonological string of a language conveys one or more meanings in some context, and that it is artificial to take a string out of context and declare it good or bad. So "generative semantics" is a bad name. The following diagram of the components of the theory is based on McCawley's paper in the proceedings of the 4th Regional Meeting of the Chicago Linguistic Society (1968). A theory very similar is discussed in Ronald Langacker's book Language and its Structure (Harbrace, 1968). The above diagram comes from a report prepared by myself, Jerry Morgan, and Georgia Green, called the Camelot report, which attempted to describe the current state of transformational research in the Summer of 1968, particularly in reference to the LSA Summer Linguistic Institute at the University of Illinois, where Haj Ross, George Lakoff, and Jim McCawley had lectured to large groups on a huge number of very "hairy" (i.e., difficult and ticklishly novel) topics. In that report (which was prepared for Victor Yngve), we raised several questions concerning the above representation. We asked three questions in particular: what is the nature of semantic representation; what constraints can be placed on the rules relating semantic representation to surface structure; and what is the nature of lexical insertion. These were by no means all of the questions asked. Needless to say, the answering of these questions has hardly begun and will undoubtedly guarantee linguists a few good centuries of work at least. It is only in the last decade that syntax has been the subject of serious work, and we are still only discovering how ignorant we are. Semantics is even newer, less than a decade old. If anyone doubts that this is true, consider a) what the above 3 questions would have meant to a linguist in (say) 1955, and b) why he would have been wrong in his (lack of) comprehension of them. One of the great contributions of Postal and Ross has been their constant critical look at transformational grammar.
One of the things they saw was that our transformations were (and are) extremely powerful devices, with practically no constraints placed on their formulation. What I will do here is summarize some of the attempts at partial answers to the three above questions. In this way I can delimit and explicate generative semantics best. I will start by abstracting parts of two papers by McCawley that deal with the nature of semantic representation. In a paper in the Japanese journal Kotoba no Uchu (World of Language) in 1967, McCawley argued that semantic representation would be similar to syntactic representation as familiar from Aspects-type grammar, but that it would also be quite similar to symbolic logic as familiar from the tons of work that have followed Principia and such studies. That semantic representation should resemble syntactic representation makes sense if only because we are arguing for a single set of rules that transforms (i.e., relates) the underlying structure into (to) the surface structures. There will be more about that later. McCawley argues as follows: the following devices have all had a role in symbolic logic: 1. propositional connectives: 'and', 'or', 'not'. ... 3. predicates, denoting properties and relationships. Consider the ambiguous sentence 'John doesn't beat his wife because he loves her.' If the negation applies to 'John beats his wife', the sentence means 'the reason that John doesn't beat his wife is that he loves her', whereas if it applies to 'John beats his wife because he loves her', the meaning is 'the reason that John beats his wife is not that he loves her.' Notice that here a surface form represents at least two different underlying structures which nonetheless contain precisely the same semantic elements--grouped differently, however. Another point made is that "semantic representations must include ... some indication of presupposed coreference." (p. 2) That is, the following sentence in neutral (i.e. null) context is ambiguous three ways: John told Harry that his wife was pretty. John's? Harry's? or a third's? It could be any. However, if we know who his refers to, there is no such ambiguity. This may seem trivial, but it is a point often ignored. McCawley then gives an argument for referential indices being different from expressions used to describe. The sentence 'Max denied that he kissed the girl he kissed.' is not contradictory if "the girl he kissed" is the speaker's description. Another notion is that of presupposed set membership. 'Max is more intelligent than most Americans.' said with primary stress on most, the sentence is good if and only if Max is presupposed to be American, that is, the sentence implies Max is American. With primary stress on Americans, however, Max is presupposed not to be American. Presupposition is in general a very hairy topic which was recently the subject of an entire conference (at the Ohio State University). We know very little about the nuances of implication and are only beginning even to identify the problems. But if a machine is ever to read Catcher in the Rye catching all the nuances of the italicized words, we had better find out how stress is used to alter the presuppositional set of a sentence. I need not be so unsubtle as to suggest the extreme value of such researches to psychology. Perhaps they already know about all this, for all I know. In any case I cannot restrain myself from including McCawley's beautiful example 'CIA agents are more stupid than most Americans.' He had primary stress on the most, but I prefer to think of it as going on the Americans.
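The scope ambiguity above can be made concrete as two groupings of the same semantic elements. A minimal Python sketch; the tuple encoding (predicate, arguments...) is an invented notation, not McCawley's:

```python
# The two readings of 'John doesn't beat his wife because he loves her',
# built from the same semantic elements; the encoding is invented.

BEAT = ("beat", "john", "wife")
LOVE = ("love", "john", "wife")

# Reading 1: NOT applies only to 'John beats his wife':
# 'the reason John doesn't beat his wife is that he loves her'.
reading1 = ("because", ("not", BEAT), LOVE)

# Reading 2: NOT applies to the whole because-clause:
# 'the reason John beats his wife is not that he loves her'.
reading2 = ("not", ("because", BEAT, LOVE))

# Same terminal elements, different constituent structure:
def leaves(t):
    if isinstance(t, str):
        return [t]
    return [x for sub in t[1:] for x in leaves(sub)] + [t[0]]

print(sorted(leaves(reading1)) == sorted(leaves(reading2)))  # True
print(reading1 == reading2)                                  # False
```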
I would like to interject at this point a minor apology. I have been rather fan-clubish here and have waved my hand a lot. Frankly I see no value in rehearsing here all the arguments available elsewhere. But I would like the reader to bear in mind that my skimpy resume in no way reflects the quality of the original. Let me also note, lest I seem unduly credulous towards the thoughts of Chairman Quang (mild-mannered linguist McCawley is in reality Q. P. Dong, Chairman of Unamerican Studies at an unknown university), that most of us working within the paradigm of generative semantics would be the first to admit that our theories haven't a prayer of being right, that is, they do not approach even a partially realistic and naturalistic theory of language. If we like it better than other paradigms it is because we believe that no other current theory is any better and that this one at least has a good chance of self-improvement. (End of apologia.) If semantic representation looks much like logical representation, it also differs from it. In the Kotoba no Uchu paper McCawley noted the following differences: 1. "It is necessary to admit predicates which assert properties not only of individuals but also of sets and propositions." 2. "In mathematics one enumerates certain objects which [one] will talk about, defines other objects in terms of these objects, and confines [oneself] to a discussion of objects which [one] has either postulated or defined .... However, one does not begin a conversation by giving a list of postulates and definitions .... people often talk about things which either do not exist or which they have identified incorrectly .... indices exist in the minds of the speaker rather than in the real world; they are conceptual entities which the individual speaker creates in interpreting his experience." In the Wenner-Gren symposium, McCawley had more to say about the difference between logic and language. 1. Immediate constituent structure (trees) rather than parentheses is basic. First, "semantic representations are to form the input to a system of transformations that relate meaning to superficial form; to the extent that these transformations have been formulated and justified, they appear to be stateable only in terms of constituent structure and constituent type, rather than in terms of configurations of parentheses and terminal symbols." Secondly, "it may be necessary to operate in terms of semantic representations in which symbols have no left-to-right ordering ...." 2. There will have to be more 'logical operators', such as most, almost all, and many. 3. "And" and "or" cannot be regarded as just binary operators but must be allowed to take an arbitrary number of operands. 4. The quantifiers must be restricted rather than unrestricted as in most logical systems. Some quantifiers imply existence: 'All dogs like to bite postmen.' involves the presupposition that dogs exist, whereas the unrestricted quantifiers logicians use have no such presupposition. 5. The semantic representation of sentences involving 'shifters' (Jakobson, 1957) such as I, you, now, ..., gestures and deictic words like this and that, and tenses, will have to include reference to the speech act. The most promising approach to this aspect of semantic representation ... is Ross's (1969) elaboration of Austin's (1962).
"The range of indices will ~ave to be enormous.In particular, it will have to include not only indices that purport to refer to physical objects, but also indices corresponding to mythical or literary objects, so that one can represent the meaning of sentences such asThe Trobriand Islanders believe in Santa 61aus, but they call him Ubu Ubu." 7. McC. rejects "the traditional distinction between 'predicate' and 'logical operator' and trea~s~ such 'logical operators' as quantifiers, conjunctions, and negation as predicates...."To clarify the relationship of semantic to syntactic representations let me quote here from McCawley's Kotoba no Uchu paper:Since the rules for combining items into larger units in symbolic logic formulas must be stated in terms of categories such as 'preposition', Ipredicatel, and 'index'~ these categories can be regarded as labels on the nodes of these trees.And since ... these categories all appear to correspond to syntactic categories, the same symbols (S, V, NP, etc. ) may be used as node labels in semantic representations as are used in syntactic representations.Accordingly, semantic representations appear to be extremely close in formal nature to syntactic representations, so close in fact that it becomes possible to catalogue the conceivable formal differences and determine whether those differences are real or apparent° Among such differences he lists:I. "The items in a s#ntactic representation must be assigned a linear order, whereas it is not obvious that linear ordering of items in a semantic representation makes shy sense."2. "Syntactic representations inwolve lexical items from the language as their terminal nodes, whereas the terminal nodes in a semantic representation are semantic units rather than lexical units.""There are many syntactic categories which appear to play no role in semantic representation, for ex., verb-phrase, preposition, and prepositional phrase." (At the 5th Regional Meeting of the CLS, April of this year, A. L. Becket of the University of Michigan presented a paper in which he argued prepositions are underlying predicates; prepositional phrases are accordingly verb-phrases. )McCawEey concluded nonetheless that these differences do not provide an argument that semantic ~epresentations are different in formal nature from syntactic representations. Again, I will omit his reasons for that conclusion.I might summarize all this by saying: i. Semantic representatio~ is a modification of the representations long familiar from ~ormal logic.2. Such representations do not radically differ from the surface syntactic representations of Aspectstype grammar. I will now turn to the second question raised above on p. II-3o This question has as yet received little study.It is a very difficult topic, but a very important one.I will confine myself here to a few brief comments and a few references.One of the important studies underway now is about syntactic variables.This was the subject of Ross' 1967 dissertation. 
Variables such as X and Y are familiar from transformational grammars, but no one had attempted before to specify in general what the notion of syntactic variable entailed. While Ross' study was important, and he came up with several important constraints on the form of transformations, much work remains. Lakoff and Postal are also working on related questions. Let me list here some of the constraints Ross gave in his thesis: 1) The complex NP constraint. No element contained in a sentence dominated by a noun phrase with a lexical head noun may be moved out of that noun phrase by a transformation. (p. 127) 2) The cross-over condition. No NP mentioned in the structural index of a transformation may be reordered by that rule in such a way as to cross over a coreferential NP. (p. 132) 3) The coordinate structure constraint. In a coordinate structure, no conjunct may be moved, nor may any element contained in a conjunct be moved out of that conjunct. (p. 161) 4) The pied piping convention. Any transformation which is stated in such a way as to effect the reordering of some specified node NP, where this node is preceded and followed by variables in the structural index of the rule, may apply to this NP or to any noncoordinate NP which dominates it, as long as there are no occurrences of any coordinate node, nor of the node S, on the branch connecting the higher node and the specified node. (That is, any NP above some specified one may be reordered instead of the specified one, but there are environments where the lower NP may not be moved, and only some higher one can, consonant with the conditions imposed in the convention.) A very important constraint occurs on p. 480 of the thesis, but I omit it here because it contains many terms I would not care to define here. I recommend Ross' dissertation for anyone with doubts about any deep principles of language organization emerging from our studies in transformational grammar. He will be cured. Recently George Lakoff has studied the notion of "derivational constraint". This study is quite recent and still very very hairy, but hints in his 1969 CLS paper, and comments by Postal on it, suggest that rule ordering is merely a special case or manifestation of a deeper principle of grammar organization. The next revolution effected by generative semantics may well be to drop rule ordering from our canons. For various reasons (partly that it interests me more) I will have much more to say here about lexical insertion than I will about constraints on transformations, although undoubtedly the latter is ultimately of much greater importance. Until 1965 or so, it was assumed that the terminal symbols of a P-marker are lexical items; the lexicon merely assigns properties to these items. Gruber in his 1965 dissertation argued
(p.l) He concluded that "a level at which semantic interpretation w~ll be relevant will ... be deeper than the level of 'deep structure' in syntax." (p.2) Later Lakoff showed evidence that in fact the level of semantic interpretation was that of deep structure, but argued that (as Gruber said) "syntax and semantics will have the same representation at the prelexical level"(p. 3): a single set of rules would transform semantic structures containing no lexical items into surface syntactic representations containing them.The s~udy of lexical insertion, the process by which the underlying semantic elements are grouped into units replaceable ~y surface lexical items has led to a large literature containing a great many questions, and some positive answers.An important paper was McCawley's 1968 paper, "Lexical insertion in a transformational grammar without deep structure." There he started by assuming various points concluded in other papers of his.He very clearly presents some of the tehots of generative semantics, so with some repetition from above I quote these points here : I. Syntactic and semantic representations are of the same formal nature....There is a single system of rules ... which relates semantic representation ~o surface structure through intermediate stages.representation to surface structure, terminal nodea may have for labels 'referential indices' such as were ~ntroduced in Chomsky 1965 .... In semantic representation, only indices and 'predicates' a~e terminal node labels ....McCawley then defined tdictionary entry' as a transformation which replaced part of a tree by a surface lexical item. He expressed doubt these rules could be ordered internally or external, since it would hardly be possible, for example, that some question would arise as to the relative ordering of the transformation introducing the word horse and that extraposing NP's in two dialects, that is, ~he ~rdering could not possibly matter.He then raised several possibilities as to the relative ordering of the lexical rules v~s-a-vis other rules. Are the lexical rules last, first, or where? McCawley argued for the lexical rules applying Just before the post-cyclic rules, and adduced evidence for several rules, predicate-raising, equi-NP deletion, etc., being pre-lexlcal.In his 1968 LSA paper, Jerry L. Morgan of the U;~Iversity of Chicago added to this. He pointed out 'the rather strong assumptlon that lexical_items only 'replace' constituents." (P.3) He wrote, "~he process of syntactic derivation begins with semantic representation in terms of trees containing very highly abstract semantic terms, operating upon this by means of rules permuting, deleting, and collapsing parts of the representation, finally deriving a structure whose constituents are replaced by lexical items." (p. 3) He then s~ated a very strong claim of the theory:Given the set of universal pre-~exical rules, the set of universal semantlo primitives, and the set of universal constraints on the operation of rules, such as those described by Ross 1967 , these define the universal set of possible lexical items in their semantic aspect; that is, they rule out as impossible am infinit~ classof a priori possible "meanings" a lexlcal item could have. (P.4)A second very strong claim of the theory is:Insofar as the selection from, and details of implementation of, the universal set of rules is language-specific, the idiosyncracies of a given language in this respect will also be reflected by systematic gaps in the lexicon. 
The same is true for the set of semantic primitives and the set of constraints on rules.... (p. 4-5) Morgan came up with some restrictions on lexical items: 1) "lexical items can only replace a constituent which is not labelled S." (p. 6) 2) "verbs cannot incorporate referential indices." (p. 6) One further point to be made is that lexical items can only replace well-formed subtrees. My own work has been concerned with specifying classes of possible lexical items and accounting for the syntactic properties of verbs in terms of their semantics, thereby attempting to capture the intuition long familiar from traditional grammar that certain semantic classes of verbs, such as "verbs of giving and taking" or "verbs of motion", also form syntactic classes and hence their syntactic properties can be regarded as derived from their semantics. Georgia Green of the University of Chicago has presented a paper (1969) which is also interesting in terms of lexical insertion. She tends to regard lexical insertion as fairly divorced from morphology, and views lexical insertion as the replacement of an entire sub-tree by a surface lexical item which may contain more than one morpheme as classically defined. This position is somewhat different from my own, as I regard lexical insertion as primarily involving the replacement of items on a 1-1 basis. However, this is an empirical question and only future research will decide which of us is more nearly correct. McCawley opted for a "Weinreichian" lexicon in which lexical items were combinations of semantic, syntactic, and phonological information. McCawley supported this with this evidence: the reason 'John is sadder than that book.' is bad is that the two sads in the underlying structure of the sentence are different lexical items. They therefore cannot participate in comparison: *John is as sad as that book he read yesterday. *He exploits his employees more than the opportunity to please. *Is Brazil as independent as the continuum hypothesis? (exx. of Chomsky's.) McCawley called for a theory of "implicational relations", since in cases such as the ambiguity of warm the ambiguity is not a property of the item itself but of a class of items, and therefore such an ambiguity must be specified in terms of general principles. McCawley was not clear about the nature of these implicational relations, so that the nature of the relationship of the various sads was more or less left open. I have discussed the notion of systematic ambiguity, where the ambiguities of an entire class of verbs are specified in terms of the derivational process underlying them all, not just in terms of a descriptive statement. Thus we are seeking to explain lexical gaps in terms of statements such as "The reason some language L lacks a verb V glossing the verb W in the language M is that M, but not L, has the transformation T." Anyone familiar with the lexicons of French, English, and German, for example, knows that there are certain kinds of verb which are not typical of one or another of these languages which nonetheless readily occur in the others.
Such verbs are derived by processes occurring in one but not another language, and our task is to discover and describe such processes. Thus we may ultimately be able to tell how the class of French verbs, say, differs from the class of all possible verbs. I have attempted in these few pages to present a digest of some works in the paradigm of generative semantics. I have not really attempted to provide even an elementary guide to the methods of generative semantics or to its conclusions, its findings, but I hope I have explicated somewhat its goals and given some insight into the direction in which it is moving. Some very strong claims are forthcoming on the nature of grammars and languages and hence of language itself. A tremendous amount of work needs to be done, but one can see clearly that one possible end point of this work will be a very comprehensive, very strong theory of language competence that has a great deal to say about human beings. One perhaps minor point, though, looms up large here: generative semantics relates semantic structures to surface sentences by a single set of rules. There are several versions of transformational grammar that do this, but generative semantics is perhaps the most developed of these. But as the saying goes, what goes up must come down: we may paraphrase this as: what can be generated, can be analyzed. The theory permits, ideally, an algorithmic translation of a surface string into one or more underlying semantic structures. For computational linguistics, that may be its most appealing feature. Binnick, Robert I. "Semantic and syntactic classes of verbs." Mimeo. --- "The lexicon in a derivational semantic theory." In ... linguistics 1, Chicago; available from University Microfilms, Ann Arbor, Michigan, as serial S-372. --- "On the nature of the 'lexical item'," in Darden et al. --- "On transformationally derived verbs in a grammar of English." Ditto, read at LSA. --- "The characterization of abstract lexical entities." Ditto, read at ACL. --- "Transitive verbs and lexical insertion." Dittoed, read at Kansas and CLS. --- "Predicative structure." Unpublished Ph.D. diss. Paul Postal, in a 1964 paper, "Underlying and superficial linguistic structure", seemed to rule out any principled approach to the study of performance. But it seems clear to me that performance has merely been a catch-all term used by linguists with a lot of nasty facts on their hands they had no way of handling. In section II I mentioned the treatment of semi-grammatical sentences, as they used to be called. Now I think we should be able to treat so-called sentence fragments as being part of language proper. I see no reason, once we get over our hang-ups with sharp categorization of grammaticality and judgments of grammaticality in null context, why we cannot have a principled treatment of sentence fragments. Another area usually relegated to Never-never land is that of the structure of discourse. Obviously the sentence pairs 'Harry is a fool. He voted for Richard Nixon.' and 'He voted for Richard Nixon. Harry is a fool.' are not equivalent. Imagine if we take every other sentence on a page, say, the beginning of Matthew 2. The result is hardly a well-formed discourse. Now the birth of Jesus came about in this way.
But her husband, Joseph, was an upright man and did not wish to disgrace her, and he decided to break off the engagement privately. "Joseph, descendent of David, do not fear to take Mary, your wife, to your home, for it is through the influence of the holy Spirit that she is to become a mother." All this happened in fulfillment of what the Lord said through the prophet .... But he did not live with her as a husband until she had had a son, and he named the child Jesus. "Where is the newly born king of the Jews?" To now, it has generally been held that the structure of discourse is linear, that is, sentences are strung together one after the other and well-formedness is based on how well these sentences string. But the context is vital to the form of a sentence. Similarly, whether two clauses are united or put into separate sentences depends on context: by context we cannot mean merely the two sentences on either side of the sentence in question, nor can we mean the n sentences to either side. This is quite as mad as the folly of the early 50's that syntax was a matter of which words had what probability of occurring n words to either side of a given word. What we need is a grammar, a generative grammar, a transformational grammar, of discourse, based on the same methods that have been developed in syntax over the last decade. This work was pioneered by George Lakoff's 196? study of Russian folk-tales, in which he revised Propp's phrase structure grammar of the "morphology" of Russian folk-tales. I subsequently re-modified Lakoff's work and programmed it in COMIT for a 7090-7094 machine to generate plot outlines of Russian folktales. The results were partly abominable and partly amusing, but the point is that while hardly any discourse is as stereotyped as Russian folktales or US patents, certain structures nonetheless occur which are larger than the sentence. The notions of subordination and coordination of sentences and even whole discourses are quite valid and quite amenable to investigation. A third class of problems concerns logic. The implications of a sentence may be quite as important as the statements made by it. We linguists are only beginning to investigate presupposition, implication, insinuation, assertion, etc., but philosophers have been aware of these problems for a long time and a large literature exists. We want a machine to get as much information out of a sentence as a human would. A fourth class of problems concerns memory. Any program must involve knowledge. Humans do not use language in vacuo. Suppose I know that Sherlock Holmes is a tall, thin man. Suppose further that a fat, short man comes up to me and tells me he is Sherlock Holmes. If my memory and logic components are going full blast I immediately suggest to the gentleman that a) he is lying, or b) he could use a good psychiatrist, or c) he has a bad sense of humor. We would not like the computer to read a sarcastic sentence, such as "Surely they have a right to do unto others what they would not want others to do unto them" and file it away neatly. We need to give the computer a certain amount of linguistic sophistication as far as irony, insinuation, and such go. This might seem overly optimistic, since most human beings lack this ability, but let me suggest that the goal of computational linguistics is to understand human capabilities, not reproduce them, something which can be done far cheaper by producing new human beings through natural means than producing software in our labs.
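The Holmes anecdote amounts to checking an incoming assertion against stored knowledge. A toy sketch of that memory-plus-logic check in Python; the property store and the contradiction test are invented for illustration and are not a proposal from the paper itself:

```python
# Toy version of the memory-plus-logic check in the Holmes anecdote;
# the representation is invented for illustration.

memory = {"sherlock_holmes": {"height": "tall", "build": "thin"}}

def assert_identity(claimed, observed):
    """Return the stored properties that the observation contradicts."""
    known = memory.get(claimed, {})
    return {k: (v, observed[k]) for k, v in known.items()
            if k in observed and observed[k] != v}

clashes = assert_identity("sherlock_holmes",
                          {"height": "short", "build": "fat"})
if clashes:
    print("speaker is lying, joking, or confused:", clashes)
```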
The only thing keeping us from programming computers to, for example, have a sense of humor is our peculiar delusion that we can't do it. So these are the problems that have not been the subject of serious research. Note that I do not mean by this that no one has ever looked at them and found anything out. Even Newton was Platonist enough to realize that nothing new is ever discovered under the sun. But no linguist operating in terms of a formalized or quasi-formalized system has studied these problems very much. This is not to say that certain conclusions about the future construction of a theory of language use cannot be drawn from our present ignorance. The rest of this section will be devoted to how we with our Neanderthalic knowledge of language can outline a decent formal theory of 'la parole', something that we would want to do, I think, even had the computer never been invented. (End of sermon.)

One question which arises here is what the nature of underlying semantic structures is. Do people think in trees? McCawley in his article on the base rejected the notion of derivation. Instead he instituted a system of "node-admissibility conditions". These are actually conditions on the well-formedness of trees. Any object meeting these requirements is a well-formed tree, otherwise it is not (although I have yet to settle in my own mind whether an ill-formed tree is still a tree, just as I have been confused about whether an ill-formed sentence of English is still a sentence of English at all). Each NAC has the form <A; B C>, which is read, "a node A is admissible if it immediately and exclusively dominates a node labelled B and a node labelled C." NAC's generate trees directly, as opposed to rewriting rules which, in Chomsky's system, first go through a derivation, from which trees are then constructed. But the important point here is "Grammars are written by fools like me, but only God can make a tree": meaning that linguists need not concern themselves with the origin of trees to discover their properties. Of course, if we are to be manipulating semantic structures, we are going to have to be concerned with where trees come from.

A more basic question is whether the kinds of trees generative semantics claims to be semantic are reasonable semantic structures, that is, whether the investigator in artificial intelligence, for example, could live with them. I think there is a very good chance that this is the case. The basic elements of these trees are as follows. We have referential indices referencing individuals. I think that in any system we will need a device such as this. Both these indices and larger entities called sentences or S's can be dominated by the category N. I think again that any system will need to consider sentences recursive in this way. Then we will need predicates of arbitrary "weight", tho' in natural language the number of N's associated with any predicate V will undoubtedly be rather small. One possible counter to this is obviated if we assume that we have ways of referring to sets. Then we can define S as a V and associated N's. This is not really a bad scheme. Where it does fall down is in its failure to reflect hyper-propositional relations. The conceptual universe of a person is not a bunch of unrelated trees or sentences (propositions). We will want ways to connect the Napoleon of "Napoleon ate cheese" with that of "Napoleon hated Elba". Thus the conceptual universe is a network, with a far more complex structure than our underlying semantic trees.
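To make the notion concrete, here is a minimal Python sketch of checking trees against node-admissibility conditions. The tuple encoding and the particular NAC's are illustrative assumptions, not McCawley's own formulation.

    # A tree node is (label, *children); a terminal node's single child
    # is a plain string (a predicate name or a referential index).
    # A NAC <A; B C> licenses a node labelled A that immediately and
    # exclusively dominates nodes labelled B and C.
    NACS = {
        ("S", ("V", "N", "N")),   # a predicate with two arguments
        ("S", ("V", "N")),        # a predicate with one argument
        ("N", ("S",)),            # sentences are recursive under N
    }

    def is_terminal(node):
        return len(node) == 2 and isinstance(node[1], str)

    def admissible(node, nacs=NACS):
        """True iff every non-terminal node satisfies some NAC."""
        if is_terminal(node):
            return True
        label, *children = node
        child_labels = tuple(child[0] for child in children)
        if (label, child_labels) not in nacs:
            return False
        return all(admissible(child, nacs) for child in children)

    # "Napoleon hated Elba" as a toy semantic tree:
    tree = ("S", ("V", "HATE"), ("N", "x1"), ("N", "x2"))
    print(admissible(tree))                              # True
    print(admissible(("S", ("N", "x1"), ("N", "x2"))))   # False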
We therefore need some set of rules for isolating part of this network to serve as the underlying tree for some surface sentence or set of sentences, since it may turn out from our study of the structure of discourse that the unit of generation is larger than the sentence. More will be said on these matters in section IV.

PART IV. The understanding machine.

One basic goal of research into computational linguistics might be to investigate how information is extracted from linguistic source data. (Ultimately this ties into such questions as that of automated abstracting.) That component of our projected understanding machine which will model the information abstracting process let us dub the "info grabber". The info grabber of course is not isolated. It will have to be connected with a logic component and a memory with which it will interact. Nor is this the whole picture. As shown below one needs also a way of encoding the semantic output of the logic component for later output as linguistic data. Therefore the whole system will look like:

    linguistic source data (LSD) -> INFO GRABBER -> LOGIC (interacting with MEMORY) -> INFO SPEWER -> linguistic output data (LOD)

Notice that I have dubbed that component which synthesizes the LOD the "info spewer". We can regard the above as a reasonable model not only of an understanding machine, but of the speaker. The above model would certainly be of use in the study of the use of a natural language as a computer input-output language both for programming and for other applications, such as interaction between student and teaching machine in an educational program. I have made some study of such a system, which I called EASIOL (English as an Input-Output Language), taking into account the results of the two studies I know of which approximated what I was after, namely Daniel Bobrow's STUDENT program, reported on in "A Question Answering System for High School Algebra Word Problems", Proc. FJCC 25, 1964, and in Scientific American, September 1966, pp. 252-260, which was Bobrow's research for his doctorate. Bobrow modified LISP in the direction of COMIT, calling the hybrid METEOR. The system he evolved has a fair amount of flexibility and generality, and can deal with many kinds of problems expressed in stylized language. I might criticize Bobrow for his naivete over natural language, but since I am even more naive about information processing I will not do so. A second system, which I have heard later evolved into a more general system, is the BASEBALL program reported on by Green, Wolf, Carol Chomsky, and Laughery in the Feigenbaum-Feldman volume, Computers and Thought. This system bases itself on a rather stylized type of data structure. I have not followed the progress of either of these projects, but both betray inherent faults that made it unlikely that either could form the basis for a more general system operating on actual discourse. Nonetheless, these systems are very convincing for those who think that language is the sacrosanct birthright of human beings and that computers will never be able to handle such tasks as writing abstracts of articles.

The above model is also a reasonable model of human speakers (if we forget that people differ from machines in essential ways: vive la différence!). The first part of the "info grab" is the read-in. Hopefully this will someday be done by the machine itself, via optical reader or speech analyzer. I think that research on readers and analyzers has in general been unhappy because of a failure to realize how complex recognition by humans is.
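A skeletal Python rendering of this architecture is given below. Every component is a toy placeholder (the "parser" accepts only three-word subject-verb-object strings); the point is only the shape of the pipeline, not any of its parts.

    class Memory:
        """A bare store of semantic structures."""
        def __init__(self):
            self.facts = set()
        def store(self, fact):
            self.facts.add(fact)
        def known(self, fact):
            return fact in self.facts

    def info_grabber(lsd):
        """Analyze linguistic source data into a crude predicate structure."""
        subj, verb, obj = lsd.split()        # toy parser: S V O only
        return (verb.upper(), subj, obj)

    def logic(structure, memory):
        """Check new information against memory, then store it."""
        consistent = not memory.known(("NOT",) + structure)
        memory.store(structure)
        return structure, consistent

    def info_spewer(structure):
        """Synthesize linguistic output data from a semantic structure."""
        pred, subj, obj = structure
        return f"{subj} {pred.lower()}s {obj}"   # naive morphology

    memory = Memory()
    structure = info_grabber("Napoleon hate Elba")
    structure, ok = logic(structure, memory)
    print(info_spewer(structure), "| consistent:", ok)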
Recognition is not simply an optical or auditory problem. All levels of language must interact in the process. It is well-known that real speech is more easily handled than approximants to speech; this can only be due to the recognition process being cyclic and operating simultaneously on all levels. The simplest recognition routines would involve something like the scheme diagrammed in the original (the diagram is not reproduced in the source). Indeed, we have to connect up the logic and the memory to this system. A real sample of my handwriting when writing rapidly appeared here in the original (not reproduced). No recognition routine, not even my own human one, can at all times decipher this garbage. Redundancy is pretty near nil and such words as "of", "as", "a", and "or" tend to be homologous. What a human reader can't do, we can hardly expect a machine to do. But humans can guess from context what a word must be, and then see if the squiggle on the page is close enough. This involves both syntactic and semantic recognition, and if we ever want machine reading of handwriting, we must give the machine this capability. But suppose the reader still can't handle the writing? I suppose then we want it to get the logic component to initiate a question such as, "What is that?" That is, we want the computer to be able to go thru the whole set of levels. This will necessitate a much more complicated program than those around today, incorporating a greater amount of linguistic expertise, but undoubtedly it is necessary.

Once the info grabber has grabbed the info, it will have (1) to store this information in the memory, and (2) let the logic component examine the information. Suppose I know that Richard Daley is the mayor of Chicago, and I read in a Chicago newspaper that the mayor of Chicago is the greatest man in the world. The LSD must somehow be so stored that I can retrieve from my memory the fact that Richard Daley is thought to be the greatest man in the world by that newspaper. This raises the question of how to convert underlying semantic trees into subnets of the semantic network of which memory probably consists. Many of the features incorporated into Sidney Lamb's conceptual networks will, I think, be incorporable into the model. In particular, all occurrences of a particular entity (concept) will have to be linked in some way or identified. In a sense info grabbing starts by analyzing the LSD into semantic structures, and ends by synthesizing these structures and those already in memory into a new memory network.

One point that should be made clear is that all information will have to be represented on the same level. That is, both the program and the data will reside in the same memory net, as in a computer. Reading an algorithm in a book, the machine will store this in its memory just as it stores part of its own program, and it will be able to either quote the algorithm later as linguistic material as part of info retrieval, or use that algorithm as part of its own logical operations. There is some question as to whether this quite ideal machine could actually function in this way. But human beings are like this in some ways, and it is part of their language capability that they should or could.

The process of info spewing is a reverse of the info grab. The logic component will initiate the spew, using part of the memory net and selecting one or more underlying trees to spew out. It will then go through the derivational process and ultimately generate an actual string of sentences.
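The Daley example suggests the kind of entity linking such a memory net needs. Here is a minimal Python sketch; the alias mechanism and relation names are illustrative assumptions, much cruder than Lamb's networks.

    from collections import defaultdict

    class Net:
        """Concept nodes joined by labelled edges, with entity linking."""
        def __init__(self):
            self.edges = defaultdict(set)
            self.alias = {}                  # description -> canonical node

        def canon(self, name):
            return self.alias.get(name, name)

        def identify(self, a, b):
            """Record that two descriptions pick out one individual."""
            self.alias[b] = self.canon(a)

        def add(self, subj, rel, obj):
            self.edges[(self.canon(subj), rel)].add(self.canon(obj))

        def ask(self, subj, rel):
            return self.edges[(self.canon(subj), rel)]

    net = Net()
    net.identify("Richard Daley", "the mayor of Chicago")
    # From the newspaper: the mayor of Chicago is the greatest man alive.
    net.add("the mayor of Chicago", "is-thought-to-be",
            "the greatest man in the world")
    print(net.ask("Richard Daley", "is-thought-to-be"))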
Perhaps feedback will enter here, so that the machine can utilize part of its own spewings as immediate LSD, although it is hard to see why the machine would need to do so, altho humans are constantly correcting themselves mid-sentence.

An obvious question is what the role of generative semantics is in all this. I think the experience of CL has been in general that ad hoc programs don't work. We need a basic linguistic theory. I think generative semantics is the best bet. But as I noted, it is a theory of competence. We will need to modify it. I think we need to 1) admit rules of non-recoverable deletion, 2) admit rules for hypersentential constructs, and 3) build strong interactions with logic and memory components. In particular, the relationship of underlying semantic structures to conceptual networks will have to be investigated in depth. If the hypotheses of the GS linguists are correct, then we have a simple but powerful basis for programs directly transforming language source materials into semantic information usable by programs. For example, if the semantic structures turn out to be universal, they can serve as a pivot or intermediary for the currently out of fashion goal of MT.

§4. In the earlier stages of the conversion from semantic
null
null
null
null
Main paper: 2. constants denoting individuals. 3. predicates, denoting properties and relationships.

If the negation applies to John beats his wife, the sentence means 'the reason that John doesn't beat his wife is that he loves her', whereas if it applies to John beats his wife because he loves her, the meaning is 'the reason that John beats his wife is not that he loves her.' Notice that here a surface form represents at least two different underlying structures which nonetheless contain precisely the same semantic elements, grouped differently, however. (A small illustration of this grouping appears below.)

Another point made is that "semantic representations must include ... some indication of presupposed coreference." (p. 2) That is, the following sentence in neutral (i.e. null) context is ambiguous three ways:

    John told Harry that his wife was pretty.

John's? Harry's? Or a third's? It could be any. However, if we know who his refers to, there is no such ambiguity. This may seem trivial, but it is a point often ignored. McCawley then gives an argument for referential indices being different from expressions used to describe. The sentence Max denied that he kissed the girl he kissed. is not contradictory if "the girl he kissed" is the speaker's description.

Another notion is that of presupposed set membership. In Max is more intelligent than most Americans. said with primary stress on most, the sentence is good if and only if Max is presupposed to be American, that is, the sentence implies Max is American. With primary stress on Americans, however, Max is presupposed not to be American. Presupposition is in general a very hairy topic which was recently the subject of an entire conference (at the Ohio State University). We know very little about the nuances of implication and are only beginning even to identify the problems. But if a machine is ever to read Catcher in the Rye catching all the nuances of the italicized words, we had better find out how stress is used to alter the presuppositional set of a sentence. I need not be so unsubtle as to suggest the extreme value of such researches to psychology. Perhaps they already know about all this, for all I know. In any case I cannot restrain myself from including McCawley's beautiful example CIA agents are more stupid than most Americans. He had primary stress on the most, but I prefer to think of it as going on the Americans.

I would like to interject at this point a minor apology. I have been rather fan-clubish here and have waved my hand a lot. Frankly I see no value in rehearsing here all the arguments available elsewhere. But I would like the reader to bear in mind that my skimpy resume in no way reflects the quality of the original. Let me also note, lest I seem unduly credulous towards the thoughts of Chairman Quang (mild-mannered linguist McCawley is in reality Q. P. Dong, Chairman of Unamerican Studies at an unknown university), that most of us working within the paradigm of generative semantics would be the first to admit that our theories haven't a prayer of being right, that is, they scarcely approach even a partially realistic and naturalistic theory of language. If we like it better than other paradigms it is because we believe that no other current theory is any better and that this one at least has a good chance of self-improvement. (End of apologia.)
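To make the "same elements, different grouping" point concrete, here is a small Python rendering of the two readings, with trees as nested tuples of the form (predicate, argument, ...). The encoding is an illustrative assumption.

    BEAT = ("BEAT", "john", "wife")
    LOVE = ("LOVE", "john", "wife")

    # Reading 1: the negation applies only to the beating:
    # 'the reason John doesn't beat his wife is that he loves her'
    reading1 = ("BECAUSE", ("NOT", BEAT), LOVE)

    # Reading 2: the negation applies to the whole because-clause:
    # 'the reason John beats his wife is not that he loves her'
    reading2 = ("NOT", ("BECAUSE", BEAT, LOVE))

    def leaves(tree):
        """Collect the atomic semantic elements of a tree."""
        if isinstance(tree, str):
            return [tree]
        out = []
        for part in tree:
            out.extend(leaves(part))
        return out

    # Identical semantic elements, different constituent structure:
    print(sorted(leaves(reading1)) == sorted(leaves(reading2)))   # True
    print(reading1 == reading2)                                   # False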
"It is necessary to admit predicates which assert properties not only of individuals but also of sets and propositions."2. "In mathematics one enumerates certain objects which ~one~will talk about, defines other obJecSs in te~ms o? these objects, and co~Ifines[onesel~ to a discussion of objects which[oneS has either postulated or defined .... However, one does not begin a conversation by giving a list of postulates and definitions..... •..people often talk about things which either do not exist or which they have identified incorrectly• indices exist in the minds of the speaker rather than in the real world; they are conceptual entities which the individual speaker creates in interpreting his experience."In the Wenner-Gren symposium, McCawley had more to say about the difference between logic and language.1. Immediate constituent structure (trees)rather than parentheses are basic. First, "semantic representations are to form the input to a system of ~ransformations that relate meaning to superficial form; to the II-8 extent that these transformations have been formulated and Justified, they appear to be stateable only in terms of constituent structure and constituent type, rather than in terms of configurations of parentheses and terminal symbols." Secondly, "it may be necessary to operate in terms of semantic representations in which symbols have no left-to-right ordering .... "2. There will have to be more 'logi~al operators', such as most, almost all, and m~.3. "And and ... or ... cannot be regarded as Just binary operators but-~ust be allowed to take an arbitrary number of operands."4. The quantifiers must be restricted rather than unrestricted as in most logical systems. Some quantiflers imply existence:All dogs like to bite postmen, involves the presupposition that dogs exist, whereas the unrestricted quantifiers logicians use have no such presupposition. "adequate semantic representation of sentences: involving'shifters' (Jakobson, 1957 ) such as I, YOu~_ ~ now, ..., gestures and deictic ~6rds like this--and that, and tenses, will have to include re~rence-K~-the speech act.The most promising approach to this aspect of semantic representation ... is Rose's (1969) elaboration of Austin's (1962) 6. "The range of indices will ~ave to be enormous.In particular, it will have to include not only indices that purport to refer to physical objects, but also indices corresponding to mythical or literary objects, so that one can represent the meaning of sentences such asThe Trobriand Islanders believe in Santa 61aus, but they call him Ubu Ubu." 7. McC. rejects "the traditional distinction between 'predicate' and 'logical operator' and trea~s~ such 'logical operators' as quantifiers, conjunctions, and negation as predicates...."To clarify the relationship of semantic to syntactic representations let me quote here from McCawley's Kotoba no Uchu paper:Since the rules for combining items into larger units in symbolic logic formulas must be stated in terms of categories such as 'preposition', Ipredicatel, and 'index'~ these categories can be regarded as labels on the nodes of these trees.And since ... these categories all appear to correspond to syntactic categories, the same symbols (S, V, NP, etc. 
To clarify the relationship of semantic to syntactic representations let me quote here from McCawley's Kotoba no Uchu paper:

Since the rules for combining items into larger units in symbolic logic formulas must be stated in terms of categories such as 'proposition', 'predicate', and 'index', these categories can be regarded as labels on the nodes of these trees. And since ... these categories all appear to correspond to syntactic categories, the same symbols (S, V, NP, etc.) may be used as node labels in semantic representations as are used in syntactic representations.

Accordingly, semantic representations appear to be extremely close in formal nature to syntactic representations, so close in fact that it becomes possible to catalogue the conceivable formal differences and determine whether those differences are real or apparent. Among such differences he lists:

1. "The items in a syntactic representation must be assigned a linear order, whereas it is not obvious that linear ordering of items in a semantic representation makes any sense."

2. "Syntactic representations involve lexical items from the language as their terminal nodes, whereas the terminal nodes in a semantic representation are semantic units rather than lexical units."

3. "There are many syntactic categories which appear to play no role in semantic representation, for ex., verb-phrase, preposition, and prepositional phrase." (At the 5th Regional Meeting of the CLS, April of this year, A. L. Becker of the University of Michigan presented a paper in which he argued prepositions are underlying predicates; prepositional phrases are accordingly verb-phrases.)

McCawley concluded nonetheless that these differences do not provide an argument that semantic representations are different in formal nature from syntactic representations. Again, I will omit his reasons for that conclusion. I might summarize all this by saying:

1. Semantic representation is a modification of the representations long familiar from formal logic.
2. Such representations do not radically differ from the surface syntactic representations of Aspects-type grammar.

I will now turn to the second question raised above on p. II-3. This question has as yet received little study. It is a very difficult topic, but a very important one. I will confine myself here to a few brief comments and a few references. One of the important studies underway now is about syntactic variables. This was the subject of Ross' 1967 dissertation. Variables such as X and Y are familiar from transformational grammars, but no one had attempted before to specify in general what the notion of syntactic variable entailed. While Ross' study was important, and he came up with several important constraints on the form of transformations, much work remains. Lakoff and Postal are also working on related questions. Let me list here some of the constraints Ross gave in his thesis:

1) The complex NP constraint. No element contained in a sentence dominated by a noun phrase with a lexical head noun may be moved out of that noun phrase by a transformation. (p. 127)

2) The cross-over condition. No NP mentioned in the structural index of a transformation may be reordered by that rule in such a way as to cross over a coreferential NP. (p. 132)

3) The coordinate structure constraint. In a coordinate structure, no conjunct may be moved, nor may any element contained in a conjunct be moved out of that conjunct. (p. 161)

4) The pied piping convention. Any transformation which is stated in such a way as to effect the reordering of some specified node NP, where this node is preceded and followed by variables in the structural index of the rule, may apply to this NP or to any noncoordinate NP which dominates it, as long as there are no occurrences of any coordinate node, nor of the node S, on the branch connecting the higher node and the specified node. (That is, any NP above some specified one may be reordered, instead of the specified one, but there are environments where the lower NP may not be moved, and only some higher one can, consonant with the conditions imposed on the convention.)

A very important constraint occurs on p. 480 of the thesis, but I omit it here because it contains many terms I would not care to define here. I recommend Ross' dissertation for anyone with doubts about any deep principles of language organization emerging from our studies in transformational grammar. He will be cured. (A toy check of the coordinate structure constraint is sketched after this passage.)

Recently George Lakoff has studied the notion of "derivational constraint". This study is quite recent and still very very hairy, but hints in his 1969 CLS paper, and comments by Postal on it, suggest that rule ordering is merely a special case or manifestation of a deeper principle of grammar organization. The next revolution effected by generative semantics may well be to drop rule ordering from our canons.

For various reasons (partly that it interests me more) I will have much more to say here about lexical insertion than I will about constraints on transformations, although undoubtedly the latter is ultimately of much greater importance. Until 1965 or so, it was assumed that the terminal symbols of a P-marker are lexical items; the lexicon merely assigns properties to these items. Gruber in his 1965 dissertation argued that certain transformations had to occur before lexical items entered trees: that is, that there were pre-lexical transformations. Before Gruber, the system of semantics was one in which T-rules generated surface structures from deep structures and P-rules generated semantic representations for those deep structures. This was the theory of interpretive semantics (as in Katz and Postal, for example). Gruber proposed a derivational semantics. Gruber intended to "show consistently recurrent semantic relationships among parts of the sentence and among different sentences, which can best be explained by the existence of some underlying pattern of which the syntactic structure is a particular manifestation." (p. 1) He concluded that "a level at which semantic interpretation will be relevant will ... be deeper than the level of 'deep structure' in syntax." (p. 2) Later Lakoff showed evidence that in fact the level of semantic interpretation was that of deep structure, but argued that (as Gruber said) "syntax and semantics will have the same representation at the prelexical level" (p. 3): a single set of rules would transform semantic structures containing no lexical items into surface syntactic representations containing them.

The study of lexical insertion, the process by which the underlying semantic elements are grouped into units replaceable by surface lexical items, has led to a large literature containing a great many questions, and some positive answers. An important paper was McCawley's 1968 paper, "Lexical insertion in a transformational grammar without deep structure."
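Here, as promised, is a minimal Python check of the coordinate structure constraint, treating movement as blocked when the path from the root down to the moved element crosses a coordinate node. The tree encoding and labels are illustrative assumptions.

    # Trees are (label, *children); a bare string is a leaf.
    def path_to(tree, target, path=()):
        """Labels on the path from the root down to the target leaf."""
        if tree == target:
            return path
        if isinstance(tree, str):
            return None
        label, *children = tree
        for child in children:
            found = path_to(child, target, path + (label,))
            if found is not None:
                return found
        return None

    def movable(tree, target):
        """Movement to the root is blocked across a COORD node."""
        path = path_to(tree, target)
        return path is not None and "COORD" not in path

    # "Mary ate beans and rice": extracting 'beans' out of the
    # coordinate structure (*'What did Mary eat t and rice?') is blocked.
    tree = ("S", ("NP", "Mary"),
                 ("VP", ("V", "ate"),
                        ("COORD", ("NP", "beans"), ("NP", "rice"))))
    print(movable(tree, "beans"))   # False
    print(movable(tree, "Mary"))    # True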
There he started by assuming various points concluded in other papers of his. He very clearly presents some of the tenets of generative semantics, so with some repetition from above I quote these points here:

1. Syntactic and semantic representations are of the same formal nature.... There is a single system of rules ... which relates semantic representation to surface structure through intermediate stages. In the conversion from semantic representation to surface structure, terminal nodes may have for labels 'referential indices' such as were introduced in Chomsky 1965.... In semantic representation, only indices and 'predicates' are terminal node labels....

McCawley then defined 'dictionary entry' as a transformation which replaced part of a tree by a surface lexical item. He expressed doubt these rules could be ordered internally or externally, since it would hardly be possible, for example, that some question would arise as to the relative ordering of the transformation introducing the word horse and that extraposing NP's in two dialects; that is, the ordering could not possibly matter. He then raised several possibilities as to the relative ordering of the lexical rules vis-a-vis other rules. Are the lexical rules last, first, or where? McCawley argued for the lexical rules applying just before the post-cyclic rules, and adduced evidence for several rules, predicate-raising, equi-NP deletion, etc., being pre-lexical.

In his 1968 LSA paper, Jerry L. Morgan of the University of Chicago added to this. He pointed out "the rather strong assumption that lexical items only 'replace' constituents." (p. 3) He wrote, "The process of syntactic derivation begins with semantic representation in terms of trees containing very highly abstract semantic terms, operating upon this by means of rules permuting, deleting, and collapsing parts of the representation, finally deriving a structure whose constituents are replaced by lexical items." (p. 3) He then stated a very strong claim of the theory:

Given the set of universal pre-lexical rules, the set of universal semantic primitives, and the set of universal constraints on the operation of rules, such as those described by Ross 1967, these define the universal set of possible lexical items in their semantic aspect; that is, they rule out as impossible an infinite class of a priori possible "meanings" a lexical item could have. (p. 4)

A second very strong claim of the theory is:

Insofar as the selection from, and details of implementation of, the universal set of rules is language-specific, the idiosyncracies of a given language in this respect will also be reflected by systematic gaps in the lexicon. The same is true for the set of semantic primitives and the set of constraints on rules. (p. 4-5)

Morgan came up with some restrictions on lexical items:

1) "lexical items can replace only a constituent which is not labelled S." (p. 6)
2) "verbs cannot incorporate referential indices." (p. 6)

One further point to be made is that lexical items can only replace well-formed subtrees. My own work has been concerned with specifying classes of possible lexical items and accounting for the syntactic properties of verbs in terms of their semantics, thereby attempting to capture the intuition long familiar from traditional grammar that certain semantic classes of verbs, such as "verbs of giving and taking" or "verbs of motion", also form syntactic classes and hence their syntactic properties can be regarded as derived from their semantics.
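The flavor of pre-lexical rules plus lexical insertion can be caught in a few lines of Python, using McCawley's well-known derivation of kill from CAUSE BECOME NOT ALIVE. Predicate raising is taken as already applied, so each predicate complex is a single constituent; the data structures are illustrative assumptions.

    # A dictionary entry is a transformation replacing a constituent of
    # collapsed predicates by a surface lexical item.
    LEXICON = {
        ("CAUSE", "BECOME", "NOT", "ALIVE"): "kill",
        ("BECOME", "NOT", "ALIVE"): "die",
        ("NOT", "ALIVE"): "dead",
    }

    def lexicalize(tree):
        """Replace collapsed predicate constituents by lexical items."""
        if isinstance(tree, str):
            return tree
        head, *args = tree
        if isinstance(head, tuple):                    # a predicate complex
            head = LEXICON.get(head, "+".join(head))   # else show primitives
        return (head,) + tuple(lexicalize(a) for a in args)

    # Semantic structure of 'x kills y' after predicate raising:
    print(lexicalize((("CAUSE", "BECOME", "NOT", "ALIVE"), "x", "y")))
    print(lexicalize((("BECOME", "NOT", "ALIVE"), "y")))
    # -> ('kill', 'x', 'y') and ('die', 'y')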
Georgia Green of the University of Chicago has presented a paper (1969) which is also interesting in terms of lexical insertion. She tends to regard lexical insertion as fairly divorced from morphology, and views lexical insertion as the replacement of an entire sub-tree by a surface lexical item which may contain more than one morpheme as classically defined. This position is somewhat different from my own, as I regard lexical insertion as primarily involving the replacement of items on a 1-1 basis. However, this is an empirical question and only future research will decide which of us is more nearly correct.

He opted for a "Weinreichian" lexicon in which lexical items were combinations of semantic, syntactic, and phonological information. McCawley supported this with this evidence: the reason John is sadder than that book. is bad is that the two sads in the underlying structure of the sentence are different lexical items. They therefore cannot participate in comparison:

    *John is as sad as that book he read yesterday.
    *He exploits his employees more than the opportunity to please.
    *Is Brazil as independent as the continuum hypothesis? (exx. of Chomsky's)

McCawley called for a theory of "implicational relations", since in cases such as the ambiguity of warm the ambiguity is not a property of the item itself but of a class of items, and therefore such an ambiguity must be specified in terms of general principles. McCawley was not clear about the nature of these implicational relations, so that the nature of the relationship of the various sads was more or less left open. I have discussed the notion of systematic ambiguity, where the ambiguities of an entire class of verbs is specified in terms of the derivational process underlying them all, not just in terms of a descriptive statement. Thus we are seeking to explain lexical gaps in terms of statements such as "The reason some language L lacks a verb V glossing the verb W in the language M is that M, but not L, has the transformation T." (A schematic rendering of this kind of statement is sketched below.) Anyone familiar with the lexicons of French, English, and German, for example, knows that there are certain kinds of verb which are not typical of one or another of these languages which nonetheless readily occur in the others.
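Such statements can be given a schematic form, as in the Python sketch below. Everything here is an invented stand-in: the transformation names, the predicate complexes, and the two "languages" serve only to show how a lexical gap falls out of a missing transformation.

    # Which predicate complexes a language can lexicalize, given which
    # pre-lexical transformations it has. All names are hypothetical.
    REQUIRES = {
        ("CAUSE", "BECOME", "NOT", "ALIVE"):
            ({"predicate-raising"}, "a verb glossing 'kill'"),
        ("CAUSE", "MOVE", "BY-SWIMMING"):
            ({"predicate-raising", "manner-incorporation"},
             "a verb glossing 'swim something across'"),
    }

    TRANSFORMATIONS = {
        "language M": {"predicate-raising", "manner-incorporation"},
        "language L": {"predicate-raising"},
    }

    def possible_verbs(language):
        """Complexes lexicalizable with the language's transformations."""
        available = TRANSFORMATIONS[language]
        return [gloss for needed, gloss in REQUIRES.values()
                if needed <= available]

    print(possible_verbs("language M"))   # both complexes lexicalizable
    print(possible_verbs("language L"))   # the manner verb is a systematic gap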
The theory of generative semantics, to be discussed in section II, is an outgrowth of, and reaction to, Chomsky's 1965 theory of transformational linguistics. It is a radical theory which deals with a very great range of problems with very abstract methods. Those working in this paradigm hold that there is a linguistic level reflecting conceptual or semantic structure which is directly convertible into surface syntax by a single set of garden-variety transformations, with no significant intermediary level, that is, no "deep structure". Those of us working in generative semantics believe that methods substantially those long familiar in linguistics can achieve very abstract, very general results which treat semantics in a more serious and enlightening way than ever before. I do not, I think, support this very strong claim very well in section II, but I provide summaries of several studies and a lengthy bibliography of works which when consulted will hopefully give some feeling for what is being attempted, I think not without results.

But generative semantics is a model, or rather, a theory, of competence, like most serious theories of language now held to by American linguists. Even if, as might be claimed, our semantic structures are to be merely variants of the structures long familiar from formal logic, so that if our assumptions are correct, we will ultimately be able to directly transform surface structures into underlying semantic structures, the majority of actual sentences, as well as all hypersentential structures, the treatment of which has been swept under the rug of "performance", will remain unhandleable. Accordingly, I propose initially certain extensions and modifications of the theory to make it in some sense a model of performance. But if we are to apply it to the computer, a major component must still be added. The impetus to this application is the possibility of creating an understanding machine, described in section IV below. Since the actual human interpretation of language depends on past knowledge (consider which of these sentences is good and why:

    As for Albuquerque, the Eiffel Tower is pretty.
    As for Paris, the Eiffel Tower is pretty.
    Shirley is a blonde and Susan is Nordic-looking too.
    Shirley is a linguist and Susan is Nordic-looking too.)

the old split between semantics, syntax, and pragmatics must be revised, and our model closely linked with a memory and possibly a logic component as well. Obviously this defines a very difficult task, but insofar as such goals as MT, artificial intelligence, and machine reading of handwritten material or writing of spoken material involve comprehension on the part of the machine, of which there seems to be no doubt, these important goals will continue to elude us until such time as we can devise such an understanding machine as I have described below. I believe that generative semantics lays the foundation for studies relevant to such a development, and it is in this context that my proposals are made. In section II I will discuss generative semantics. In section III I will discuss the body of my proposals here. In section IV I will discuss what should be required of a generalized "understanding" machine.

Part II. The theory of Generative Semantics.

The theory of generative semantics is an outgrowth of and reaction to the theory of transformational grammar as represented in Chomsky's 1965 book, Aspects of the Theory of Syntax (MIT Press). To a very great extent, this theory has been the development of a small group of former students of Chomsky's or their close colleagues. John (Haj) Ross has said that the theory is really just an attempt to explicate Paul Postal's work of five years ago to date. If Postal was the founder of this school, if you can call it that, its main workers have been Haj Ross and George Lakoff, who from 1965 on swept aside most of transformational linguistics as it then was. But perhaps best known of the group is James McCawley, who graduated from MIT in 1965 with a Ph.D. based on work in phonology, not syntax or semantics. He promptly amazed Lakoff and Ross by some very substantive work in the latter areas as well as phonology. As a student of McCawley's I will be emphasizing his contributions here, and those of my colleagues at Chicago, Jerry L. Morgan and Georgia M. Green, but it should be kept in mind that people like Ross, Lakoff, Postal, Arnold Zwicky, David Perlmutter, Emmon Bach, Robin Lakoff, and several others have made the current theory possible, and that many others, such as Robert Wall, Lauri Karttunen, Ronald Langacker, and others, have contributed as well. It should also be kept in mind that the Case Grammar of Fillmore and the work done by Gruber, while differing from generative semantics, have contributed a great deal to it.

The basic theory of generative semantics is built upon an attempt to relate the underlying semantic structure of language to the surface, phonetic manifestation of that underlying structure. That is, a phonetic reality is recognized, and a semantic reality is recognized. But unlike other versions of transformational grammar, this theory assigns no special status to syntax; syntax is subsumed in the semantics. McCawley has jokingly referred to his theory as being one of either "semantax" or "synantics". The name generative semantics is not a particularly good one, since it implies that the goal of the theory is, as with the work of Chomsky, to "separate the grammatical sequences" of a language "from the ungrammatical sequences" (Chomsky, Syntactic Structures, 13).
In other words, to generate all and only grammatical sentences of a language. This is not at all the goal of generative semantics. Rather, what we want to do is in some rigorous way specify the correlations of underlying semantic entities and surface phonetic entities: to specify for any underlying semantic structure what its possible phonetic realizations in some language are, and for some phonetic structure what underlying semantic structures it can represent. Naturally, some descriptive ability is predicated as well; that is, we want to be able to define ambiguity in some algorithmic fashion, we want to be able to define levels or classes of ill-correlation between structures on different levels, etc. Chomsky would say that a sentence like "Golf plays John" is eminently deserving of a star; we would say (1) if it's supposed to mean 'John plays golf', it doesn't succeed in conveying the message; (2) if it's supposed to mean 'John loves Marsha', then it's really bad; and (3) if Golf is a man's name and John the name of a game or role, it's a good sentence, indeed, one can very well imagine arcane circumstances under which one might utter that sentence with the intent of saying that the game plays John, that the tail wags the dog, as it were. Suppose, for example, that John's wife were tired of him spending all his free time playing golf and she grumbled to a neighbor about it, and the neighbor rather unfeelingly replied, "Oh well, John plays golf." I can very well imagine John's wife complaining bitterly, "Oh no, golf plays John." In any case, it is for this unimaginative approach to language that Chomsky has been jokingly called a "bourgeois formalist". Even when we use stars, we try to keep in mind that just about any valid phonological string of a language conveys one or more meanings in some context, and that it is artificial to take a string out of context and declare it good or bad. So "generative semantics" is a bad name.

The following diagram of the components of the theory is based on McCawley's paper in the proceedings of the 4th Regional Meeting of the Chicago Linguistic Society (1968). A theory very similar is discussed in Ronald Langacker's book Language and its Structure (Harbrace, 1968). The above diagram comes from a report prepared by myself, Jerry Morgan, and Georgia Green, called the Camelot Report, which attempted to describe the current state of transformational research in the Summer of 1968, particularly in reference to the LSA Summer Linguistic Institute at the University of Illinois, where Haj Ross, George Lakoff, and Jim McCawley had lectured to large groups on a huge number of very "hairy" (i.e., difficult and ticklishly novel) topics. In that report (which was prepared for Victor Yngve), we raised several questions concerning the above representation. (The diagram itself and the list of questions are garbled in the source.) These were by no means all of the questions asked. Needless to say, the answering of these questions has hardly begun and will undoubtedly guarantee linguists a few good centuries of work at least. It is only in the last decade that syntax has been the subject of serious work, and we are still only discovering how ignorant we are. Semantics is even newer, less than a decade old. If anyone doubts that this is true, consider a) what the above three questions would have meant to a linguist in (say) 1955, and b) why he would have been wrong in his (lack of) comprehension of them. One of the great contributions of Postal and Ross has been their constant critical look at transformational grammar.
One of the things they saw was that our transformations were (and are) extremely powerful devices, with practically no constraints placed on their formulation. What I will do here is summarize some of the attempts at partial answers to the three above questions. In this way I can best delimit and explicate generative semantics. I will start by abstracting parts of two papers by McCawley that deal with the nature of semantic representation. In a paper in the Japanese journal Kotoba no Uchu (World of Language) in 1967, McCawley argued that semantic representation would be similar to syntactic representation as familiar from Aspects-type grammar, but that it would also be quite similar to symbolic logic as familiar from the tons of work that have followed Principia and such studies. That semantic representation should resemble syntactic representation makes sense if only because we are arguing for a single set of rules that transforms (i.e., relates) the underlying structure into (to) the surface structures. There will be more about that later. McCawley argues as follows: the following devices have all had a role in symbolic logic: 1. propositional connectives: 'and', 'or', 'not'. Appendix:
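To make the logic-like view of semantic representation concrete, here is a minimal sketch in Python (mine, not McCawley's; all class and predicate names are illustrative) of predicates combined by the propositional connectives 'and', 'or', 'not' into a tree-shaped structure, rendered in bracketed notation.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Pred:
    """An atomic predicate with its arguments, e.g. PLAY(John, golf)."""
    name: str
    args: Tuple[str, ...]

@dataclass(frozen=True)
class Conn:
    """A propositional connective over semantic substructures."""
    op: str           # 'and', 'or', or 'not'
    parts: Tuple

def show(node) -> str:
    """Render a semantic structure in a bracketed, logic-like notation."""
    if isinstance(node, Pred):
        return f"{node.name}({', '.join(node.args)})"
    if node.op == "not":
        return "not " + show(node.parts[0])
    return "(" + f" {node.op} ".join(show(p) for p in node.parts) + ")"

# 'John plays golf, and Marsha does not'
s = Conn("and", (Pred("PLAY", ("John", "golf")),
                 Conn("not", (Pred("PLAY", ("Marsha", "golf")),))))
print(show(s))    # (PLAY(John, golf) and not PLAY(Marsha, golf))

The point of the sketch is only that the same tree shapes serve both as syntactic-looking structures and as logical formulas, which is the resemblance the text claims.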
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.006015
null
null
null
null
null
null
null
null
36cf376b0d032cd770124faa34485e93bf160d17
12163907
null
Computer-Produced Representation of Dialectal Variation: Initial Fricatives in {S}outhern {B}ritish {E}nglish
It has become apparent to some dialectologists that dialectology, particularly in its interpretive phase, is a branch of linguistics particularly adapted to the use of computers. The dialectologist typically deals with large bodies of data, usually in the form of single words and short phrases, and he is interested in sorting and comparing individual items on many bases: phonological, morphological, lexical, and geographical. The major obstacle that has prevented widespread use of computers in dialect study is the fact that the data for most of the great dialect surveys have been collected,¹ recorded, and in most cases edited prior to the computer age. Thus the problem of preparing large bodies of data, much of it in narrow phonetic transcription, for computer use has been formidable. One of the aims of the present paper is to show that results can be obtained relatively easily by computerized sorting and mapping that would take endless hours by traditional methods, and hopefully to encourage others to invest time and money in preparing data for the computer rather than in time-consuming hand sorting and map-making. Accordingly we sought a problem that would be complex enough to reveal the advantages of computerized dialectology while at the same
{ "name": [ "Francis, W. Nelson", "Svartvik, Jan", "Rubin, Gerald M." ], "affiliation": [ null, null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 52
1969-09-01
10
3
null
time involving a body of data small enough to be quickly prepared. Since two of the three authors are specialists in English (the third is a computer specialist), we naturally turned to the published volumes of the Survey of English Dialects,² which embody carefully controlled data, collected with professional skill, and presented in convenient tabular form in meticulously edited and printed volumes. And since one of the two areas covered by the volumes in print at the time the study was undertaken (May 1969) was the south of England, the problem of the voicing of initial fricatives in the southwest naturally suggested itself. This problem had the further advantage, for our purposes, of dealing with consonants (simpler than vowels in most varieties of English) in initial position, hence easily sorted and examined. The selection of this problem has proved to be a happy one.

The area covered by Volume IV of SED comprises the ten southernmost counties of England, identified by key numbers in the Survey (Somersetshire, for example, is 31). Our word list included all those words beginning with graphic f-, s-, or th- followed by a vowel or voiced consonant which were starred in the SED questionnaire.³ To this list we later added a few non-starred words which showed universal distribution and were otherwise of interest. The final list contained 68 words, of which 27 are f-words, 22 s-words, 16 th-words, and 3 sh-words (i.e. words beginning with /ʃ/ in standard English). We took only the first recorded form from each locality; this is presumably a citation form, produced by an informant in response to a question, and recorded in narrow IPA transcription. The 59 cases where no response was given were coded XXX in our computer code. The corpus thus comprised 68 x 75 or 5100 items, including the 59 blanks.

Our computer expert then produced 68 decks of punch-cards, one for each word, each deck containing 75 cards, one for each locality. These were numbered at the left for locality and on the right for the reference number of the item in the SED questionnaire.⁴ A coding system was devised which preserved all significant features of the phonetic transcription while passing over apparently irrelevant fine points (see Appendix A), and the words were transcribed in this code directly onto the cards for the guidance of the key-puncher, who then punched the coded words in a fixed place on the cards. Subsequently the standard spelling was inserted by the computer to the left of the coded phonetic spelling. This whole process took about a dozen hours of the investigators' time (not counting the relatively simple programming involved) and about the same amount of the key-puncher's time. The result was a body of data consisting of 5100 entries of the following sort:

3101 FINGER FIgG)R. 6 7 7

This is to be interpreted as indicating that at locality 3101 (Weston in Somerset) the word finger, which appears as item VI.7.7 of the SED questionnaire, is pronounced [fɪŋgər] (or perhaps more accurately /fɪŋgər/ in the quasi-phonemic transcription used).

The nature of the problem with which we are dealing may be most simply introduced by an excerpt from the full treatment given to the voicing of initial fricatives in Middle English by Horn and Lehnert (1954, Vol. II, §37). The first step was to sort the data in as many ways as we felt would be productive.
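As a concrete illustration of the data handling just described, here is a minimal Python sketch (the field layout, the sample entries other than FINGER, and the code letters for voiceless and voiced initials are assumptions; the original programs were not reproduced in the paper) that reads card-image records like the one above, produces the List I ordering used below, and tallies voiceless:voiced initials per keyword.

from collections import Counter

# Card-image records: locality code, keyword, coded phonetic form,
# and the SED item reference (Roman numerals already made Arabic).
CARDS = [
    "3101 FINGER FIgG)R. 6 7 7",
    "3102 FINGER VIgG)R. 6 7 7",   # made-up illustrative entries below
    "3101 FERN V)RN 6 2 1",
    "3607 FERN F)RN 6 2 1",
]

VOICELESS = {"F", "S"}    # assumed code letters for voiceless initials
VOICED = {"V", "Z"}       # assumed code letters for voiced initials

def parse(card):
    """Split a card into (locality, keyword, phonetic code, item number)."""
    loc, keyword, phon, *item = card.split()
    return int(loc), keyword, phon, ".".join(item)

records = [parse(c) for c in CARDS]

# List I: sorted by keyword alphabetically, then by locality.
list_one = sorted(records, key=lambda r: (r[1], r[0]))

# Voiceless:voiced tally of the initial code letter, per keyword.
tally = {}
for loc, keyword, phon, item in records:
    if phon == "XXX":                        # no response recorded
        continue
    kind = ("voiceless" if phon[0] in VOICELESS
            else "voiced" if phon[0] in VOICED else "other")
    tally.setdefault(keyword, Counter())[kind] += 1

for keyword, counts in sorted(tally.items()):
    print(keyword, f"{counts['voiceless']}:{counts['voiced']}")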
Accordingly our computer expert produced four lists of the 5100 items, sorted as follows:

List I: sorted first by keyword (the standard English graphic word identifying the item) alphabetically; then by locality. This list presents the data in the same kind of order in which it appears in the SED records and allows easy inspection of all versions of each word in one list.

So far as this region is concerned, the lists confirm that the maxim attributed to Gilliéron, "Every word has its own history," is true. Accordingly we asked our computer expert for various more sophisticated sortings and counts, and for two kinds of maps, to be produced by the CalComp plotter (see Appendix B).

Tables 1-4 support our suspicion that each word has its own unique distribution with regard to the initial consonant. Thus the tables show, locality by locality, entries such as:

    FERN    F V F F F V
    FINGER  V F ~ V F F F
    FIRE    F F F V V F

The same kind of discrepancy is shown by items 9-11, with a 33:42 ratio, and items 12-14, with 34:41. FELLIES also shows six instances of the substitution of the dental fricative for the labial, which otherwise occurs only in FRIDAY, and which is reversed in six occurrences of /v/ in THATCH. If these five words are set aside, the range of voiceless-voiced ratios, from 28:47 to 43:32, much more closely resembles that of the th-words (Table 3), which range from 29:45 to 41:33.

The maps also show that the voicing did not extend to the southwest tip of Cornwall except in a few words. Presumably the English brought into this formerly Celtic area was more strongly influenced by standard English. The extreme case is represented by 3607, which has voicing in only two of the 68 words: FELLIES and FURROW, which, as we have seen above, also extend beyond the voicing area on the east.

Vowels present a more complex situation. Table 6 shows that for f-, s-, and th-words, high and low front vowels and low central vowels /a, ɑ/ associate with initial voicing. For the three sets taken together the proportion of voiced initial fricatives occurring with these vowels following is between 60 and 63 per cent. Rounded high front vowels /Y, y/ showed a more marked association with initial voicing than unrounded high front vowels /I, i/. The ratios for all f- and s-words with rounded front vowels are: ...

It is clear that even more study, of individual words and individual localities, is needed before all the complications of this one dialect feature can be unraveled. We should, for example, take into account the second and third responses for many of the words, many of which were taken from incidental conversation and hence are inclined to be more natural. Even casual inspection of the data indicates that they show a much higher incidence of initial voicing than do the citation forms. But we hope that this paper has shown that, given adequate and convenient data, the computer can be of inestimable aid to the dialectologist.

NOTES

1. An exception is the Dictionary of American Regional English (DARE), being prepared at the University of Wisconsin under the direction of Frederic G. Cassidy, which is employing some sophisticated computer techniques.

2. See under Orton and Dieth in Bibliography. This work will henceforth be referred to as SED or the Survey.

3. The starred words are those which were included primarily for their phonological importance. Fieldworkers were instructed to obtain them at all costs, even if they had to suggest the word and ask the informant to pronounce it.
In most cases words were chosen that have universal distribution in the dialects, but occasionally a word thought to be common turned out to be unfamiliar or even unknown, as in the case of FORKS, FORD, and FLITCH in our corpus.

4. The questionnaire is divided into nine books, each of which is subdivided into sections containing several questions. An item is thus identified by a three-part number, e.g. VIII.4.6, indicating question 6 in section 4 of book VIII. We changed the Roman numerals to Arabic in the interest of simpler coding.

5. For an alternative theory, holding that initial fricatives were already voiced in the language of the Jutes and Frisians who settled Kent, see Bennett 1955 in Bibliography.

6. We hope to explore the linguistic implications of this project more fully in a later article.

7. One instance of /ðr-/ in THRESH is reported from 3905 Hambledon, Hants., which is just within the eastern border of the voicing area.

APPENDIX A. CODING SYSTEM

The following system was used in coding the data for the computer.

APPENDIX B

A second method discussed was to output the entire map and data on a visual display unit such as a CRT scope. Here we could draw the map, but our printing format was again too strict. This method also is expensive, since it requires the use of an on-line scope. We finally decided upon an off-line plotter. The one we used was a CalComp #563 Digital Plotter. This machine takes a computer tape which has been produced by an on-line computer program and draws the data in the tape onto a roll of paper.

In order to visualize how the plotter works, imagine a set of coordinate axes with a y-axis about 12" long and an infinitely long x-axis. This grid is the piece of paper to be plotted on. The instructions to the plotter are simple. They boil down to two: lower or raise the pen point (so it will or will not write as it moves), and move the pen in a straight line to location (x,y) on the grid. In this manner anything can be drawn, from straight lines to circles, letters, and numbers (curves are actually made up of very short straight line segments).

The routines used to produce CalComp tapes are FORTRAN subroutines. It was therefore necessary to write a FORTRAN program whose input would be (1) instructions for drawing the outline of a map and plotting localities within it, and (2) linguistic data in a specially processed format. The output would be the tape which directs the plotter. Two methods were tried out for producing the map outline. In the first, a transparent grid was placed over a map and coordinates of "bends" in the outline were recorded. Thus a map would be produced by moving in a straight line from one bend to another.

The raw linguistic data was keypunched onto cards using the phonetic coding discussed above (Appendix A). Also on the cards were the numbers of the county and locality and the keywords. For example, the card for SUGAR, county 31, locality 4, was

3104 SUGAR ;%G)R: 5. 8.10

Since the corpus of data included 68 words for each of the 75 localities in Southern England, the card input consisted of 5100 cards. These records could be sorted by phonetic word (citation), by locality, by keyword, or by any combination of these.

Table 1 shows that among the f-words the proportion of voiceless to voiced ranges from 20:54 in FELLIES to 52:22 in FOAL. Even in those cases where the proportions are the same, recourse to List I reveals that ... SLEDGE (68:3). In the case of the other sets, however, there is no clear connection.
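The pen-up/pen-down model just described is simple enough to sketch directly. Below is a minimal Python illustration (mine; the command names and the bend coordinates are invented, and the actual CalComp FORTRAN subroutine names are not assumed) of how a map outline recorded as "bend" coordinates reduces to a stream of two-instruction plotter commands.

def outline_commands(bends):
    """Yield (pen, x, y) commands tracing a closed outline through bends."""
    x0, y0 = bends[0]
    yield ("up", x0, y0)              # travel to the start without drawing
    for x, y in bends[1:]:
        yield ("down", x, y)          # draw each straight segment
    yield ("down", x0, y0)            # close the outline

county = [(0.0, 0.0), (4.0, 0.5), (5.0, 3.0), (1.5, 4.0)]  # made-up bends
for command in outline_commands(county):
    print(command)

Curves, letters, and locality symbols all reduce to the same two instructions, which is why the off-line plotter could draw both the map and the plotted data from a single tape.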
null
null
null
null
null
null
null
null
{ "paperhash": [ "kurath|the_loss_of_long_consonants_and_the_rise_of_voiced_fricatives_in_middle_english", "mossé|a_handbook_of_middle_english" ], "title": [ "The Loss of Long Consonants and the Rise of Voiced Fricatives in Middle English", "A handbook of Middle English" ], "abstract": [ "1. In OE,' long consonants are fully established in one and only one position: between a fully stressed or a half-stressed SHORT vowel (or diphthong) and a following unstressed vowel, as in clyppan, settan, reccan, ebba, middel, hycgan, frogga, fremman, spinnan, spillan, steorra, sifban, missan, hliehhan (hlxhhan), and (after a half-stressed short vowel) in bliccettan 'glitter', oretta 'challenger', faranne (inf.), westenne 'wilderness'.2 In MSS of the 10th century, double letters are written with great consistency in such words, whereas in other words single letters appear with equal consistency in the same position, e.g. in witan 'know', sunu 'son', stelan 'steal'. The consistent writing of double letters in some of these words and of single letters in others makes it clear that between a short stressed or half-stressed vowel and a following unstressed vowel long and short consonants are in phonemic contrast in OE. To support this inference from the spelling, one may cite such minimally differentiated pairs as sittan 'sit' : witan 'know', sellan 'sell' : stelan 'steal'.", "Professor Fernand Mosse of the College de France is at home in all the Germanic languages and literatures, but for many years he has paid particular attention to English. Since he is a medievalist he has interested himself first and foremost in the earlier periods, and since he is a teacher as well as investigator he has been long concerned to smooth the path of students taking their first steps into a field far from our day and time. A few years ago, this concern of his ripened into a work that won general recognition as soon as it came out: his Manuel de l'Anglais du Moyen Age des Origines an XIVe Siecle. In the Manuel the author's mastery of the material and talent for clear and orderly presentation are happily combined. In this work we have by far the best introduction to medieval English now available.-from the Foreword" ], "authors": [ { "name": [ "H. Kurath" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Fernand Mossé", "James A. Walker" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "147411635", "193949379" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
null
665
0.004511
null
null
null
null
null
null
null
null
d5bb72f21a25bcfbeb0dbff30a55aab9eca5f74a
18306263
null
Applications of a Computer System for Transformational Grammar
Writing a transformational grammar for even a fragment of a natural language is a task of a high order of complexity. Not only must the individual rules of the grammar perform as intended in isolation, but the rules must work correctly together in order to produce the desired results. The details of grammar-writing are likely to be regarded as secondary by the linguist, who is most concerned with what is in the language and how it is generated, and would generally prefer to pay less attention to formal and notational detail. It is thus natural to ask if a computer can be used to assist the linguist in developing a grammar. The model is formal; there are a large number of details to be worked out; the rules interact with one another in ways which may not be foreseen. Most of the errors which occur in writing grammars can be corrected if only they are brought to the attention of the linguist. Those which cannot be so corrected should be of even greater interest to the writer of the grammar.
{ "name": [ "Friedman, Joyce" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 14
1969-09-01
0
6
null
null
null
The technical details of the particular model of transformational grammar have been described elsewhere [3]. This presentation will emphasize the ways in which the programs can be used, and will describe experiences in using them both in grammar writing and in teaching. The notation for grammars and the use of the programs will not be described formally here, but will be illustrated by an extended example. The example consists of a small grammar and a sample derivation. Each part will be presented twice, first as initially prepared by linguists at the University of Montreal [1], and then as redone in the computer system.¹ The grammar has been greatly reduced by selecting only those transformations which were used in the derivation of the sample sentence selected.

(1. This system was designed and programmed by the author, with T. H. Bredt, E. W. Doran, T. S. Martner, and B. W. Pollack.)

In Figures 1 and 2 the phrase-structure rules are given in parallel, first as written by the linguists, secondly as prepared for input to the computer system. The computer form can be seen to be a linearization of the usual form, with both parentheses and curly brackets represented by parentheses. No ambiguity arises from this, since the presence of a comma distinguishes the choices from the options. The only other differences are minor: the symbol "Δ" has been replaced by "DELTA", the sentence symbol "P" has been translated into English "S", and accents have been omitted. None of these changes is of any consequence.

Lexicon

[Figure: the sample lexicon, not reproduced]

The use of the programs

The system was designed to be used by a linguist who is in the process of writing a transformational grammar. As lexical items or transformations are added they can be tested in the context of all previous rules and their effect can be examined.

[Figures: base tree and alternative forms, not reproduced]

Deeper errors arise when a grammar is syntactically correct, but does not correctly describe the language of which it purports to be a grammar. These errors of intent cannot be detected directly by the program, since it has no standard of comparison.

[Figure 8, not reproduced]

The program attempts to provide enough feedback to the linguist so that he will be able to detect and investigate the errors. The information produced by the program consists of derivations which may be partially controlled by the user. Since random derivations have been found to be of relatively little interest, the system allows the user to control the sentences to be generated so that they are relevant to his current problem. (The device used for this purpose has been described in [g].) It is only in the sense of providing feedback to the user that the system can be called a grammar tester.

As it turned out there were a few difficulties which arose because the notation had not been explained clearly enough, but the results of the run were also revealing about the grammar. One general effect which was noticed in these first few cases has continued to be striking: the need for complete precision in the statement of a grammar forces the linguist to consider problems which are important, but of which he would otherwise be unaware.
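As an aside on the linearized notation just described, here is a minimal Python sketch (mine, not the system's actual reader; the tokenized representation is an assumption) of how a comma inside parentheses can be read as a choice while its absence marks an option, by enumerating the symbol sequences a linearized right-hand side covers.

def expand(tokens):
    """Return all symbol sequences covered by a tokenized right-hand side."""
    if not tokens:
        return [[]]
    head, rest = tokens[0], tokens[1:]
    if isinstance(head, tuple):                    # a parenthesized group
        kind, alternatives = head
        tails = expand(rest)
        out = []
        if kind == "option":                       # no comma: may be absent
            out += tails
        for alt in alternatives:                   # each choice, or the option
            out += [a + t for a in expand(alt) for t in tails]
        return out
    return [[head] + t for t in expand(rest)]      # a plain grammar symbol

# "VP -> V (NP, S)": a choice of NP or S;  "NP -> (DET) N": an optional DET.
print(expand(["V", ("choice", [["NP"], ["S"]])]))   # [['V','NP'], ['V','S']]
print(expand([("option", [["DET"]]), "N"]))         # [['N'], ['DET','N']]

The comma test is what keeps the single-bracket notation unambiguous: a group with comma-separated alternatives must pick one, while a comma-free group may simply be skipped.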
This error was easily repaired once it was detected. On the other hand, a similar problem which was not easily fixed arose with another transformation which was marked optional. Testing showed that for certain base trees the result was bad if the transformation did not apply; however, when the transformation was temporarily changed to obligatory, the grammar then failed to produce some intended sentences. The proper correction to the grammar would have required specification of the contexts in which the transformation was obligatory. Although this problem has no simple solution in the current framework, the inputs to the program can be controlled to avoid generating sentences of this form.

An interesting change to the system was suggested by the attempt to formalize the Core grammar. In both the WH-attraction and the Question transformations the structural description contains a two-part choice between a PREP NP pair and simply an NP. This is of the form:

% (PREP NP, NP)

where % is a variable. Any structure which satisfies the first part of the choice will also satisfy the second, and any analysis algorithm must have some order of search which will either always select PREP NP or always select NP only. But the intent is that there should be a genuine choice, so that the grammar produces both forms. The solution which was found for the problem was to add an additional value (AAC) for the repetition parameter for a transformation. If a transformation is marked AAC, all possible analyses will be found, but only one of them, selected at random, will be used as the basis for structural change. This seemed the appropriate way to solve the problem for the Core grammar, and it turned out also to solve a slightly different repetition problem in the grammar of Alfredian prose. Notice that this is really an observation about the form of grammars, rather than about a particular grammar. Yet it arose by consideration of particular examples.

The surface structure associated with a sentence derivation is much easier to study if it can be produced automatically. In several cases it has been apparent from the information provided by the computer runs that revisions in the grammar were needed if the surface structure is to be at all reasonable. This is a case where the computer runs are certainly not necessary, but where they reduce the tediousness of studying the problem.

In summary, it seems to me that the main value in computer testing of a completed grammar is that the need for a precise statement brings to the consideration of the linguist problems which are otherwise below the surface. These problems may be in the grammar itself or they may be in the linguistic model itself. For a grammar in process of being written the greatest advantage is in allowing rules to be checked as they are added, and in bringing out the interaction between rules.

The system has now been used in teaching as well. The method of use is to make available to the students a file of one or more grammars to be used as examples and as bases for modifications. The fragments from Aspects and the IBM Core grammar have been most useful, although small grammars written for this purpose have also been used. The students are then asked to make modifications and additions to the grammars. For graduate students, a reasonable exercise for a term paper is to read a current journal article on transformational grammar, and then show how the results can be incorporated into the basic grammar, or show why they cannot be.
The papers chosen by the students have generally been ones in which transformations are actually given. This project has been very successful as an introduction to transformational grammar for computer science students. Other students have chosen simply to use the computer to obtain fully developed examples of derivations illustrating aspects of grammar in which they are interested. These experiences have confirmed our belief that specific examples presented by the computer, and the feedback provided when a student modifies a grammar, are valuable in enabling the student to understand the notion of transformational grammar.
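Returning to the AAC repetition parameter described above: here is a minimal Python sketch of the behaviour, in which every analysis satisfying a structural description is found but only one, chosen at random, is used as the basis for structural change. The pattern matcher below is a stand-in, not the system's actual analysis algorithm, and an "analysis" is reduced to a match position.

import random

def all_matches(pattern, sentence):
    """Return every start index where pattern occurs as a sublist."""
    n = len(pattern)
    return [i for i in range(len(sentence) - n + 1)
            if sentence[i:i + n] == pattern]

def apply_aac(pattern, replacement, sentence, rng=random):
    """Find all analyses, then apply the change to one chosen at random."""
    matches = all_matches(pattern, sentence)
    if not matches:
        return sentence                       # the transformation does not apply
    i = rng.choice(matches)                   # one analysis, selected at random
    return sentence[:i] + replacement + sentence[i + len(pattern):]

s = ["PREP", "NP", "V", "PREP", "NP"]
print(apply_aac(["PREP", "NP"], ["PP"], s))   # either occurrence may be reduced

Because the choice is random rather than fixed by search order, repeated derivations exercise both alternatives, which is exactly the genuine-choice behaviour the PREP NP / NP case required.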
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.009023
null
null
null
null
null
null
null
null
eacbf6d36a4246966e873c525acac8298e3d15a6
5249899
null
Semantics of Prepositional Constructs in {R}ussian: Tentative Approach
The primary objective of this paper is to describe an experiment designed to investigate the semantic relationships between the three basic components of a prepositional construct: the governor, the preposition and the complement. Because of the preliminary nature of the experiment, only simple data processing equipment, such as the keypunch and the sorter, was used. The implementation of this approach on a larger scale, however, would necessitate the use of more sophisticated hardware. The described procedure uses Russian prepositions because, while working on this problem, the author was a research staff member of the Russian-English mechanical translation group at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. While the described procedure presents a tentative approach, which does not offer a solution to the semantic ambiguities within prepositional constructs in Russian, it does suggest a method for examining each basic component of a given construct in relation to other constructs containing different types of prepositions. The data used in the model was collected mainly from the Soviet Academy of Sciences Grammar and, to some extent, from the Soviet Academy of Sciences Dictionary. Initially an attempt was also made to compile data from other dictionaries. It was found, however, that the presentation and the classification of the data was not detailed enough for the purposes of this study. Therefore, only some of the
{ "name": [ "Woyna, Adam G." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 32
1969-09-01
3
1
null
null
Given Governor and its Preposition's Complement, G <---> C

These relationships can be diagrammed by attaching a semantic index to each component, e.g. (C)sn, where sn = a semantic property of any value. If either of the semantic components is found to exclusively govern the combination of the two remaining semantic components, then it can be said that ...

Since, in an initial study of this type, a large number of semantic classes might tend to obscure the existence of possible patterns, an attempt was made to keep the number of these classes at a minimum. As stated earlier, the adoption of this approach in an extensive study of constituents within prepositional constructs would require more elaborate semantic mapping. For the purposes of this study, the total number of semantic classes for nouns was narrowed to 24, for verbs 9, and for adjectives 6. (See Appendix II.) The classification of numerals and adverbs as governors was abandoned.

In order to fit the data for each TR on a single IBM card (for easier sorting), those TRs which seemed somewhat redundant or insufficiently documented were combined and the total number of TRs was reduced to 11. Again, while the TRs were translated literally from the Grammar (admittedly, some of the translations seem a little awkward, e.g. 'togetherness'), the reduction of their total number was an arbitrary arrangement aimed at simplifying the overall research procedure. The manner in which the 43 TRs were reduced to 11 is shown in Appendix IV.

[Appendix data: correlation entries of the form (governor class + complement class) = TR, e.g. (NB+E)=AT, (VX+A)=OB, (VX+Q)=TE, (DX+B)=SP, listed under prepositions such as Okolo, Ot, Otnositel6no, V prep., V dele, V oblasti, V otnowenih k, V otnowenii, V prodoljenie, V qel4x, V silu, V tecenie, Vblizi, Vdol6, Vmesto, Vnutr6, Vokrug, Vopreki.]
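The card-style entries in the appendix, of the form "(NB+E)=AT", suggest a simple lookup structure. Here is a minimal Python sketch (the specific codes, classes, and TR assignments below are illustrative, not copied from the appendix) of governor-class and complement-class pairs mapped, per preposition, to a translation relation (TR).

TR_TABLE = {
    # (preposition, governor class, complement class) -> TR
    ("okolo", "NB", "E"): "AT",   # noun governor of class B, complement class E
    ("okolo", "VX", "Q"): "TE",   # verb governor of any class, complement class Q
    ("ot",    "VX", "A"): "OB",
}

def translation_relation(prep, governor_class, complement_class):
    """Look up the TR for a (governor, preposition, complement) triple."""
    return TR_TABLE.get((prep, governor_class, complement_class), "unknown")

print(translation_relation("okolo", "NB", "E"))   # AT
print(translation_relation("okolo", "NL", "E"))   # unknown (no card on file)

One card per TR, as described, corresponds to one such table row, which is what made the sorter sufficient for this preliminary experiment.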
null
prepositions not listed as such in the previously named sources were included in the experiment. The next logical step, using the arrangement of the data as shown below, should be the culling out of additional data in the case of Russian, and of complete data in the case of other languages, from dictionaries, concordances and random texts. Following various sorting patterns, the results should then be tested through generative processes and checked against concorded 'real life' examples.

As stated earlier, the purpose of the proposed approach is the establishment of patterns of semantic correlations between:

1. a given governor and its preposition, G <---> P (left boundaries);
2. a given preposition and its complement, P <---> C (right boundaries).
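A minimal Python sketch of the proposed pattern hunt (the construct triples below are illustrative): tally which governor classes occur with each preposition (the left boundary) and which complement classes each preposition takes (the right boundary), so that recurring combinations stand out after sorting.

from collections import Counter, defaultdict

constructs = [            # (governor class, preposition, complement class)
    ("VX", "okolo", "E"), ("NB", "okolo", "E"), ("VX", "ot", "A"),
]

left = defaultdict(Counter)    # preposition -> governor-class counts (G <---> P)
right = defaultdict(Counter)   # preposition -> complement-class counts (P <---> C)
for g, p, c in constructs:
    left[p][g] += 1
    right[p][c] += 1

print(dict(left["okolo"]), dict(right["okolo"]))
# {'VX': 1, 'NB': 1} {'E': 2}

If one of the three components turns out to determine the other two (for example, a complement class that fixes the TR regardless of governor), the tallies would show it as a column with no variation.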
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.001504
null
null
null
null
null
null
null
null
2251f33152d5f80b92441d20c3228987d5135fea
62131524
null
A Progress Report on the Use of {E}nglish in Information Retrieval
Progress is reported in the further development of an already working model for communicating in English with a computer about the contents of a library. The revised grammar of this model combines the phrase structure and transformational rules of the underlying grammar into a single efficient component. Problems of implementation and ambiguity resolution are discussed. During the academic year 1966-1967 a system, Proto-RELADES, was designed and implemented at Boston Programming Center, IBM Corporation, for communication with a computer (System/360, Models 40 and 50). This system has been operational since June 1967.¹ It permits the user to communicate with the computer in English about the contents of the library at the Center.² The underlying grammar in this system is a recognition grammar based on the generative approach in linguistic theory. The pioneering work for a recognizer for a generative grammar was done by Petrick (1965). Among the transformational grammars

Notes: 1. This system was reported in Moyne (1967a), and a detailed specification of it is included in Moyne (1967b). 2. One can type English sentences at a computer terminal, making queries, giving commands and, in general, asking for the retrieval of any pertinent data about the content of the library.
{ "name": [ "Moyne, J. A." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 40
1969-09-01
7
14
null
developed for computer application, two stand out for their historical impact on this approach: the Mitre (1964) grammar developed by a number of M.I.T. scholars, and the so-called IBM Core Grammar.³ A lucid and informative discussion of the implications of the use of natural languages in computers is given in Kuno (1967). (3. Rosenbaum and Lochak (1966). For the latest version of this grammar, see Rosenbaum (1968).)

The theoretical and historical significance of these grammars notwithstanding, they all have serious practical disadvantages in that they generate all the possible syntactic analyses for every ambiguous sentence but have no practical way of selecting, in a fast and efficient manner, the sense of the sentence either intended by the user or inherent in the nature of the discourse. In Proto-RELADES, we tried to avoid this difficulty by restricting the discourse to a highly specialized field and thus reduced most of the ambiguities to the lexical level. In his important work on semantics for question-answering systems, Woods (1967) adopts the same approach, but he stipulates that the ultimate solution for resolving ambiguities in a more general system is in interaction with the user. This is, of course, the most general solution. If one can generate all the possible analyses of a sentence and let the user select the analysis which reflects his sense of the sentence, one would delegate the choice of understanding to the user and will satisfy him as long as the user knows what he is talking about. However, this approach is also unsatisfactory for practical reasons, even if an easy way to build such an interactive system were known. Under a time-sharing environment, which is the only practical environment for on-line systems of this kind, every interruption and interaction will cost time, and the total effect will make the system so slow and cumbersome as to be impractical.

In this paper, we will propose some additional devices for the automatic resolution of ambiguities. These devices are now being studied and implemented at the IBM Boston Programming Center. Ideally, one should not have to arbitrarily restrict the types of sentences which the user of the system may input to the grammar; i.e., the grammar should be able to parse any sentence of any length. Implementation of this ideal goal is, however, presently untenable. We will outline here our efforts to approach this goal to the extent which is possible under the present state of the art.

The grammar of Proto-RELADES was a standard recognition grammar with separate phrase structure and transformational components; that is, phrase structure rules would apply to the input sentence and produce a surface structure. The latter would then be the input to the transformational component, and the output of this component would be the deep structure of the sentence. Our new experimental grammar combines these two components into one integrated system of rules. To understand the implication of this, we must look at the form and nature of the rules in this grammar. Each rule in this grammar has the following format:

(1) Li: A'BC ↑ D'E → F $X$ @Y@ *** Ln

This rule has a label Li and a GOTO instruction Ln.
The function of the rule can be paraphrased as follows: check to see that the elements ABC are to the left of the pointer ↑ in the input sentence and that the elements D and E are to the right of it (there is no upper limit to the number of the elements to the left and right of the pointer; there must be at least one element to the left of the horizontal arrow →). If this is the case, then if condition X is satisfied, perform action Y and create a node F to dominate over the symbols between the two dots (') on the left of the arrow (X and Y can be null). Next, move the pointer to the right according to the number of the stars (*) at the tail end of the rule and go to the rule labeled Ln. If this rule does not apply, the control will pass on to the next rule in the sequence, i.e., to Li+1.

We see at once that this rule format permits one to write context-sensitive rules constrained by some conditioning factors and also to build local transformations into the Y part of the rule. The traffic in the rule application is controlled by the GOTO label Ln. Underlying this system of rules is the "reductions analysis" (RA) recognizer which reads the rules and applies them to the input sentence, resulting in a tree structure (P-marker) representing the deep structure of the sentence. The RA in our system is an extension of the model proposed by Cheatham (1968). Culicover (1969) and Lewis (1969) have written and implemented a grammar which uses these rules with exclusively local transformations. The net result of this grammar is that a canonical deep structure is produced for the input sentence without the generation of the intermediate surface structure. In terms of computer efficiency and speed, this is a significant step. The theoretical significance of such a recognition grammar has yet to be studied.

The ambiguities can be resolved by the following interactions, all of which are automatic, internal and, therefore, fast interactions, except the last one. In a fully generalized system, all these interactions must be implemented in such a manner that they will trade off against each other, reducing complexity and increasing speed. The final interaction on list (2), i.e., human interaction, which is the last resort in this system, can be omitted or its use greatly restricted in many practical situations. The interactions are with:

(2) (i) the lexicon (ii) the data base (iii) the system (iv) the human user

Lexical entries have a certain number of features which play a role in the structural analysis of the input sentence. This is based on the already well-known proposal of Chomsky (1965) for syntactic features. A simple example of a semantic feature of a sort is given below:

(3) John wrote the book on the shelf.

If the word shelf in the lexicon has a feature or features denoting that it is a place for storing books, etc., but normally people do not write on it or reside on it, then in the process of the analysis of (3) the prepositional phrase on the shelf will be recognized as modifying the noun book and not the verb write or the proper noun John. The trouble with this solution is obvious: there will be too many simple and complex features for each entry in the dictionary,⁴ and we run into severe problems for practical applications.
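Returning to rule format (1): here is a minimal Python sketch of one reductions-analysis step as the paraphrase above describes it. The data structures are assumptions, the dots are taken to enclose the whole left context, and the pointer-advance convention (one position per star) is my reading of the *** notation, not a documented detail of the system.

def apply_rule(symbols, pointer, left_ctx, right_ctx, new_node, stars):
    """One reduction step: if left_ctx ends at the pointer and right_ctx
    follows it, replace left_ctx by the created node F (new_node) and
    advance the pointer to the right by `stars` positions."""
    lo = pointer - len(left_ctx)
    hi = pointer + len(right_ctx)
    if lo < 0 or hi > len(symbols):
        return symbols, pointer, False
    if symbols[lo:pointer] != left_ctx or symbols[pointer:hi] != right_ctx:
        return symbols, pointer, False            # rule does not apply; try Li+1
    reduced = symbols[:lo] + [new_node] + symbols[pointer:]
    return reduced, min(lo + stars, len(reduced)), True

# Reduce "DET N" to NP when a V follows, then move the pointer one star.
syms = ["DET", "N", "V", "DET", "N"]
print(apply_rule(syms, 2, ["DET", "N"], ["V"], "NP", 1))
# (['NP', 'V', 'DET', 'N'], 1, True)

A full RA run would loop over such steps under the control of the GOTO labels, building the P-marker as nodes are created; the sketch shows only the context check and the reduction itself.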
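And for the lexical-feature interaction illustrated by sentence (3): a minimal Python sketch in which a feature on shelf steers the attachment of the on-phrase. The feature names and the toy lexicon are invented for illustration; the actual feature inventory is exactly the open problem the text goes on to discuss.

# Assumed toy lexicon: features steering prepositional-phrase attachment.
LEXICON = {
    "shelf": {"storage_place", "not_writable_on"},
    "table": {"writable_on"},
}

def attach_on_phrase(verb, obj_noun, pp_noun):
    """Decide whether 'on <pp_noun>' modifies the object noun or the verb."""
    feats = LEXICON.get(pp_noun, set())
    if "not_writable_on" in feats:
        return obj_noun          # 'the book on the shelf'
    return verb                  # 'wrote ... on the table'

print(attach_on_phrase("wrote", "book", "shelf"))   # book
print(attach_on_phrase("wrote", "book", "table"))   # wrote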
This is why we want to reduce the reliance on dictionary features to the minimum and trade off as far as possible with the other interactions listed under (2) above.

Interaction with the data base will provide the discourse background and may turn out to be the most significant and practical means for resolving ambiguities. For our system, this category of interaction includes looking things up in micro-glossaries, that is, specialized glossaries containing the jargon of each narrow field of application. Again, a highly simplified example of interaction with the data base is the following. Suppose that the input sentence was

(4) Do you have any books on paintings by Smith?

Somewhere in the process of the derivation of the underlying structure of (4), the system can consult the data base to determine whether Smith occurs as an author, as a painter, or as both, and select or reject the corresponding analyses.

Interaction with the system is similar to the interaction with the data base, except that here we question the capabilities of the underlying system in order to resolve the ambiguity. Consider the following example:

(5) Do you have any documents on computers?

The ambiguity in (5) is, among others, in whether we want documents written about computers or are referring to piles of documents on top of the computers. Now, the underlying system which analyzes and interprets (5) and produces the answer to the question has certain capabilities; for example, it has computer routines for searching lists of titles, authors, etc., printing data, and whatever else there is. However, if the system does not have a facility for "looking" on top of the computers in search of documents, we can reject that interpretation and adopt the one which concerns documents containing information about computers.
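A hypothetical sketch of such a data-base consultation (the table contents and function name are invented for illustration):

# Interaction (2ii): decide whether "by Smith" names an author or a
# painter by inspecting the data base.
DATA_BASE = {
    "authors":  {"Jones", "Brown"},
    "painters": {"Smith"},
}

def resolve_by_phrase(name):
    """Return the readings of 'by <name>' that the data base supports."""
    readings = []
    if name in DATA_BASE["authors"]:
        readings.append("books written by " + name)
    if name in DATA_BASE["painters"]:
        readings.append("paintings painted by " + name)
    return readings

print(resolve_by_phrase("Smith"))   # only the painter reading survives
# If Smith appeared in both columns, both readings would survive and the
# system would fall back on asking the user, as described below.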
The human interaction becomes necessary only when none of the above devices resolves the ambiguity; for example, in the case of the data-base sample in sentence (4) above, when the data base has the name Smith under both the author and painter columns. In this case, the system should formulate some sort of simple question to ask the human user before the final interpretation is effected; for example: "Do you mean books by Smith, or paintings by Smith, or both?" But, as I mentioned above, we have found in practice that, within a specified discourse and with a properly organized lexicon and data base, the need for taking this last resort seldom arises; and that is why systems such as Proto-RELADES and Woods (1967) can have significant practical claims.

In summary, we visualize a restricted but completely practical natural language system for communication with a computer and for information retrieval, with a general lexicon and specialized micro-glossaries. Certain restrictions in the lexicon and in the micro-glossaries will prevent wild generation of all possible and obscure (or unlikely) analyses but will permit generation of all the reasonable analyses for each input sentence. Interactions with the lexicon, the data base (i.e., the subject of the discourse) and the system will further eliminate the various analyses for each sentence until one analysis is left. In such cases when the system is unable to reduce the query to one analysis, the human user is asked to help in clarifying the ambiguity.

I would like to close this paper, however, with a word of caution. No linguist and no serious computational linguist will claim that he knows how to build a system such as outlined above for completely unrestricted processing of a natural language. The stress throughout this paper has been on practicality. We visualize a restricted natural language system of the sort which is fully practical and useful for many applications in information sciences.
null
null
null
null
null
null
null
null
{ "paperhash": [ "salton|a_comparison_between_manual_and_automatic_indexing_methods", "salton|the_smart_automatic_document_retrieval_systems—an_illustration", "salton|automatic_information_organization_and_retrieval", "salton|computer_evaluation_of_indexing_and_text_processing", "williams|computer_classification_of_documents" ], "title": [ "A Comparison Between Manual and Automatic Indexing Methods", "The SMART automatic document retrieval systems—an illustration", "Automatic Information Organization And Retrieval", "Computer Evaluation of Indexing and Text Processing", "COMPUTER CLASSIFICATION OF DOCUMENTS" ], "abstract": [ "The effectiveness of conventional document indexing is compared with that achievable by fully-automatic text processing methods. Evaluation results are given for a comparison between the MEDLARS search system used at the National Library of Medicine, and the experimental SMART system, and conclusions are reached concerning the design of future automatic information systems.", "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .", "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd automatic information organization and retrieval that can be a new way to explore the knowledge. When reading this book, you can get one thing to always remember in every reading time, even step by step.", "Automatic indexing methods are evaluated and design criteria for modern information systems are derived.", "Abstract : A word selection measure is employed to delete those terms that rarely occur and those that have a low conditional probability of occurring in a category. A set of sample documents known to belong to each category is used to estimate the mean frequency, the within category variance and the between category variance of the remaining terms. These statistics are then employed to compute discriminant functions which provide weighting coefficients for each term. A new document is classified by counting the frequencies of the selected terms occurring in it, and weighting the difference between this vector of observed frequencies and the mean vector of every category. The probability of membership in each category is computed and the document is assigned to the category having the highest probability. For applications in which assignment to one category is not desirable, the probabilities can be used to indicate multi-category assignment. 
A thesaurus capability allows the following types of words to be considered equivalent: inflected words, compound words, and semantically similar words with different orthographic spellings. Since the technique is based on statistical measures, it can classify documents written in any language provided a sample set of documents in that language is available." ], "authors": [ { "name": [ "G. Salton" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Salton", "M. Lesk" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Salton" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Salton", "M. Lesk" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. H. Williams" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null ], "s2_corpus_id": [ "62237369", "13135194", "56515970", "13158831", "60240661" ], "intents": [ [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false ] }
null
665
0.021053
null
null
null
null
null
null
null
null
21e6ba6c442402c8d98ec244e2977ac33d14144f
21946555
null
Organization and Programming of the {M}ultistore Parser
The Multistore system was developed in order to recognize and explain structural patterns in natural-language sentences (specifically English) and eventually yield an output in which the relations between the various items of the sentence are hierarchically displayed. The recognition of these structural patterns is made by means of a system of rules which operate on a sequence of words, i.e. a sentence, whose individual characteristics are pre-established. By individual characteristics are meant the possibilities a word has to correlate (i.e. to form a syntactic combination) with another item; these possibilities are represented by 'correlators', that is, by syntactic elements which link two items in a correlation.
{ "name": [ "Pisani, Pier Paolo" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 25
1969-09-01
2
1
null
null
null
Each Ic represents a possible syntactic connection between two items and is identified by: 1) the code number of the relation it establishes between two items; 2) the 'type' of correlation. There are six different types of correlation, which split into two groups: 'explicit' correlators and 'implicit' correlators. By 'explicit' correlator we mean a linking element which is represented by a linguistic item; prepositions and conjunctions are explicit correlators. By 'implicit' correlator we mean a relation between two items which is not expressed by any linguistic item but is indicated by the relative position of the two items (which we call their correlational function). For each type there are different correlational functions which determine the position a word has in a correlation. When two adjacent words have complementary functions of the same Ic (for instance, word A has 5050 N1 and word B has 5050 N2), a 'product' is made and recorded in the form:

Word A 5050 N Word B

This product is considered as one piece and can become first or second correlatum in a wider correlation; it is therefore treated as though it were a single word, i.e., it is assigned strings of Ic's which indicate its correlational possibilities both with adjacent words and with adjacent products already made. Single words, being vocabulary items, have their strings of Ic's assigned a priori; products, since they arise during the procedure, have to be assigned their Ic-strings dynamically. The assignation of specific Ic's to a product depends on: a) the correlator responsible for the particular correlation; b) the characteristics (Ic's) of the word (or product) which makes up the first or the second correlatum. The operational cycle that assigns Ic's to a product we call 'reclassification'.

The amount of data involved in an analysis of this kind is really enormous. Let us consider a sentence consisting of ten words, each of which has two different senses (S's). On average, 50 correlational indices are assigned to each sense of a word. Now, just to check the correlational compatibility of two adjacent words, about 10,000 matching operations would be necessary; the matching procedure for all the words of the sentence would involve about 90,000 operations. On average, five products would result from the first 10,000 matching operations; each of them would be assigned about 50 correlational indices that represent the product's possibilities of correlating with another adjacent piece, either a word or a product. The procedure to match these five products with another piece would involve about 637,000 operations. If to this figure we add the number of operations necessary starting from level 3 (see p. 7) with all the products made in the immediately preceding levels (200,000), the total number of operations involved would come to 927,000.

The reclassification routine also involves a great number of operations of this kind: about half of the correlational indices a product is assigned depend on the correlator responsible for it. The information contained in the area 'correlator' of the line containing the product's record gives the address of the Multistore column dedicated to the correlator responsible for that product. The column is then searched, from the top down, for a bit 5 set ON (see Fig. 3 on p. 10).
If it is found, this implicitly means that on the line to which the bit belongs there will be found the record of a reclassification rule relevant to the product to be reclassified. Section A of the same line contains the instructions concerning the assignation of the Ic's whose markers are contained in bit 6. (The legend of the rule records, partly recoverable from the figure, distinguishes: conditioned rules; unconditioned rules; a check on the 1st correlatum; 'assign the string CF1 contained in the rule to the product'; and 'assign the string CF2 contained in the rule to the product'.)

The analysis of the sentence is complete when the last marker of the last word-sense has been inserted and there are no further products to be reclassified or re-cycled. This is a general outline of the procedure of combination, production, reclassification and output. In addition, there are several routines which meet special requirements. A special rule, for instance, prevents specific RH pieces from becoming eligible LH pieces once a certain correlation, which contains them as RH pieces, has been made. A word like "LITTLE", for instance, in its function as a quantifier, once it has been correlated with the definite article to make the product "THE//LITTLE", cannot become LH piece in the correlation:

LITTLE // HE KNOWS

The indication 'discard' on print-out type 'a' (i.e. on the list of all the products made during the analysis) will show that "LITTLE" is no longer available as LH piece for any other correlation.
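The matching-and-reclassification cycle just described can be sketched as follows. The class names, the reclassification value 6010, and the product notation are illustrative stand-ins of mine; the original Multistore implementation worked on bit masks in dedicated core-storage columns, not on Python objects.

from dataclasses import dataclass

@dataclass(frozen=True)
class Ic:
    correlator: int   # code number of the relation, e.g. 5050
    function: int     # 1 = first correlatum, 2 = second correlatum

@dataclass
class Piece:
    text: str
    ics: frozenset    # the piece's string of Ic's

# reclassification table: correlator responsible for the product ->
# Ic string assigned to the product (values invented for illustration)
RECLASS = {5050: frozenset({Ic(6010, 2)})}

def products(left, right):
    """Products of two adjacent pieces with complementary Ic functions."""
    made = []
    for ic in left.ics:
        if ic.function == 1 and Ic(ic.correlator, 2) in right.ics:
            made.append(Piece(f"({left.text} {ic.correlator} {right.text})",
                              RECLASS.get(ic.correlator, frozenset())))
    return made

word_a = Piece("THE", frozenset({Ic(5050, 1)}))
word_b = Piece("LITTLE", frozenset({Ic(5050, 2)}))
for p in products(word_a, word_b):
    print(p.text)   # (THE 5050 LITTLE), now itself eligible for matching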
null
null
null
null
null
null
{ "paperhash": [ "glasersfeld|the_multistore_parser_for_hierarchical_syntactic_structures" ], "title": [ "The multistore parser for hierarchical syntactic structures" ], "abstract": [ "A syntactic parser is described for hierarchical concatenation patterns that are presented to the analyzer in the form of linear strings. Particular emphasis is given to the system of “significant addresses” by means of which processing times for large-scale matching procedures can be substantially reduced. The description makes frequent use of examples taken from the fully operational implementation of the parser in an experimental English sentence analyzer. By structuring an area of the computer's central core storage in such a way that the individual locations of bytes and bits come to represent the data involved in the matching procedure, the shifting of information is reduced to a minimum, and the searching of lists is eliminated altogether. The matches are traced by means of binary masks and the state of single bits determines the operational flow of the procedure. The method could be implemented with any interpretive grammar, provided it can be expressed by the functional classification of the items composing the input hierarchical structures." ], "authors": [ { "name": [ "E. Glasersfeld", "P. Pisani" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "14240877" ], "intents": [ [] ], "isInfluential": [ false ] }
null
665
0.001504
null
null
null
null
null
null
null
null
e1e56a08f88c83f43978402f18855db6f589a13b
26396685
null
Mathematical Models for {B}alkan Phonological Convergence
The high structuring of phonology, the obvious classes of sounds, and the classes of their classes, have made phonological typologies a not too rare proposal. And even where typologies were not claimed as such, they were often implicit in the statements made.
{ "name": [ "Afendras, Evangelos A." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 56
1969-09-01
18
1
null
Postovalova's valence and probability indices. More complicated yet much more adequate measures of distinctive-feature distributions were proposed by the Soviet linguist Postovalova. Although they were first used for the study of just one system, typological applications were also suggested by the author (Havránek, 1933), which actually drew heavy criticism (Małecki, Stankiewicz). Interesting results were obtained by applying the above method to the study of several Balkan idioms.4 But before discussing the results, some of the basic problems encountered will be mentioned.

The systems were compared against a maximal matrix which included all the features occurring in the population of the systems analyzed.5 Any of the actual systems includes a subset of this maximal set of features. In the final correlation, each system was considered as having 0's throughout for the features which it did not utilize. But 0's were also indicative of the non-pertinence of a feature for a given phoneme when the feature was distinctive for other phonemes in the system. Thus two kinds of concept were collapsed, as they both were represented by 0. However, this has probably been rectified by the fact that features not used in a system have a 0 throughout.

Another actual handicap is the non-availability of distinctive-feature descriptions for the vast majority of the systems compared. And even when available, they were often tinged by the author's views and preferences (e.g. Petrovici on Rumanian), or came out of different periods of the theoretical development of distinctive features. In such cases, I took the liberty of normalizing the data by modifying the existing analyses (the same method was followed throughout, e.g. in constructing branching trees). In some other instances more than one solution was possible, and for lack of data I kept the alternatives.

Va(b) = Pa(b) / K(n - 1), where K is the number of phonemes.

This weighting makes the index much more sensitive to variations in the number of features. Higher-order conditional probabilities can also be introduced.

Notes (cont.)

2) For general discussions, see Birnbaum 1966, 1969; Edmundson 1967; Greenberg 1957.

3) It seems that no two "mirror" systems could be distinguished by this typology; a theoretical shortcoming, in spite of the fact that not terribly many such cases exist.

4) Much of the data analysis used for the present paper was done at the Johns Hopkins University, as part of my doctoral research, which culminated in a thesis (May 1968).

5) The idea of a maximal system thus defined can also be found in Andreyev, N.D. (ed.), 1965, Statistiko-kombinatornoje modelirovanije jazykov, Moscow.

(Table fragment: the feature list includes (2) compact/non-compact, (3) grave/acute.)
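A hedged reconstruction of the two ingredients above, padding a system out to the maximal feature matrix and computing a conditional-probability index Pa(b). The data and function names are invented, and this is my reading of the indices, not Postovalova's procedure:

def to_maximal(system, all_features):
    """Represent a system against the maximal feature matrix (0 = unused)."""
    return {ph: {f: feats.get(f, 0) for f in all_features}
            for ph, feats in system.items()}

def cond_prob(system, a, b):
    """Pa(b): share of phonemes marked for feature a also marked for b."""
    marked = [ph for ph, feats in system.items() if feats.get(a)]
    if not marked:
        return 0.0
    return sum(system[ph].get(b, 0) for ph in marked) / len(marked)

# a three-phoneme toy system with two features; the maximal matrix adds a
# third feature used by some other system in the sample
toy = {"p": {"grave": 1, "compact": 0},
       "t": {"grave": 0, "compact": 0},
       "k": {"grave": 1, "compact": 1}}
maximal = ["grave", "compact", "strident"]
print(to_maximal(toy, maximal)["p"])       # {'grave': 1, 'compact': 0, 'strident': 0}
print(cond_prob(toy, "grave", "compact"))  # 0.5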
null
null
null
It is obvious after inspection of the indices that the two systems are not distinguished. Such a loss of information is characteristic of averaging.
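A toy numeric illustration of the point (the values are invented): two systems with opposite feature distributions yield identical averages.

sys_a = [1, 0, 1, 0]   # feature values of system A
sys_b = [0, 1, 0, 1]   # a quite different system B
print(sum(sys_a) / len(sys_a), sum(sys_b) / len(sys_b))   # 0.5 0.5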
null
null
null
null
{ "paperhash": [ "voegelin|obtaining_an_index_of_phonological_differentiation_from_the_construction_of_non-existent_minimax_systems", "pierce|possible_electronic_computation_of_typological_indices_for_linguistic_structures", "lyons|phonemic_and_non-phonemic_phonology:_some_typological_reflections", "krámský|a_quantitative_typology_of_languages", "greenberg|the_nature_and_uses_of_linguistic_typologies", "saporta|methodological_considerations_regarding_a_statistical_approach_to_typologies", "pierce|a_statistical_study_of_consonants_in_new_world_languages._i:_introduction", "wells|archiving_and_language_typology", "menzerath|typology_of_languages" ], "title": [ "Obtaining an Index of Phonological Differentiation from the Construction of Non-Existent Minimax Systems", "Possible Electronic Computation of Typological Indices for Linguistic Structures", "Phonemic and Non-Phonemic Phonology: Some Typological Reflections", "A Quantitative Typology of Languages", "The Nature and Uses of Linguistic Typologies", "Methodological Considerations regarding a Statistical Approach to Typologies", "A Statistical Study of Consonants in New World Languages. I: Introduction", "Archiving and Language Typology", "Typology of Languages" ], "abstract": [ "0. In archaeology, ethnography, and socio-cultural studies generally, the difference between typology and structure is blurred. It may be, as Kroeber thought, that much in cultural inventories is simply not structurable, or not as structurable as linguistic materials. More likely, the blurring is one consequence of the emphasis in the usual anthropological definition of culture on inclusiveness of scope rather than on interdependence of spheres-what an ecologist might call the biosphere (including the landscape), the culturosphere, and the linguistic", "0. The renewed interest in linguistic typology during the past decade has taken two somewhat divergent routes.l The approach to 'whole languages' has resulted in a recent proposal by Greenberg that certain indices of structural features of language might be utilized to type linguistic structures.2 Sub-system typologies have so far attempted to 'summarize the essentials'3 of sub-systems within the overall structure.4 One of the most important reasons for developing typologies is the creation of models so that if the structure of a language were", "American linguistics has proudly and more or less consciously adopted the pragmatic position; the philosophy of justification by results, of first getting things done and only then, if at all, asking what in fact has been done.' In the preface to his collection of articles by American linguists, Martin Joos brings out this point well. He goes on to remark: \"Altogether there is ample reason why both Americans and (for example) Europeans are likely on each side to consider the other side both irresponsible and arrogant. We may request the Europeans to try to regard the American style as a tradition comme une autre; but the Americans can't be expected to reciprocate: they are having too much fun to be bothered, and few of them are aware that either side has a tradition.\"2 As a representative of one European tradition in the enviable position of having secured a captive American audience for an hour or so, I propose to put before you views that absorption in the fun might otherwise prevent you from considering. 
To those of you who, having heard these views, might feel inclined to say that they are \"of only theoretical interest\" and that the linguist's job is to describe what actually occurs in particular languages without troubling him-", "This paper discusses the necessity for a quantitative investigation of qualitative linguistic facts, mentions several conceptions of typology and deals in detail with the classification into vocalic and consonantal languages. The main task of the paper is to attempt to classify languages according to the manner in which they exploit particular kinds of consonants. The exploitation of the sounds of a language is given by the relation of the sounds of the inventory to their relative occurrence in texts. Having examined 23 languages, the author distinguishes, on the basis of the distribution of consonant articulations, three types of language according to the manner of articulation and similarly three types of language according to the place of articulation.", "1. As contrasted with the other two main methods of linguistic classification, the areal and the genetic, typological procedures have tended to an uncertain and marginal status in linguistic science. In view of recent stirrings which appear to indicate renewed interest in typological procedures in linguistics as well as other fields, it seems appropriate to consider the general logic of typological classifications with particular reference to linguistics, to survey the kinds of linguistic typologies both potential and actual, and to evaluate the possible uses of typological analyses. This latter consideration is of particular importance in view of the common imputation of arbitrariness to typological as opposed to genetic classifications in linguistics. It is of some interest to note that the", "The present remarks are aimed at restating some of the problems involved in any statistical analysis of the consonant systems of a large number of languages, such as the analysis proposed by Joe E. Pierce, A Statistical Study of the Consonants of New World Languages, IJAL 23.36-45,94-108. Although the data collected by Pierce are in themselves of interest, the main contribution of his presentation is his suggestion for a method to 'test statistically the validity of... a grouping... [of] phonological systems on the basis of the number of series of stop consonants.' Before his results may be accepted as valid, however, some clarification of basic concepts seems indicated. At C. F. Voegelin's suggestion three such concepts are briefly discussed below (1. Comparability of data; 2. Method; 3. Typologies and correlations) followed by speculations and suggestions for further investigation (4. Alternate methods). 1. Of both theoretical and practical interest is the problem of to what degree phonological systems or sub-systems may be equated. The point has been clearly made by Weinreich:' 'the phonemes... are defined within each language by oppositions to other phonemes... of that language. For example, /p/ in Russian... is defined... by its distinctive feature of non-palatality, .. . while the definition of /p/ in English... involves no such restriction. From the point of view of the languages, therefore R[ussian] /p/ and E[nglish] /p/ cannot be \"the same\"'. Weinreich indicates however, that while the phonemes", "0. In a recent paper C. F. 
Voegelin attempted to type phonological systems on the basis of the number of series of stop consonants.1 The purpose of this paper is to test statistically the validity of such a grouping. Many points of divergence could be selected on which to classify languages, and the number of series of stops is only one of these. The question then arises does this grouping correlate with other features of the phonological system, thus giving the group meaning, or are they simply a convenient way of classifying consonant systems? The results of the investigation bring out some very interesting things about the languages of the New World. However, almost as important as the results obtained is the method employed. Until recently the application of sampling was not feasible in linguistics. However, the field has expanded now to the point where there are many field workers turning out descriptions of new languages all the time. This means that in the very near future there will be sufficient good descriptions of languages throughout the world to make sampling meaningful. The purpose of this investigation is simply to examine a sample of languages, representing a wide range of structural types and", "1. Need for an archive 2. Nature and value of language typology 3. Natural classification 4. Typologically important properties 5. Classification, seriation, measurement 6. A uniform scheme for archiving 7. A sample inventory 8. Bibliography 1. Of the many uses to which a linguistic archive could be put, one of the most important for the linguist is language typology, i.e. the natural classification of languages. No really adequate classification has ever been worked out, even in its main features-a classification whose basis is not the genetic relationships between languages but their intrinsic natures regardless of genealogy.' The archivist would like to wait for a natural classification around which to arrange his archive, but unfortunately the typologist is holding up his work until he has a good archive at his", "Phonemics having been atomistic, it is now completed by typology, or integral phonemics. Encoding, entropy, spelling reform. Criteria for classifying given vocabulary. Monosyllabics‐parallelogram. Phonetic nets. Frequency distribution of vowels in monosyllabic words (French, English, German)." ], "authors": [ { "name": [ "F. Voegelin", "S. Wurm", "G. O'Grady", "Tokuichiro Matsuda", "C. F. Voegelin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Pierce" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Lyons" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Krámský" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Greenberg" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Saporta" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Pierce" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Rulon S. 
Wells" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Paul Menzerath" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "143836977", "144625066", "224809428", "142530769", "144662912", "262256133", "144128615", "144495967", "120573752" ], "intents": [ [], [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false ] }
null
665
0.001504
null
null
null
null
null
null
null
null
d9cc9533341737a886d3b30f9d16736fe649f749
28848363
null
Towards an automatic morphological segmentation
Towards an automatic morphological segmentation. Our intention is to obtain a morphological segmentation of French acceptable to a present-day speaker or listener, for a corpus provided without any previous indexing, by means of electronic calculation. Experience tends to prove that it is possible to achieve such a segmentation by means of exclusively formal criteria.
{ "name": [ "De Kock, Josse and", "Bossaert, Walter" ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
2
null
null
null
null
null
establishing and grading these criteria and formulating them mathematically. The use of the computer guarantees objectiveness up to a significant linguistic level; the computer is an instrument of research for new rules; it guarantees the control of established rules.

The method followed by us is based on the hypothesis that the linguistic performance of human memory consists in a constant segmentation or reconstruction of the signs of the linguistic code on levels which are graded and organized each in accordance with its own rules, as a function of the specific capacities of the human brain, and with a certain degree of productiveness. No segmentation is excluded beforehand. The segmentation is implemented by means of factors of association or alternation of the separate segments, according to a law of minimal economy, as well as by quantitative or statistical factors concerning the number of different segments, their frequency on each side of the proposed division, and their internal relationship. These factors seem to apply to a large number of languages and to the majority of French forms. Certain counter-indications and some correctives proper to the French language must be observed.

Thus a segmentation does not operate against an absolute scale of values, but as a function of the specific morphological tension of each word with respect only to those words that resemble it from the morphological point of view. The corpus used consists of [number illegible] phonetic forms, isolated, conjugated and declined, supplied by the [number illegible] most frequent words according to Juilland. To date, the programming of the essential factors has been put into operation on samples.

Ghent, May 15, 1969.
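One possible reading of these frequency criteria, sketched as code; this is my illustration, not the authors' program, and the scoring rule and corpus are invented:

from collections import Counter

def boundary_scores(word, corpus):
    """Score each cut of `word` by prefix/suffix frequencies in the corpus."""
    prefixes = Counter(w[:i] for w in corpus for i in range(1, len(w)))
    suffixes = Counter(w[i:] for w in corpus for i in range(1, len(w)))
    return {(word[:i], word[i:]): prefixes[word[:i]] * suffixes[word[i:]]
            for i in range(1, len(word))}

corpus = ["chanter", "chantons", "chante", "parler", "parlons", "parle"]
best = sorted(boundary_scores("chantons", corpus).items(),
              key=lambda kv: -kv[1])
print(best[0])   # (('chant', 'ons'), 6): the morph boundary scores highest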
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.003008
null
null
null
null
null
null
null
null
e9b9cb7d9ed387ddda17d3939e7a1e7bf3016a52
16195304
null
Automatic Simulation of Historical Change
One of the principal reasons for studying the history of a language has been to explain the system of its modern reflex, the contemporary language. This has been especially true in attempting to deal with certain anomalies in the modern language. But the role, if appropriate, of utilizing information concerning diachronic processes in a synchronic description is not at all clear. Recent studies describing contemporary languages, based
{ "name": [ "Smith, Raoul N." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 9
1969-09-01
6
2
null
purely on synchronically motivated grounds, suggest a much more intimate relation between a synchronic grammar and what has been previously posited as a diachronic description of that language.2

The two major problems involved in historical studies have been the statement of the sound change (or, as this has been reinterpreted, the grammar change) and the relation of this change to other diachronic changes, that is, its relative chronology. A great deal of attention has been paid to the former but very little to the latter, whose significance has greatly increased due to recent results in generative grammar.

2. See, for example, Lightner, 1965, and Schachter and Fromkin, 1968. Since a naive native speaker of a language cannot be expected to know the history of his language, the reason for this relation may lie in the manner in which the rules were added to the grammars of his ancestors. It is hoped that the future results of this study will help to shed light on this relation by comparing them and their associated grammars with synchronic descriptions.

One of the reasons for the lack of rigor in stating the relative chronology has probably been the large amounts of data required for input to the set of rules, the very large number of stages/rules which must be accounted for, and the many permutations of these rules which should be tested. This lack of rigor, in turn, has made it very difficult to discuss coherently the historical development of a language. The purpose of this paper is to discuss certain limited aspects of historical language change and suggest the possible use of the computer in approaching their solution. The types of problems discussed are only phonological and include only those changes conditioned by phonetic environment and which do not require syntactic information (for example, the change of the Old Russian unstressed infinitive ending /t'i/ to /t'/).

The comparative method assumes, among other things and in a simplified version, that by comparing sets of sounds occurring in the same positions of the same words in the sister languages one can reconstruct the sound from which these sister sounds evolved. ("Same position" and "same word" may be difficult to define in a particular case.) For example, the word for "three" [...]. Deciding which of these is the actual cause of the incorrect output is simplest only in the case where all of the output was the result of the application of only one rule.

2.0 A sketch of the phonological history of Russian. The rules which were tested were an abridged version of a set presented by the author in a recent paper.1 The rules attempt to account for certain aspects of the development of the phonological system of Contemporary Standard Russian from a late form of Proto-Indo-European. These rules were: [...]

1. Kantor, Marvin, and R.N. Smith, "A sketch of the major developments in Russian historical phonology" (to appear).

The original formulation was in terms of distinctive features; however, for this programmatic study a segmental notation has been used for ease of statement, etc. Proto-Indo-European forms were chosen from Walde and Pokorny (1932). These were punched onto cards along with their English glosses. The program was written in SNOBOL4 for the CDC 6400. Each rule set was numbered so as to coincide with the set of rules listed in section 2, with a zero appended to each rule number so as to allow for later insertions. Changing a rule consists at the moment of simple removal and replacement of cards.
The history of a word or set of words can be obtained by allowing it to be processed, with accompanying output generated by each rule set. Similarly, the lexicon for a particular stage can be generated by allowing the input to be processed up through the rule covering that stage and, if wanted, suppressing output from intermediate stages. With the availability of larger storage capacity, the output from each stage can be generated once and stored in such a way that it can be referenced simply, thereby eliminating regeneration of input forms when the need for a rule change arises. Frequency counters will be added in order to measure the functional load of a rule, at least in terms of dictionary frequency. How this can be incorporated meaningfully into a theory of language change is not clear at this time. The effect of borrowing can be simulated by the introduction of lexical items just prior to a specific stage. There are too many variables involved in this case and the predictions have been poor. The effects of loss of original PIE are even more obvious but will require much further study.
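Continuing the sketch above (and reusing its RULE_SETS and apply_stage; still illustrative only), per-stage lexica and the dictionary-frequency counter mentioned here could look like:

def lexicon_at_stage(forms, stage):
    # Run each form up through the rule set covering `stage`,
    # suppressing output from the intermediate stages.
    result = []
    for form in forms:
        for number in sorted(RULE_SETS):
            if number > stage:
                break
            form = apply_stage(form, RULE_SETS[number])
        result.append(form)
    return result

def functional_load(forms, stage):
    # Dictionary-frequency functional load: how many lexicon forms
    # does the rule set at `stage` actually change?
    before = lexicon_at_stage(forms, stage - 10)
    after = lexicon_at_stage(forms, stage)
    return sum(1 for b, a in zip(before, after) if b != a)

Caching each stage's output, as suggested above, would replace the inner loop with a lookup into the stored lexica.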
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.003008
null
null
null
null
null
null
null
null
71a69e392d0cff0b4ba094b3b287fcf27222a487
8507267
null
Disambiguating Verbs with Multiple Meaning in the {MT}-System of {IBM} {G}ermany
The System in general. Since the MT-System of IBM Germany has already been described elsewhere [5] [6], we shall confine ourselves to the aspects of the system which are substantial to our problem.
{ "name": [ "Batori, I." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 31
1969-09-01
7
2
null
The System in general. Since the MT-System of IBM Germany has already been described elsewhere [5] [6], we shall confine ourselves to the aspects of the system which are substantial to our problem. The system consists of two linguistically significant parts: a machine lexicon residing on a direct access device and a program package. The machine lexicon contains: 1. English words, 2. a code providing grammatical information about each word, 3. German equivalents. In case of more than one possible German equivalent of an English word there are, roughly speaking, as many separate lexicon entries in the machine lexicon as German translations. To put it more precisely: for an English homograph, redundant as it may be, there is a separate lexicon entry for each of its syntactic functions; i.e. an English word like CHANGE has been entered in the lexicon as a noun CHANGE, with the German translation ÄNDERUNG, and also as a verb CHANGE, with the translation VERÄNDERN. The order of entries is fixed.

While the machine lexicon provides each English word with one or more German equivalents, the program package operating on the lexicon develops the proper translation for the English input text. The first step of processing is the lexical assignment, i.e. each English word is looked up in the machine lexicon and its first occurrence in the lexicon is taken as the provisional translation. The following steps take care of the proper choice of the German equivalents from those other alternatives provided by the lexicon, which remain accessible throughout the entire subsequent processing. In order to choose the correct equivalent, the programs rely on the original word order of the English input text and on the codes which are primarily drawn from the lexicon. Generally it can be said that, as the processing progresses, we have more and more information about the linguistic structures at our disposal and can make increasingly subtle decisions. It is obvious that to distinguish between verbs and nouns is logically prior to the distinction between, say, abstract nouns and concrete nouns. (In practice, however, the sequence of subroutines may not always be as simple as that.) Similarly, we can distinguish between transitive, reflexive and intransitive verbs only after having established the category of verbs in general, i.e. after the solution of homographs like:

PLACE noun / PLACE verb
CHANGE noun / CHANGE verb

Although the disambiguation of syntactic homographs is by no means a trivial task, we shall not go into further details about its actual realisation. For the following discussion we shall therefore presume that occurring syntactic homographs have been solved.

There are some subsidiary problems arising after the exchange of the transitive form of a multifunctional verb for its intransitive correspondent. Consider:

(6) The temperature has dropped rapidly.
(6') Die Temperatur hat fallen gelassen schnell.
(6'') Die Temperatur hat gefallen schnell.

In spite of the correct translation of the main verb, sentence (6'') is still false, because fallen - fiel - gefallen counts as a motion verb, and as such it takes the auxiliary verb sein in the perfect tense. Therefore provision must be made to supply the proper auxiliaries for the motion verbs in the perfect tense. The actual exchange of the auxiliary might be postponed to a later step if the appropriate marking is provided by the lexicon, e.g.:

(L2') [MOUNT, -transitive] -> [STEIGEN, -transitive, +motion]
(L4') [RETURN, -transitive] -> [ZURÜCKKEHREN, -transitive, +motion]
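As a hedged illustration of the marking in (L2') and (L4'), the auxiliary choice can be read directly off a [+motion] feature; the entry layout and names below are invented for the sketch, not the actual IBM lexicon format.

# Toy lexicon: (English verb, transitivity) -> (German verb, +motion?)
LEXICON = {
    ("RETURN", "+transitive"): ("ZURUECKBRINGEN", False),
    ("RETURN", "-transitive"): ("ZURUECKKEHREN", True),
    ("DROP",   "-transitive"): ("FALLEN", True),   # cf. example (6)
}

def perfect_auxiliary(english, feature):
    # Motion verbs take 'sein' in the perfect tense, all others 'haben'.
    german, motion = LEXICON[(english, feature)]
    return ("IST" if motion else "HAT"), german

print(perfect_auxiliary("DROP", "-transitive"))   # ('IST', 'FALLEN')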
Since the auxiliary selection has to be done for all motion verbs in the perfect tense, i.e. also for the non-homographic type like "has arrived" - "ist angekommen", "has gone" - "ist gegangen", etc., it is practical to place the auxiliary selection behind (R1)-(R3), and even after the insertion of the improved, intransitive entries ((L2'), (L4')). But if we make the verb inversion rule (R6) applicable to (7) as well, the verb will be inverted automatically. Thus (7b') is preferable to (7a'), since (R6) yields the correct translation out of (7b') but not out of (7a'). All we have to do is treat the infinitive "zu bleiben" as a unit: (R6) applies first vacuously, i.e. it effects only the marking but not the physical sequence of the constituents, since V_rest is an empty string. The variable V_rest is the remainder of the verb constituent after the separation of the last independent morphological unit. At the second application, there being a V produced by step 1, (R6) places the now last, and only, member of the verb constituent after V_rest. These considerations also show the crucial importance of the ordering of rules and subroutines. We can solve the sequencing of the newly inserted compound verbs with minimal effort if we have the verb inversion routines run after the disambiguation of the transitive/intransitive verbs.
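The ordering argument can be made explicit as a fixed pipeline of subroutines; the function names below are invented stand-ins (here identity stubs) for the system's actual routines, and only the ordering is the point.

def recognize_passive(clause):  return clause   # (R4) runs first
def mark_transitivity(clause):  return clause   # (R1)/(R2)
def mark_reflexive(clause):     return clause   # (R3)
def select_auxiliary(clause):   return clause   # sein/haben for [+motion] perfects
def invert_verb(clause):        return clause   # (R6) runs last

PIPELINE = (recognize_passive, mark_transitivity, mark_reflexive,
            select_auxiliary, invert_verb)

def translate_clause(clause):
    for step in PIPELINE:
        clause = step(clause)
    return clause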
Finally, I should like to point out a theoretical implication of the present problem which might have a bearing on linguistic analysis in general. In a formal linguistic analysis it is unavoidable to set out from the information directly available in the terminal string. This primary information is necessarily organised in a linear fashion. As the analysis proceeds, the possibility emerges of organising the linguistic information in a non-linear, hierarchical way. The results of the analysis steps can be used to find and mark the boundaries of the clauses and to recognise the individual constituents within the larger units, i.e. to build up a tentative constituent structure. As the disambiguation of verbs with multiple meaning shows, we have to rely on hierarchically organised information in order to make the proper choice between transitive and intransitive usages even in this rather simple case. To solve the polysemy of the verbs in question, we have to be able to find out that an NP is dominated directly by a VP and that this VP directly dominates a verb which is in turn marked in a specific way. Our transformational terminology, and the fact that the system lends itself readily to such a reinterpretation, suggest our conclusion: we claim that the MT-system described above uses transformational rules. This claim is supported (1) by the hierarchical organization of grammatical information, which is the computational representation of phrase markers (trees), and (2), and this is the main point, by our reinterpretation of the computational algorithms as mappings of phrase markers into phrase markers [2] [3]. This implication seems interesting for the following reason. The MT-system of IBM Germany has grown out of the intention to create a translating tool for practical purposes. The system was developed in a basically empirical manner; problems were attacked at points where we hoped to find the easiest way to the solution. A general frame of action was invented, rules were worked out and ordered, coding considerations were made, but it is only in retrospect that the theoretical evaluation of the system has become possible. On the whole the system, heterogeneous as it is, resembles the fulcrum type of MT-scheme, operating on a different language pair than Garvin's system [1]. We have tried to show, however, that even the empirical approach felt the need for multidimensional structuring, and that certain rules set up on empirical considerations, e.g. those treating verbs with multiple meaning, turn out to be mappings of phrase markers into phrase markers in the transformational sense.

In the past years the Department of Basic Research of IBM Germany developed an experimental Machine Translation System to translate technical texts on data processing and electronics from English into German. This MT-system displays some resemblances to the Fulcrum type of systems, but at least some of its subroutines also contain transformational devices. The treatment of verbs with multiple meaning of the type CHANGE - VERÄNDERN - SICH ÄNDERN is one of these cases. The correct recognition of the passive voice and of direct objects presupposes hierarchically organised and permanently accessible syntactic markers. While presenting this rather special case, it has been attempted to show some further aspects as well as the general functioning of the system.

In our notation, symbols refer to code information, provided primarily by the word assignment routines and, as we have already pointed out, improved by the syntactic component of the system. We differentiate category symbols from features by putting the latter in square brackets. The symbol X denotes strings having no relevance to the rules. Although the description of the voluminous analysis is beyond the scope of the present discussion, its concept also influences the treatment of verbs with multiple meaning; e.g. in (R1') we must know whether the constituent NP comprises initial adverbs or not. To distinguish an initial adverb dominated by an NP from an adverb merely preceding an NP (but not dominated by it) is extremely complicated; if this distinction is not made, adverbs standing between verbs and NPs must be jumped over. We cannot deal here with all problems arising while translating the sentences; we shall return to the problem of how to generate the correct German word order in 3.2. The actual machine realisation does not affect the basic properties of the passive recognition (R4): since in the actual system English verbs which may be transitive or intransitive are provided uniformly with a transitive German correspondent in the first place, it is enough to recognize the passives and approve the given choice by taking no action. The actual recovery of the deep object of passive sentences seems to be redundant, since English passives normally correspond to passives in German. The rule applying to main clauses is more complicated; therefore we have chosen the simpler rule for subordinate clauses, which suffices to illustrate the principle of verb inversion. A similar reasoning speaks for separating prefixes from main verbs and placing the prefixes after the verb stems: BRINGEN ZURÜCK, KEHREN ZURÜCK, etc. The entries in the machine lexicon are actually in this sequence.
null
null
Consider the following sentence pairs:

(1a) The company has increased the production.
(1b) The production has increased rapidly.
(2a) [...]
(2b) The access arm returned to its original position.

It is clear that the (a) and (b) sentences have to be translated in different ways:

(1a') Die Gesellschaft hat die Produktion erhoeht.
(1b') Die Produktion hat sich schnell erhoeht.

It is not difficult to see that the linguistically relevant criterion for the proper German translation is the presence or absence of a direct object NP. For the German translation of (1b) the implicit object must be made explicit by the insertion of the German reflexive pronoun SICH. In the second example we have to select two different verbs to render both (2a) and (2b) correctly into German. As a first approximation the following scheme can be proposed:

(R1') V -> V[+transitive] / ___ NP
(R2') V -> V[-transitive]

Note that (R1') and (R2') must follow each other immediately; otherwise the environment in (R2') must be made explicit. Note also that verbs like INCREASE need not be mentioned at all: they can be marked in the lexicon as invariably [+transitive], and the missing formal object is then inserted on the German side by another rule:

(R3') V[+transitive] -> SICH + V[+transitive] / ___ X
Conditions: X contains no NP; X may be 0.

As the final step, the correct German equivalent has to be determined by the English verb and the feature [+transitive] or [-transitive]; to secure the proper translation we have to provide appropriate lexicon entries for the rules:

(L1) [MOUNT, +transitive] -> [MONTIEREN, +transitive]
(L2) [MOUNT, -transitive] -> [STEIGEN, -transitive]
(L3) [RETURN, +transitive] -> [ZURÜCKBRINGEN, +transitive]
(L4) [RETURN, -transitive] -> [ZURÜCKKEHREN, -transitive]

In order to be able to cope with reflexives of variable verb stems like

CHANGE - VERÄNDERN / SICH ÄNDERN
MOVE - ÜBERTRAGEN / SICH BEWEGEN

it is necessary to mark reflexives in (R3') by an additional feature instead:

(R3') V[+transitive] -> V[+transitive, +reflexive] / ___ X
Conditions: X contains no NP; X may be 0.

This basic scheme has to be made somewhat more precise. Consider:

(5) As data processing needs have increased, the basic card language remained the same.

Without referring to the clause boundary after the word INCREASED, the following NP (the subject of the next clause) would be interpreted as an object belonging to the verb INCREASED. Thus the environment in (R1') (and accordingly also in (R2') and (R3')) has to be further specified: in addition to the requirement that the NP immediately follow the verb, we must also demand that the NP not be separated from the verb by a sentence boundary. This means that the NP must belong to the same clause as the verb. This can be done on the basis of the sentence analysis, which is motivated independently of the current problem and whose results remain accessible.
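A runnable toy version of (R1')-(R3') may clarify the mechanism; the flat clause representation and the word list below are invented for illustration and are far simpler than the system's actual codes.

# Verbs like INCREASE are invariably [+transitive] in the lexicon.
ALWAYS_TRANSITIVE = {"INCREASE": "ERHOEHEN"}

def mark_verb(tokens, verb_index):
    # (R1')/(R2'): [+transitive] iff an NP follows the verb
    # (clause boundaries are ignored in this toy version).
    return "+transitive" if "NP" in tokens[verb_index + 1:] else "-transitive"

def translate_verb(verb, feature):
    german = ALWAYS_TRANSITIVE.get(verb, verb)
    if feature == "-transitive" and verb in ALWAYS_TRANSITIVE:
        return "SICH " + german        # (R3'): make the implicit object explicit
    return german

print(translate_verb("INCREASE", mark_verb(["NP", "V", "NP"], 1)))   # ERHOEHEN, as in (1a)
print(translate_verb("INCREASE", mark_verb(["NP", "V", "ADV"], 1)))  # SICH ERHOEHEN, as in (1b)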
Thus the final set of rules would read as follows:

(R1) V -> V[+transitive] / [S ... ___ NP ... ]S
(R2) V -> V[-transitive]
(R3) V[+transitive] -> V[+transitive, +reflexive] / ___ X
Conditions: X contains no NP; X may be 0.

The bracketing [S ... ]S means that we restrict the applicability of our rules to one and the same clause. Furthermore, we should also be able to treat sentences like the following:

(4) It has been returned to the initial position.
(4') Es ist zu der urspruenglichen Position zurueckgebracht worden.
(5) If the disk has been mounted, start the machine.
(5') Wenn die Platte montiert worden ist, starte die Maschine.

In order to get the required feature [+transitive] in spite of the physical lack of a direct object NP in the terminal string, we have to recognize passive-voice constructions as such. The passive recognition should precede (R1) and should discover the object of the main verb; otherwise (R2) treats sentences like (4) and (5) as intransitives, after which we would have to eliminate the consequences of this misinterpretation. Presuming unique morphological marking of the passive voice, the following recognition rule is sufficient for the present purpose:

(R4) [S NP [VP X V[verb, passive] ]VP ]S -> [S NP [VP X V[verb, passive, +transitive] ]VP ]S
Conditions: X contains no NP; X may be 0. (R4) precedes (R1).

For the further discussion, note that the passive recognition inescapably involves hierarchical structures.
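Extending the toy sketch above with (R4): a verb known to be passive is marked [+transitive] before (R1)/(R2) apply, so the lack of a following object NP no longer misleads the marking. Again, the representation is invented.

def mark_verb_with_passive(tokens, verb_index, passive=False):
    if passive:
        return "+transitive"           # (R4) precedes (R1)
    return "+transitive" if "NP" in tokens[verb_index + 1:] else "-transitive"

# (4): "It has been returned to the initial position."
print(mark_verb_with_passive(["NP", "V", "PP"], 1, passive=True))   # +transitive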
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.003008
null
null
null
null
null
null
null
null
aae8936e2bbed5542724ce64b2a7fffb5fe5e015
17320298
null
Computer Aided Research on Synonymy and Antonymy
This research is a continuation of that reported in Axiomatic Characterization of Synonymy and Antonymy, which was presented at the 1967 International Conference on Computational Linguistics [3]. In that paper on mathematical linguistics the relations of synonymy and antonymy were regarded as ternary relations and their domains and ranges were discussed. Synonymy and antonymy were defined jointly and implicitly by a system of eight axioms, which permitted the proofs of several intuitively satisfying theorems. The present paper on computational linguistics is a preliminary report which describes some computer programs that have been used to investigate the extent to which those axioms model an existing dictionary of synonyms and antonyms [9]. A set of computer programs is discussed that (1) input the dictionary data concerning synonyms and antonyms, (2) create a data structure in core memory to permit the manipulation of data, (3) query this data structure about words and relations, and (4) output the answers to queries or the entire data structure, if desired. Some examples of computer output are also given to indicate present directions of the computer-aided research.
{ "name": [ "Edmundson, H. P. and", "Epstein, Martin N." ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 58
1969-09-01
13
2
null
null
Before investigating axioms for synonymy and antonymy, we will recapitulate some notions and notations for the calculus of binary relations. Consider a set V of arbitrary elements, which will be called the universal set. A binary relation on V is defined as a set of ordered pairs of elements of V. Under the assumption that synonymy and antonymy are ternary relations on the set of all words, the following definitions will be used:

xS_iy: word x is a synonym of word y with respect to the intension i (or word x is synonymous in sense i to word y)
xA_iy: word x is an antonym of word y with respect to the intension i (or word x is antonymous in sense i to word y)

In addition to the synonymy and antonymy relations, it will be useful to introduce the classes s_i(y) and a_i(y) that are the images of a word y under these relations. The axioms are:

Axiom 1 (Reflexive): (∀x)[xS_ix]
Axiom 2 (Symmetric): (∀x)(∀y)[xS_iy ⇒ yS_ix]
Axiom 3 (Transitive): (∀x)(∀y)(∀z)[xS_iy ∧ yS_iz ⇒ xS_iz]
Axiom 4 (Irreflexive): (∀x)[¬xA_ix]
Axiom 5 (Symmetric): (∀x)(∀y)[xA_iy ⇒ yA_ix]
Axiom 6 (Antitransitive): (∀x)(∀y)(∀z)[xA_iy ∧ yA_iz ⇒ xS_iz]
Axiom 7 (Right-identity): (∀x)(∀y)(∀z)[xA_iy ∧ yS_iz ⇒ xA_iz]
Axiom 8 (Nonempty): (∀y)(∃x)[xA_iy]

The above eight axioms may be expressed more succinctly in the calculus of relations as follows:

Axiom 1 (Reflexive): I ⊆ S_i
Axiom 2 (Symmetric): S_i = S_i^{-1}
Axiom 3 (Transitive): S_i^2 ⊆ S_i
Axiom 4 (Irreflexive): I ∩ A_i = ∅
Axiom 5 (Symmetric): A_i = A_i^{-1}
Axiom 6 (Antitransitive): A_i^2 ⊆ S_i
Axiom 7 (Right-identity): A_i | S_i ⊆ A_i
Axiom 8 (Nonempty): (∀y)(∃x)[xA_iy]

As mentioned in [3], even though s_i(y) ≠ ∅ since yS_iy by Axiom 1, it may be necessary to add the following axiom:

Axiom 9: (∀y)(∃x)[x ≠ y ∧ xS_iy]

to guarantee that the domain of the relation S_i is not trivial, i.e., s_i(y) − {y} ≠ ∅. Axiom 9 is not necessary if s_i(y) is permitted to be a unit set for certain words. Thus, we might define s_i(y) = {y} for any function word y, e.g., s_i(and) = {and}. But this will not work for antonymy, since a_i(y) might be considered empty for certain words such as function words, e.g., a_i(and) = ∅. The alternative of defining a_i(y) = {y} is not reasonable, since it produces more problems than it solves. Axiom 8, (∀y)(∃x)[xA_iy], which is equivalent to (∀y)[a_i(y) ≠ ∅], is reasonable if the contrary ȳ of word y (e.g., "irrelevant", "impossible", "nonuse", etc.) is permitted, i.e., ȳ ∈ a_i(y).

The matrix form of output represents the relations by a matrix consisting of S's and A's according to whether the relation S or A holds between given pairs of words. A blank in such a matrix indicates that neither S nor A relates two words in the data structure. For example, one such matrix revealed four senses of the word "simple"; the superscript denotes the sense number to be associated with "simple". A "*" is placed to the left of those words that do not appear as main entries in Webster's New Dictionary of Synonyms.
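As an illustrative check (not from the paper), the relational forms of the axioms can be tested directly on small relations represented as sets of ordered pairs:

from itertools import product

WORDS = {"pure", "simple", "absolute"}      # cf. the equivalence class found below
S = set(product(WORDS, repeat=2))           # a single synonym class, reflexive by construction
A = {("full", "empty"), ("empty", "full")}  # a toy antonym pair

def compose(R, Q):
    # Relative product R|Q: x(R|Q)z iff xRy and yQz for some y.
    return {(x, z) for x, y in R for y2, z in Q if y == y2}

assert all((x, x) in S for x in WORDS)      # Axiom 1: I is contained in S_i
assert {(y, x) for x, y in S} == S          # Axiom 2: S_i equals its converse
assert compose(S, S) <= S                   # Axiom 3: S_i squared is contained in S_i
assert compose(A, A) <= {(w, w) for w in {"full", "empty"}}
# Axiom 6: A_i squared lands in synonymy; here it is the identity,
# which Axiom 1 places inside S_i.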
4. <relation> is denoted by:
S - Synonymy
A - Antonymy
M - word used in the description of another word but not itself a main entry
5. <word>,...,<word> is the set of words standing in the given relation to the main entry in the given sense.

Thus, each input item consists of a main-entry word followed by a comma, a one-character grammar code, a one-digit sense number, a one-character relation, a comma, a list of words (separated by commas) that in the given sense stand in the given relation to the main entry, a comma, and a semicolon that denotes the end of an input item. Continuation cards may be appended to any item by placing a "+" in column 80 of subsequent cards.

In the first phase of processing the program checks the well-formedness of the input entries, isolates words, records grammatical classes, and establishes relations between words. The data structure created in core provides for the construction of two tables.

In query statements, "*" denotes that any value in the specified field is allowed and the sense i is not explicitly denoted. Item 1 above operates in the verification mode, while items 2-5 operate in the selection mode. Simple query statements can be extended to allow compound expressions by means of the operators "not", "and", and "then". For example:

? if x,S,y,.and.y,S,z,.then.x,S,z,;
? not x,A,x,;
? if x,A,y,.then.y,A,x,;
? if x,A,y,.and.y,A,z,.then.x,S,z,;

Similar input formats cover the properties of right-identity and nonemptiness.

A loop is detected if every word is preceded by another word and the algorithm cannot locate a word that has no predecessor. This algorithm may be useful in developing techniques for structuring the vocabulary of a synonym-antonym dictionary so that no word is used before it has been defined. The second algorithm determines whether selected groups of words form an equivalence class with respect to synonymy in a given sense. A binary relation R is said to be an equivalence relation if it is reflexive, symmetric, and transitive. For example, the routine found that, aside from reflexivity, the words "pure", "simple", and "absolute" formed an equivalence class in a particular sense i. On the other hand, the words "aft", "astern", "abaft", "after", and "behind" formed two equivalence classes {aft, astern, abaft} and {after, behind}. At present, the graphs of equivalence classes are drawn manually rather than by computer.

Appendix 2 outlines the structure of an input deck and lists a sample input including both input data and query statements. In general, the list form of output consists of lists of the following two types: a list of all words synonymous or antonymous to a given word, and a list of all synonymy or antonymy relations holding among a given set of words.
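The loop-detection procedure described above amounts to repeatedly discarding words with no predecessor; here is a sketch in Python rather than the SNOBOL4 of the original program, with an invented edge representation:

def find_loop(edges):
    # edges: set of (defining_word, defined_word) "used-before" pairs.
    nodes = {w for pair in edges for w in pair}
    while nodes:
        free = {n for n in nodes
                if not any(d == n and s in nodes for s, d in edges)}
        if not free:
            return nodes      # every remaining word has a predecessor: a loop
        nodes -= free
    return set()              # no loop: the vocabulary can be ordered

print(find_loop({("full", "complete"), ("complete", "full")}))     # a 2-cycle
print(find_loop({("full", "complete"), ("complete", "plenary")}))  # set()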
The synonymy and antonymy relations possess interesting properties which can be treated mathematically to provide insight about semantic relations and connectivity among words in a natural language. One such model is the axiom system just stated. The immediate goal of the current research is to compile, in computer-accessible form, a dictionary containing all synonymy and antonymy relations holding between selected words. Such a dictionary is useful in gaining a better understanding of how the English lexicon is semantically structured, since it can eventually enable the determination of the completeness of the descriptions in any synonym-antonym dictionary. Another objective is to assist the lexicographer in compiling such a dictionary so that all words are defined and related in a consistent manner.

For the present research a test dictionary was compiled by selecting English words from Webster's New Dictionary of Synonyms [9]. Accordingly, a set of computer programs was written to do the following:
1. Input, in a prescribed format, words selected from the above dictionary together with relevant data concerning their synonyms and antonyms.
2. Create in core memory a suitable data structure (see [5]) for the input, which permits the manipulation of the dictionary data. Future extensions to the system would make use of direct-access storage to enable the processing of more data.

The test dictionary is analyzed with the aid of computer programs that were written to do the following:
1. Query the data structure about words and relations. Two query modes are built into the system. The first mode allows the selection of words fulfilling an input request and the second mode permits the verification that certain relations hold between selected words.
2. Output the answers to queries or output the entire data structure, if desired.
3. Verify the consistency of word groupings, the degree of completeness of related subgroups, and the presence or absence of anomalies in the data base.

3. Input
It was noted that since the first book on English synonyms, which appeared in the second half of the 18th century, dictionaries of synonyms and antonyms have varied according to the particular explicit or implicit definitions of "synonym" and "antonym" that were used. The roles of grammatical class, word context, and substitutability in the same context were discussed. As was noted, synonymy has traditionally been regarded as a binary relation between two words. Graphs of these binary relations were drawn for several sets of words based on Webster's Dictionary of Synonyms [8], and matrices for these graphs were exhibited as an equivalent representation. These empirical results showed that the concepts of synonymy and antonymy required the use of ternary relations between two words in a specified sense, rather than simply a binary relation between two words. The synonymy relation was then defined implicitly, rather than explicitly, by three axioms stating the properties of being reflexive, symmetric, and transitive. The antonymy relation was also defined by three axioms stating the properties of being irreflexive, symmetric, and antitransitive (the last term was coined for that study). It was noted that these six axioms could be expressed in the calculus of relations and that this relation algebra could be used to produce shorter proofs of theorems, even though no proofs were given. In addition, several geometrical and topological models of synonymy and antonymy were posed and examined.

The characterizations of synonymy and antonymy initiated in Edmundson [2] were investigated more thoroughly in Edmundson [3]. Synonymy and antonymy were defined jointly and implicitly by a set of eight axioms rather than separately as before. First, it was noted that the original six axioms were insufficient to permit the proofs of certain theorems whose truth was strongly suggested by intuitive notions about synonymy and antonymy. In addition, it was discovered that certain fundamental assumptions about synonymy and antonymy must be made explicit as axioms. Some of these have to do with specifying the domain and range of the synonymy and antonymy relations. This is related to questions about whether function words, which linguistically belong to closed classes, should have synonyms and antonyms, and whether content words, which linguistically belong to open classes, must have synonyms and antonyms. Several fundamental theorems of this axiom system were stated and proved, and the informal interpretations of many of these theorems were intuitively satisfying. For example, it was proved that any even power of the antonymy relation is the synonymy relation, while any odd power is the antonymy relation. These results supported the belief that an algebraic characterization is insightful and appropriate. The assumption that synonymy is an equivalence relation has also been made, either directly or indirectly, by F. Kiefer and S. Abraham [4], U. Weinreich [10], and others. Since the axiom system defined the notions of synonymy and antonymy jointly and implicitly, it avoided certain difficulties that are encountered when attempts are made to define these notions separately and explicitly.

First, it is necessary to specify and format the input data so that a set of programs may process and query a test dictionary, which resides in core in the present version of the system.
This is accomplished using the following input prototype:

<word>,<grammar code><sense #><relation>,<word>,...,<word>,;

where
1. <word> is an entry in Webster's New Dictionary of Synonyms.
2. <grammar code> makes use of the following coding mnemonics: N - Noun, V - Verb, J - Adjective, B - Adverb, O - Pronoun, D - Determiner, L - Auxiliary, P - Preposition, C - Conjunction.
3. <sense #> is a one-digit number representing a sense associated with a word in the dictionary.

Several problems remain in fully attaining the above stated goals. On the one hand, it is difficult to select from a manual dictionary sufficiently small sets of words that are closed under the relations S and A, while on the other hand large segments of such a dictionary cannot be input at present. Programs have been written to structure and process small test dictionaries, to select words from the data structure using a query language, and to verify that certain relations hold between words.

The programs were written almost completely in FORTRAN IV and have been run on the IBM 360 and the PDP 10. A flowchart which summarizes these programs appears as Appendix 1. In addition, a SNOBOL4 program has been written for the detection of chains and loops. Several problems in fully achieving the stated research goals have appeared. It was difficult to select small closed sets of words from Webster's New Dictionary of Synonyms, and it was not feasible to keypunch the entire dictionary. Since the size of a truly suitable data base was too large to retain in core memory, several sample dictionaries have been selected to study the feasibility of the principles and techniques involved. Most of the current effort has been devoted to providing programming capability for the processing of small test dictionaries. Different words may be input with each run, thereby increasing the size of the sample data base to gain deeper insight into the properties of the entries listed in a manual dictionary. Further computer-aided research on synonyms and antonyms will help to validate or extend the axiomatic model proposed earlier. Also, future research could consider the additional relations "contrasting" and "analogous" cited in some manual dictionaries, and the automatic determination of the senses of words.
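Returning to the input prototype above, here is a sketch of a reader for such items; since the exact card format is not fully recoverable from the text, the optionality of the grammar code and sense digit (as in the Appendix 2 sample below) is an assumption.

import re

ITEM = re.compile(r"^\s*(?P<word>[A-Z]+),\s*"
                  r"(?P<grammar>[NVJBODLPC])?(?P<sense>\d)?(?P<relation>[SAM]),"
                  r"(?P<rest>.*?),?;\s*$")

def parse_item(line):
    # Returns (word, grammar code, sense, relation, related words).
    m = ITEM.match(line)
    if not m:
        raise ValueError("malformed item: " + line)
    words = [w.strip() for w in m.group("rest").split(",") if w.strip()]
    return (m.group("word"), m.group("grammar"), m.group("sense"),
            m.group("relation"), words)

print(parse_item("SEVERE, S,STERN,AUSTERE,ASCETIC,;"))
# ('SEVERE', None, None, 'S', ['STERN', 'AUSTERE', 'ASCETIC'])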
Main paper: axioms: Before investigating axioms for synonymy and antonymy, we will recapitulate some notions and notations for the calculus of binary relations.Consider a set V of arbitrary elements, which will be called the universal set. A binary relation on V is defined as a set Under the assumption that synonymy and antonymy are ternary relations on the set of all words, the following definitions will be used:xSiY E word x is a synonym of word y with respect to the intension i (or word x is synonymous in sense i to word y)xAiY E word x is an antonym of word y with respect to the intension i (or word x is antonymous in sense i to word y)In addition to the synonymy and antonymy relations, it will be useful to introduce the following classes that are the images by these Axiom 8 (Nonempty):(Vx) [xSix] (Vx) (Vy) [xSiY => xS i ly] (Vx) (Vy) (Vz) [xSiY A YSiz --> xSiz] (VX) [xAix ] (Vx) (Vy) [xAiY --------> xA? l ly] (Vx) (Vy) (Vz) [xAiY A YAiz ~--> xSiz] (Vx) (Vy) (Vz) [xAiY A YSiz ------> xAiz] (Vy) (~x) [xAiY]The above eight axioms may be expressed more succinctly in the calculus of relations as follows:Axiom i (Reflexive): i = S i -i Axiom 2 (Symmetric): S i = S i Axiom 3 (Transitive): S 2 ~ S i i Axiom 4 (Irreflexive)': I = Ai Axiom 5 (Symmetric): A i ~ A i Axiom 6 (Antitransitive): A~ l ~ Si Axiom 7 (Right-identity): AII S i ~ A i Axiom 8 (Nonempty): (Vy)(~x)[xAiY ]As mentioned in [3], even though si(Y ) ~ @ since YSiY by Axiom i, it may be necessary to add the following axiom:Axiom 9: (Vy)(~x)[x # y A xSiY]to guarantee that the domain of the relation S i is not trivia], i.e., si(Y) -(y} # ¢ Axiom 9 is not necessary if si(Y ) is permitted to be a unit set for certain words. Thus, we might define si(Y) = {y} for any function word y, e.g., si(and) = {and}. But this will not work for antonymy since ai(Y ) might be considered empty for certain words such as function words, e.g., ai(and) = ~. The alternative of defining ai(Y ) = {y} is not reasonable since it produces more problems than it solves.Axiom 8: (Vy)~x)[xAiY], which is equivalent to (~y)[ai(Y) # ~] ,is reasonable if the contrary y of word y (e.g., "irrelevant", "impossible", "nonuse", etc.) is permitted, i.e., ~ ¢ ai(Y). research goals: The synonymy and antonymy relations possess interesting proper ~ ties, which can be treated mathematically to provide insight about semantic relations and connectivity among words in a natural language.One such model is the axiom system just stated. The immediate goal of the current research is to compile, in computer-accessible form, a dictionary containing all synonymy and antonymy relations holding between selected words. Such a dictionary is useful in gaining a better understanding of how the English lexicon is semantically structured since it can eventually enable the determination of the completeness of the descriptions in any synonym-antonym dictionary. Another objective is to assist the lexicographer in compiling such a dictionary so that all words are defined and related in a consistent manner.For the present research a test dictionary was compiled by selecting English words from Webster's New Dictionary of Synonyms [9] . Accordingly, a set of computer programs was written to do the following:i. Input, in a prescribed format, words selected from the above dictionary together with relevant data concerning their synonyms and antonyms.2. Create in core memory a suitable data structure (see [5] ) for the input, which permits the manipulation of the dictionary data. 
Future extensions to the system would make use of direct-access storage to enable the processing of more data.The test dictionary is analyzed with the aid of computer programs that were written to do the following:I. Query the data structure about words and relations. Two query modes are built into the system. The first mode allows the selection of words fulfilling an input request and the second mode permits the verification that certain relations hold between selected words.2. Output the answers to queries or output the entire data structure, if desired.3. Verify the consistency of word groupings, the degree of completeness of related subgroups, and the presence or absence of anomalies-in the data base.3. Input input specification: First, it is necessary to specify and format the input data so that a set of programs may process and query a test dictionary, which resides in core in the present version of the system. This is accomplished using the following input prototype:<word>,<grammar code><sense #><relatlon~,~word>,...,<word>,; where i.<word> is an entry in Webster's New Dictionary of Synonyms.<grammar code>makes use of the following coding mnemonics:N -Noun V -Verb J -Adjective B -Adverb 0 -Pronoun D -Determiner L -Auxiliary P -Preposition C -Conjunction 3.<=ense #> is a one-digit number representing a sense associated with a word in the dictionary.Several problems remain in fully attaining the above stated goals.On the one hand, it is difficult to select from a manual dictionary sufficiently small sets of words that are closed under the relations S and A, while on the other hand large segments of such a dictionary cannot be input at present. Programs have been written to stgucture and process small test dictionaries, to select words from the data structure using a query language, and to verify that certain relations hold between words. <relation> is denoted by s -synonymy: A -Antonymy M -word used in the description of another word but not itself a main entry.5. <word>,...,<word> is the set of words standing in the given relation to the main entry in the given sense.Thus, each input item consists of a main-entry word followed by a comma, a one-character grammar code, a one-digit sense number, a one-character relation, a comma, a list of words (separated by commas) that in the given sense stand in the given relation to the main entry, a comma, and a semicolon that denotes the end of an input item. A sample computer input is:Continuation cards may be appended to any item by placing a "+" in column 80 of subsequent cards.In the first phase of pro¢essing the program checks the wellformedness of the input entries, isolates words, records grammatical classes, and establishes relations between words.The data structure created in core provides for the construction of two tables. where "*" denotes that any value in the specified field is allowed and the sense i is not explicitiy denoted. Item 1 above operates in the verification mode, while items 2-5 operate in the selection mode.Simple query statements can be extended to allow compound expresions by means of the operators "not", "and", and "then". For example, ? if x,S,y,.and.y,S,z,.then.x,S,z,;? not x,A,x,;? if x,A,y,.then.y,A,x,;? if x,A,y,.and.y,A,z,.then.x,$,z,;In addition, the input format for the properties of right-identity and nonempty are as follows: words. A loop is detected if every word is preceded by another word and the algorithm cannot locate a word that has no predecessor. 
This algorithm may be useful in developing techniques for structuring the vocabulary of a synonym-antonym dictionary so that no word is used before it has been defined. The second algorithm determines whether selected groups of words form an equivalence class with respect to synonymy in a given sense. A binary relation R is said to be an equivalence relation if it is reflexive, symmetric, and transitive. For example, the routine found that, aside from reflexivity, the words "pure", "simple", and "absolute" formed an equivalence class in a particular sense i. On the other hand, the words "aft", "astern", "abaft", "after", and "behind" formed two equivalence classes {aft, astern, abaft} and {after, behind}. At present, the graphs of equivalence classes are drawn manually, rather than by computer.

Appendix 2 outlines the structure of an input deck and lists a sample input including both input data and query statements. In general, the list form of output consists of lists of the following two types: a list of all words synonymous or antonymous to a given word, and a list of all synonymy or antonymy relations holding among a given set of words.

matrix form: The matrix form of output represents the relations by a matrix consisting of S's and A's according to whether the relation S or A holds between given pairs of words. A blank in such a matrix indicates that neither S nor A relates two words in the data structure. For example, one such matrix revealed four senses of the word "simple". A superscript denotes the sense number to be associated with "simple". A "*" is placed to the left of those words that do not appear as main entries in Webster's New Dictionary of Synonyms.

concluding remarks: The programs were written almost completely in FORTRAN IV and have been run on the IBM 360 and the PDP-10. A flowchart, which summarizes these programs, appears as Appendix 1. In addition, a SNOBOL 4 program has been written for the detection of chains and loops. Several problems in fully achieving the stated research goals have appeared. It was difficult to select small closed sets of words from Webster's New Dictionary of Synonyms, and it was not feasible to keypunch the entire dictionary. Since the size of a truly suitable data base was too large to retain in core memory, several sample dictionaries have been selected to study the feasibility of the principles and techniques involved. Most of the current effort has been devoted to providing programming capability for the processing of small test dictionaries. Different words may be input with each run, thereby increasing the size of the sample data base to gain deeper insight into the properties of the entries listed in a manual dictionary. Further computer-aided research on synonyms and antonyms will help to validate or extend the axiomatic model proposed earlier. Also, future research could consider the additional relations "contrasting" and "analogous" cited in some manual dictionaries, and the automatic determination of the senses of words.
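The two analysis algorithms can be sketched briefly. The loop test below follows the description above (a loop exists when no remaining word lacks a predecessor); the equivalence-class routine simply takes connected components of the symmetric synonymy relation. Both are illustrative reconstructions, not the original SNOBOL 4 or FORTRAN code.

```python
# Hypothetical sketches of the two analysis algorithms described above;
# 'uses' maps each word to the words used in its description.

def find_loop(uses):
    """Repeatedly remove words with no undefined predecessor; a nonempty
    remainder means a definitional loop."""
    pending = dict(uses)
    changed = True
    while changed:
        changed = False
        for w in [w for w, deps in pending.items()
                  if not (deps & pending.keys())]:
            del pending[w]
            changed = True
    return set(pending)          # empty set => no loop

def synonym_classes(S):
    """Connected components of the (symmetric) synonymy relation S."""
    adj = {}
    for x, y in S:
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)
    classes, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend(adj[w] - comp)
        seen |= comp
        classes.append(comp)
    return classes

print(find_loop({"aft": {"astern"}, "astern": {"aft"}, "after": set()}))
print(sorted(sorted(c) for c in synonym_classes(
    {("aft", "astern"), ("astern", "abaft"), ("after", "behind")})))
# -> {'aft', 'astern'} is a loop; classes are the paper's two examples.
```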
S,FULL,COMPLETE,PLENARY,;
PLENARY, S,FULL,COMPLETE,REPLETE,;
PLENARY, A,LIMITED,;
EMPTY, S,VACANT,BLANK,VOID,VACUOUS,;
EMPTY, S,EMPTY,;
INCOMPLETE, M,;
LIMITED, M,;
SEVERE, S,STERN,AUSTERE,ASCETIC,;
SEVERE, A,TOLERANT,;
STERN, S,SEVERE,AUSTERE,ASCETIC,;
STERN, A,SOFT,;
AUSTERE, S,SEVERE,STERN,ASCETIC,;
AUSTERE, A,LUSCIOUS,;
ASCETIC, S,AUSTERE,SEVERE,STERN,;

It was noted that since the first book on English synonyms, which appeared in the second half of the 18th century, dictionaries of synonyms and antonyms have varied according to the particular explicit or implicit definitions of "synonym" and "antonym" that were used. The roles of grammatical class, word context, and substitutability in the same context were discussed. As was noted, synonymy traditionally has been regarded as a binary relation between two words. Graphs of these binary relations were drawn for several sets of words based on Webster's Dictionary of Synonyms [8], and matrices for these graphs were exhibited as an equivalent representation. These empirical results showed that the concepts of synonymy and antonymy required the use of ternary relations between two words in a specified sense rather than simply a binary relation between two words. The synonymy relation was then defined implicitly, rather than explicitly, by three axioms stating the properties of being reflexive, symmetric, and transitive. The antonymy relation was also defined by three axioms stating the properties of being irreflexive, symmetric, and antitransitive (the last term was coined for that study). It was noted that these six axioms could be expressed in the calculus of relations and that this relation algebra could be used to produce shorter proofs of theorems, even though no proofs were given. In addition, several geometrical and topological models of synonymy and antonymy were posed and examined.

The characterizations of synonymy and antonymy initiated in Edmundson [2] were investigated more thoroughly in Edmundson [3]. Synonymy and antonymy were defined jointly and implicitly by a set of eight axioms rather than separately as before. First, it was noted that the original six axioms were insufficient to permit the proofs of certain theorems whose truth was strongly suggested by intuitive notions about synonymy and antonymy. In addition, it was discovered that certain fundamental assumptions about synonymy and antonymy must be made explicit as axioms. Some of these have to do with specifying the domain and range of the synonymy and antonymy relations. This is related to questions about whether function words, which linguistically belong to closed classes, should have synonyms and antonyms, and whether content words, which linguistically belong to open classes, must have synonyms and antonyms. Several fundamental theorems of this axiom system were stated and proved. The informal interpretations of many of these theorems were intuitively satisfying. For example, it was proved that any even power of the antonymy relation is the synonymy relation, while any odd power is the antonymy relation. These results supported the belief that an algebraic characterization is insightful and appropriate. For example, the assumption that synonymy is an equivalence relation also has been made, either directly or indirectly, by F. Kiefer and S. Abraham [4], U. Weinreich [10], and others.
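The even/odd-power theorem cited above follows by a short induction in the calculus of relations; the sketch below, which assumes only Axioms 6 and 7, shows the inclusion direction of the claim.

```latex
% Sketch: induction on the power n, using Axiom 6 (A_i^2 \subseteq S_i)
% and Axiom 7 (A_i S_i \subseteq A_i); only inclusions are shown.
A_i^1 \subseteq A_i, \qquad A_i^2 \subseteq S_i \quad\text{(Axiom 6)}
A_i^{2k} \subseteq S_i \;\Rightarrow\;
  A_i^{2k+1} = A_i\,A_i^{2k} \subseteq A_i S_i \subseteq A_i
  \quad\text{(Axiom 7)}
A_i^{2k+1} \subseteq A_i \;\Rightarrow\;
  A_i^{2k+2} = A_i^{2k+1}A_i \subseteq A_i^2 \subseteq S_i
  \quad\text{(Axiom 6)}
```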
Since the axiom system defined the notions of synonymy and antonymy jointly and implicitly, it avoided certain difficulties that are encountered when attempts are made to define these notions separately and explicitly. Appendix:
null
null
null
null
{ "paperhash": [ "knuth|the_art_of_computer_programming,_volume_i:_fundamental_algorithms,_2nd_edition", "edmundson|1967_international_conference_on_computational_linguistics_axiomatic_characterization_of_synonymy_and_antonymy", "knuth|the_art_of_computer_programming", "|axiomatic_characterization_of_synonymy_and_antonymy", "kiefer|some_problems_of_formalization_in_linguistics", "naess|synonymity_as_revealed_by_intuition" ], "title": [ "The Art of Computer Programming, Volume I: Fundamental Algorithms, 2nd Edition", "1967 International Conference on Computational Linguistics axiomatic characterization of synonymy and antonymy", "The Art of Computer Programming", "Axiomatic Characterization of Synonymy and Antonymy", "SOME PROBLEMS OF FORMALIZATION IN LINGUISTICS", "Synonymity as Revealed by Intuition" ], "abstract": [ "A container closure assembly for maintaining a sterile sealed container is provided having a ferrule having a top annular portion and a depending skirt portion for securing a resilient stopper for sealing the mouth of a container to the container, and a ring fitment for opening the closure assembly overlying and interlocking with the ferrule. The ferrule includes a central opening in the top annular portion, upwardly projecting locking portions about the periphery of the opening and a weakening line radially outwardly of and concentric with the locking portions. The fitment includes a disk portion and a concentric outer lifting ring hingedly connected to the inner disk portion for opening the closure assembly. The disk portion is secured within the opening of the top portion of the ferrule by the locking projections.", "Traditionally, synonymy has been regarded as a binary relation between two words. Graphs of these binary relations were drawn for several sets of words based on Webster's Dictionary of S~non~ms and matrices for these graphs were exhibited as an equivalent representation. These empirical results showed that the concepts of synonymy and entonymy required the use of ternary relations between two words in a specified sense rather than simply a binary relation between two words. The synonymy relation was then defined implicitly, rather than explicitly, by three axiams stating the properties of being reflexive, symmetriC, and t/~ansitive. The entonym¥ relation was also defined by three axioms stating the properties of being irreflexive, symmetric, and antit/~ansit~ve (the last term was coined for that study). It was noted that thes~ six axioms could be expressed in the calculus of relations and that this relation algebra could be used to produce shorter proofs of t~eorems. However, no proofs were given. In addition, several gec~aet~ical and topological models of synonymy and antonymy '..J~ were posed and examined. ,~ It was nOted that certain of these models were of more theoretical than practical interest. Each model was seen to be simple in that it\" could be expressed from mathematically elementary concepts, end each stressed certain aspects of the linguistic object being modeled at the expense of others. However, there seemed to be little theoreti~al preference among them. Their adequacy as models could be measured by their generality and predictive power. In terms of these criteria the algebraic model, whether expressed in terms of relations, graphs, or matrices, seamed to have the most usefulness. 
In part, this was due to the fact that one geametrical model, although highly suggestive, did not include a precise specification of the origin, axes, or coordinates for words in an n-dimensional space. Similarly, one topological model required a closure operation for each of the intensions or senses and had no linguistically interesting interpretation.", "A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.", "This work is a con t inua t ion o f research repor ted in the paper Mathematical Models o f S ~ n o n ~ , which was presented a t the 1965 I n t e r n a t i o n a l Conference on Computational L i n g u i s t i c s . That paper p resen ted a h i s t o r i c a l summary of the concepts of synonymy and antonyms. I t was noted t h a t s ince the f i r s t book on Engl ish synoD S , which appeared in the second h a l f of the l a t h cen tury , d i c t i o n a r i e s of synonyms and antonyms have va r i ed according t o the p a r t i c u l a r e x p l i c i t d e f i n i t i o n s o f \"synonym\" and \"antonym\" t h a t were used. The r o l e s of p a r t o f s p e e c h , contex t of a word, and s u b s t i t u t a b i l i t y in the same context were d iscussed .", "1. During the last decade, linguists have contributed much to an exact theory of language. The main characteristics of an exact theory are that it operates only on the basis of clear-cut notions and that its statements can be exactly inferred from within the theory. In other words, an exact theory may contain only primitive (i.e., undefined) notions and notions exactly defined on the basis of the primitive notions. On the other hand, some statements are postulated (axioms), whereas others are proved (i.e., are derived from the axioms by an exact method of inference). The first science within which methods for the construction of an exact theory were developed was mathematics. Therefore, it has become customary to speak of mathematical methods, although these methods are common to all exact theories. It would be more reasonable to speak of exact methodology and accordingly of EXACT LINGUISTICS instead of mathematical linguistics. As a starting point, we shall accept the plausible statement that each language forms a structure. By structure we mean a pair", "TN HIS \"Analytic Sentences,\" Benson Mates contends \"that one is justified in saying that there are 'intuitive' notions of analyticity and synonymy.\"' This empirical hypothesis about the existence of certain phenomena or kinds of phenomena is tenable, so far as I can see. Mates has an intuitive notion of synonymity; I have had several in my life, and there is reason to believe that all of them have much in common. On the other hand, there is no reason to believe that the various intuited entities are identical or near identical. 
If a notion is distinguished from a concept, a sentence of the kind \"The person P has an intuitive notion of synonymity\" may conveniently be made to imply nothing about P's definiteness of intention, whereas the use of the term \"concept\" instead of \"notion\" in that skeleton sentence may be made to imply a certain minimal definiteness of intention. Calling something \"an intuitive notion of x,\" where \"x\" is a word or series of words borrowed from the natural languages, suggests that a certain designation is appropriate as a sign vehicle responsible for designating x in particular communicational events. That is, one suggests, if not implicitly asserts, something about the nature of the intuited entity and something about the use of the designation. Suppose, for example, x is the designation \"synonymity.\" The intuited entity would then not be considered the same as, for example, the intuited notion of heteronymity. This much may be inferred on the basis of plausible premises. But since the choice of one designation does not imply a rejection of all others, the entity which one author in one situation tries to refer to by the word \"synonymity,\" he or others may in some other situations refer to by means of \"sameness of meaning\" or even \"simultaneity\"-for all we know. In the following, however, we shall leave out of consideration any use that seems awkward to us." ], "authors": [ { "name": [ "D. Knuth" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "H. P. Edmundson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Donald E. Knuth" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [], "affiliation": [] }, { "name": [ "F. Kiefer", "S. Abraham" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Naess" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null ], "s2_corpus_id": [ "7168822", "62719604", "267817888", "7596334", "143446912", "170137191" ], "intents": [ [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false ] }
Problem: The paper aims to investigate the extent to which a set of eight axioms models an existing dictionary of synonyms and antonyms. Solution: The hypothesis is that the axioms proposed in the paper accurately represent the relationships of synonymy and antonymy in the existing dictionary, as demonstrated through the use of computer programs for analysis and manipulation of the data.
665
0.003008
null
null
null
null
null
null
null
null
b222746e6ccf8ebb07f2c5f9a70b635bb8db0fb8
18533917
null
Über Zeitreferenz und Tempus
On time reference and tense.
{ "name": [ "Wunderlich, Dieter" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 23
1969-09-01
5
0
null
The problems of time reference and tense in natural languages are discussed in their syntactic and semantic aspects. The syntactic description is based on the principles of generative transformational grammar. Since tense is a deictic category, the semantics language must be a pragmatically extended system. Furthermore, it seems that the use of tense and time adverbials in the languages of highly civilized cultures can be adequately described in terms of an extensional language. (In all these cultures, conflicts concerning time reference are solved by consulting an extensional clock and calendar system.) The semantic description is based on work done by the logicians Montague and Kamp from UCLA. In addition to the theoretical treatment of problems of time reference, the possible applications of the proposed system will be sketched.

The syntactic system. Tense is considered not as a constituent of a base phrase marker, but as a syntactic feature serving as a condition for inflexional rules. These features are attached to sentence constituents. They are to be understood as binary relations [e rel s] between reference indices which can receive a definite semantic interpretation, with rel = {before (vor), after (nach), overlapping (übl)}; one index is called the index of event time e, the other the index of utterance time s. Unlike tense, time adverbials are introduced as constituents of a base phrase marker. They are subcategorized into features similar to the tense features. A distinction is drawn between adverbials which are related to context (Advb_K → [e rel e']) and those which are related to the time of utterance (Advb_B → [e rel s]). Since the latter imply corresponding tenses, they are treated as alternatives to the tenses. For the rules cf. p. 5/b.

The semantic system. The semantic system is a system of predicate logic with the following descriptive extension: a set of variables over time intervals t and a primitive ordering relation "before", together with some axioms defining the properties of "before". Each variable might be understood as a possible argument of sentences in ordinary predicate logic. Several models for the time system are specified: the time intervals are represented as elements of the power set of the real numbers, or the time intervals are represented as real numbers with neighbourhoods (stating a topology over the set of real numbers). Sometimes an approximation of the minimal time intervals to time points is used, too. In this case time intervals simply are represented as intervals of time points. The translation from the syntactic to the semantic system is obtained by means of the reference indices, which are interpreted over a given set of utterance times resp. event times. The symbol "rel" in the feature imposes specified conditions which hold between T and t, resp. T and t'.

The time structure of texts. Some preliminary considerations about possible basic time structures of texts are given. The following notion for the time structure TS_m of a text containing m sentences is proposed: TS_m = ⟨UTV, EOR, Id, R⟩, with UTV = immediate text predecessor, EOR = events ordering relation, Id = identity, and R = a matching from the totally ordered set of sentences into the ordered set of events.

The following remarks are intended first and foremost as a contribution to the theory of grammar for natural languages. 1)
Only secondarily, and for the present still very fragmentarily, do we consider how, on the basis of the theory presented, algorithms can be developed for the analysis of texts and for the reconstruction of the event relations described in them. An attempt is made to develop a formal system for the representation of time-referential relations. These are examined from both a syntactic and a semantic point of view. In natural languages time reference is possible through a number of different kinds of expressions: through time adverbs, temporal conjunctions and prepositional time phrases (grouped together under the name time adverbial), and likewise through verbal and nominal expressions (which are disregarded here, except insofar as the nominal expressions are to be understood as reduction forms of sentences). In addition, implicit time reference is possible, e.g. through causal phrases. Furthermore, the tense morphemes are used to express temporal relations (although they can also express modal and aspectual relations). Time adverbials and tense morphemes are referred to jointly under the name time expressions.

For the syntactic representation of the time expressions, a system of rules is assumed that largely corresponds to the postulates of generative-transformational grammar theory. In formulating the semantic system, proposals of the Californian logicians Montague and Kamp are drawn upon. 2) Throughout, the facts of the German language are taken as the basis, but the procedure can be transferred without difficulty to related languages. It may even be conjectured that the descriptive system can be applied to all languages in which an explicit reference to external time scales (i.e. to clock and calendar systems) is possible, and thus to practically all languages of civilization. Since in these languages situations of conflict or doubt concerning a time reference are commonly settled by referring to the conventional clock and calendar system (which is extensional in character), it seems justified to give extensional models for the semantic descriptive system as well.

Notes. 1) Sections 2 and 3 below contain an excerpt from my dissertation "Tempus und Zeitreferenz im Deutschen", TU Berlin 1969. 2) Richard Montague, "Pragmatics and Intensional Logic", Southern California Logic Colloquium, 6 January 1967 (mimeographed); J. A. W. Kamp, "On tense logic", Logic Colloquium UCLA, 20 February 1967 (mimeographed).

The syntactic system: In the syntactic component of a language description, the combinatorial properties and relations of the expressions of the language are described. The first task is to derive the segmentation relationships and the distributional properties of the sentence constituents in as economical a way as possible by a number of generally formulated rules; this is done by first constructing base phrase markers (= deep structures) by means of ordered context-free phrase structure rules, possibly context-sensitive subcategorization rules and lexicon rules, and then successively converting the deep structures, through a sequence of transformation rules, into well-formed derived phrase markers (= surface structures).
The second task is to assign to the constituents introduced syntactic features (or feature complexes), which can fulfil several functions (and often several at once): they serve as context conditions for the application of subcategorization rules, [...]. These include a speech-time index, and furthermore at least indexings of the persons of the speaker and the addressee, and local indexings. With the help of pragmatic indices of this kind one can formulate the processes that lead to the person morphemes or the personal pronouns of the 1st and 2nd person, to local-deictic expressions such as hierher, dort, komm, to the distinction between direct and indirect speech, and likewise to the tense morphemes and the speech-time-relative adverbials such as neulich, morgen, jetzt, in einer Woche. The following system of rules gives only an illustrative excerpt from the syntactic description of German; numerous details are neglected, e.g. all modal sentence adverbials, the indirect objects and the prepositional objects. Moreover, no transformation rules are formulated.

(∀⟨t1,t2⟩)(t1 vor t2 ∨ t1 nach t2 ∨ t1 übl t2)

From Ax1 and Ax2 follows the non-cyclicity of Z in the finite case. Further, "teil" (part) and "mom" (for moment) can be defined:

Def3. t1 teil t2 =df (∀t)(t übl t1 ⊃ t übl t2)
Def4. mom t1 =df (∀t)(t teil t1 ⊃ t1 teil t)

A special case of overlapping is simultaneity ("glz"):

Def5. t1 glz t2 =df t1 übl t2 ∧ (∀t)(t teil t1 ≡ t teil t2)

Every variable t can be understood as a possible argument of a formula of predicate logic. Formulas whose arguments are constant except for t are sentences (or propositions P) in ordinary predicate logic; provided they fulfil the truth conditions, they represent variables V over possible events; when t is replaced by a constant z, they represent possible events E. (Events need by no means be only states of affairs; they can also be conceived fictively.) With the help of the ordering relation "vor" one can at the same time also obtain an ordering of the variables over possible events:

Ax4. E P1(...,t1,...) ∧ E P2(...,t2,...) ∧ t1 vor t2 ∧ wahr P1 ∧ wahr P2 ⊃ V1 vor V2

For the time system, several extensional models can be given with the help of the set of real numbers R. (1) Time intervals are represented as elements of the power set P(R), where all connected subsets of R come into question. [...] Theoretically, Model 3 raises the fewest difficulties. But in order to describe the facts of natural languages adequately, Model 1 or Model 2 must also be drawn upon. Here Model 2 seems to offer the greater advantages: (1) Expressions such as jetzt or heute describe neighbourhoods of the speech time which, depending on the context, can be small or large: ich bin jetzt müde vs. es ist jetzt Sommer vs. wir leben jetzt in einer Zwischeneiszeit. (2) Expressions such as gleich vs. bald, vorhin vs. neulich cannot be given a precise meaning. The distance from the speech time that distinguishes them can likewise be stated only vaguely, i.e. as a distance between two neighbourhoods. (3) Even statements that use expressions of the public time system do not always have a precise meaning, but they can be made precise: das Unglück passierte um 9 Uhr → das Unglück passierte genau um 9 Uhr; vor 3 Jahren wurde meine Tochter geboren → vor 3 Jahren, 4 Tagen und 2 Stunden wurde meine Tochter geboren.
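For concreteness, the relations vor, nach, übl, teil and glz can be realized in the simplest of the models, in which a time interval is a closed interval of real points. The Python sketch below is illustrative only; in this model Def. 3 specializes to set containment.

```python
# Illustrative sketch of the interval relations, using the simple model
# in which a time interval is a closed interval [a, b] of real points.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    a: float  # start
    b: float  # end, with a <= b

def vor(t1, t2):            # t1 wholly before t2
    return t1.b < t2.a

def nach(t1, t2):           # t1 wholly after t2
    return vor(t2, t1)

def uebl(t1, t2):           # t1 overlaps t2
    return not vor(t1, t2) and not nach(t1, t2)

def teil(t1, t2):           # t1 is part of t2 (Def. 3, here: containment)
    return t2.a <= t1.a and t1.b <= t2.b

def glz(t1, t2):            # simultaneity (Def. 5): mutual parthood
    return teil(t1, t2) and teil(t2, t1)

t1, t2 = Interval(0, 2), Interval(1, 5)
print(vor(t1, t2), uebl(t1, t2), teil(Interval(1, 2), t2), glz(t1, t1))
# -> False True True True
```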
4. Possible applications. The system of time expressions and time references explained in sections 2 and 3 has not at present been developed far enough to make practical applications possible already. I can therefore only sketch the directions in which applications appear possible or are planned.

4.1. Time algorithm. The main task will be to develop an algorithm that can discover the time structure inherent in a text, i.e. the temporal relations between the states of affairs described in the individual parts of the text. This algorithm must be oriented towards the possible surface structures of sentences; it must trace them back to corresponding deep structures. However, no complete reconstruction of the deep structures is necessary; a partial reconstruction suffices, namely of the underlying system of event-time indices, so that the mapping of a text sequence onto the expressed event sequences becomes possible. (1) Reference indices are assigned to all expressions that, roughly speaking, refer to actions, processes or states and therefore have sentence status in deep structure: primarily to all finite verb constructions, furthermore to the participial and infinitive constructions and to the nomina actionis, which can be understood as nominalizations of underlying sentences, and in addition to all clock-time and date specifications. (2) The relations between the reference indices are determined. For this purpose the following must be analysed [...].

For the construction of such an algorithm, various preconditions must first be fulfilled: (1) A complete list of all time-referring expressions must be drawn up, together with their syntactic environments and their syntactic behaviour. If only the most frequently occurring expressions are to be recorded in the list, a mechanical survey of text samples must be undertaken. (2) A subalgorithm must be developed that tests the compatibility of time expressions, e.g. the compatibility of tense forms with certain classes of time adverbs. The testing algorithm operates over the semantic representations of the expressions, e.g. by forming intersections. If the intersections are empty, there is incompatibility. (3) Besides the time-referential structure of texts, the coreferential structure of noun phrases NP must be included, at least in part; above all, identical NPs must be marked and the form of articles and pronominal expressions investigated. Only by comparing the time-referential structure of a text with its nominal reference structure can it be established whether the text under investigation presents several mutually independent processes or only subprocesses of a single process.

4.2. Basic structures. For the classification (and thus also a perspicuous representation) of time-referential relations in texts, it is useful to investigate the possible basic structures. It is assumed in principle that texts present sequences of events or of subprocesses E (or variables over event sequences). (Stationary states are also understood as subprocesses.) All sequences are ordered with respect to the time variables, i.e. with the sequence (E1, E2, E3, ..., En-1, En, ...) the relations t1 vor t2, t2 vor t3, ..., tn-1 vor tn, ... are given at the same time. The following [...]
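Two of the components named above admit one-line sketches: the ordering that a narrated event sequence induces on its time variables, and the intersection test for the compatibility of time expressions (an empty intersection signals incompatibility). Both are hypothetical illustrations; the denotation sets are stand-ins for real semantic representations.

```python
# Hypothetical sketches: (a) the 'vor' ordering induced by the event
# sequence (E1, ..., En) of a text, and (b) the intersection test for
# compatibility of time expressions. All names are illustrative.

def event_order(events):
    """'vor' pairs induced by narrative order (transitive closure)."""
    return {(events[i], events[j])
            for i in range(len(events)) for j in range(i + 1, len(events))}

def compatible(denotation_a, denotation_b):
    """Compatible iff the denoted sets of times intersect."""
    return bool(denotation_a & denotation_b)

print(sorted(event_order(["E1", "E2", "E3"])))
# [('E1', 'E2'), ('E1', 'E3'), ('E2', 'E3')]
print(compatible({"past"}, {"past", "present"}))   # True
print(compatible({"future"}, {"past"}))            # False
```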
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
16a29a1194dc67c25e4983a73dfbcf66336eea9e
7129409
null
Network of Binary Relations in Natural Language
It is an essential characteristic of natural languages that one word can be concatenated with certain others to form a string that enters into correct phrases of the language, while it cannot be concatenated with others. The same holds true for strings of words. Such concatenable elements are also "mutually compatible elements" in the sense used by KVAL in a paper describing an algorithm for forming maximum classes of such elements. Mathematically, a set of ordered pairs of such "mutually compatible elements" forms a relation. Every string of words belonging to a language can be regarded as being obtained by successive concatenation of ordered pairs of mutually compatible elements, i.e. as formed by successive concatenation of elements of binary relations belonging to the language. Some of these strings are the phrases of the language. It is thus possible to define a grammar of relations and to generate by it all phrases of the language. If we try to describe this generation by a graph, the graph will be a network describing the whole system of the language under consideration. The equivalence between a grammar of relations and an IC-grammar, and the equivalence between a grammar of relations and a categorial grammar, are almost self-evident. By using the notations for union, intersection and Cartesian product it is possible to write one single formula, however
{ "name": [ "Birbanescu, Adrian" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 54
1969-09-01
8
0
null
null
null
null
null
We shall call a string also a sequence containing one single word. The empty word λ is characterized by aᵢλ = λaᵢ = aᵢ for every aᵢ ∈ V. Some strings will be called sentences. The set of all sentences generated on V is by definition the language L. By a grammar G we shall understand a set of rules by which it is possible to generate the language L. The set is thus composed of 12 ordered pairs and triplets. Conversely, [...]
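The generation step can be illustrated by treating a compatibility relation as a directed graph over the vocabulary and concatenating along its arcs. The sketch below is not the paper's 12-rule grammar; the toy relation and all names are invented for illustration.

```python
# Illustrative sketch: generating strings by successive concatenation of
# mutually compatible word pairs, i.e. by walking a binary relation viewed
# as a directed graph over the vocabulary. Toy data only.

R = {("the", "man"), ("the", "dog"), ("man", "spoke"), ("dog", "spoke")}

def generate(relation, max_len):
    succ = {}
    for x, y in relation:
        succ.setdefault(x, []).append(y)
    vocab = {x for x, _ in relation} | {y for _, y in relation}
    strings = [[w] for w in vocab]
    out = []
    for _ in range(max_len - 1):
        strings = [s + [n] for s in strings for n in succ.get(s[-1], [])]
        out.extend(strings)
    return [" ".join(s) for s in out]

print(generate(R, 3))
# e.g. ['the man', 'the dog', 'man spoke', 'dog spoke',
#       'the man spoke', 'the dog spoke'] (order may vary)
```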
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
0adb9eb9fa4e990a8be2adbf15ba226ab1f83f38
18605821
null
Computational Analysis of Interference Phenomena on the Lexical Level
This contribution presents the results of a comparison of Dutch texts written by bilinguals 1) (speaking French and Dutch) with Dutch texts regarded as STANDARD WRITTEN DUTCH. Attention was focussed on French loan-words appearing in both types of texts and the differences in their use. Certain generalizations as to the mechanisms of interference are suggested.

1. Materials. The materials used for the present contribution belong to two groups:
- Group A: texts written by francophones with ca. 6 years of Dutch training. These texts represent what we call Francophone Written Dutch (below FWD).
- Group B: texts from recent contemporary Dutch literature by both Dutch and Flemish authors. They will here represent Standard Written Dutch (SWD).

(*) We are greatly indebted for the assistance of our colleagues Mr. L. DE BUSSCHERE, who prepared all computer programs needed in this investigation, Mr. R. EECKHOUT, who helped us with many suggestions as to the possibilities of information-processing techniques and with critical remarks concerning the linguistic aspects of our problem, and, last but not least, the Direction of the MATHEMATICAL CENTRE of the University of Louvain, who put at our disposal the IBM-360 computer.
{ "name": [ "Skalmowski, W. and", "Van Overbeke, M." ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 16
1969-09-01
4
0
null
The materials used for the present contribution belong to the two groups described above. The texts of group A were written by 400 francophone 18-year-old pupils in the highest classes at the 61 private secondary schools in Brussels and its suburbs. This sample represents one fifth of the total population. From every pupil we obtained two Dutch compositions, one of them a piece of homework written in November 1967, another an examination composition from December of the same year. The reasons for this choice are evident, since the pupils can call in their parents' and their dictionaries' assistance in the first situation but not in the second. From every composition the first 125 words were put on punch cards together with coded information as to their source. In this way a corpus of ca. 100,000 words was compiled. In order to allow for comparison of relative parameters such as word spread, vocabulary growth etc., it was later divided into two parts each containing ca. 50,000 words (parts 1 and 2 below). The texts of group B, i.e. the SWD, were obtained by putting together extracts from literary work by 10 contemporary authors. This anthology gave us a corpus of some 10,000 words. The first part of group A reflects ca. 50 different subject matters, whereas the SWD anthology reflects only 10 subject matters or "themes". So the disproportion of corpora is outweighed by a themes/tokens ratio which is 1/1,000 in both corpora.
null
null
lexical interference and word length: As a first approximation test, the percentage of foreign words in the vocabulary of both FWD and SWD texts was established. In other words, the "conceptual symbols" do not represent separate pieces of the univers de discours taken at random, but are probably ordered by some classificational system, resembling the biological classification.
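The first approximation test amounts to a vocabulary-level ratio. A minimal sketch, assuming a precompiled lookup set of French loan-words (real identification would need a lexicon), is:

```python
# Minimal sketch of the first approximation test: the share of French
# loan-word types in the vocabulary of a tokenized corpus. 'LOANWORDS'
# is an assumed lookup set; all data here is illustrative.

LOANWORDS = {"bureau", "niveau", "crayon"}

def loanword_share(tokens):
    vocab = set(tokens)
    return len(vocab & LOANWORDS) / len(vocab)

tokens = "de leerling legt het crayon op het bureau".split()
print(f"{loanword_share(tokens):.1%}")   # 2 loan types / 7 types = 28.6%
```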
word content and entropy: To test this hypothesis we divided the FWD material into
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
b85bb7066877714774e53a9562eb32d9ad490320
20631784
null
On Saturated Partitions
Stephan-Ylan Solomon. Let L be a language over a vocabulary V, and let us denote by E(V) the set of all equivalence relations (partitions) on V. If π ∈ E(V) and x ∈ V, then we shall denote by π(x) the cell of π containing the element x. Definition 1. A partition π ∈ E(V) is said to be saturated if for every i (1 ≤ i ≤ n) there exist such elements x_j (j = 1, ...,
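The notation π(x) can be made concrete with a few lines of Python; the representation of a partition as a list of cells is purely illustrative.

```python
# Minimal sketch of the notation above: a partition of V as a list of
# cells, with cell(pi, x) returning the cell of pi containing x.

def cell(partition, x):
    """Return the cell of the partition containing element x."""
    for c in partition:
        if x in c:
            return c
    raise KeyError(x)

V = {"a", "b", "c", "d"}
pi = [{"a", "b"}, {"c"}, {"d"}]
print(cell(pi, "a"))   # {'a', 'b'}
```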
{ "name": [ "Solomon, Stephan-Ylan" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
0
null
null
null
null
null
The connection between the asterisk and the derivative of a partition is given by: in order that π′ = π*, it is necessary and sufficient that π be saturated. By using the notation [...] we have: Theorem E. There exists a natural number n so that π⁽ⁿ⁾ = ω (where ω is the improper partition of V).
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
b7b5a183bdb8caab50012cdb433d3f3d8bca6ee8
28847083
null
Le Projet de Traduction Automatique à l'Université de Montréal
I - HISTORY OF THE PROJECT. Our research began nearly four years ago. The project was at first integrated into the general activities of the Faculté des Lettres of the Université de Montréal; it now reports directly to the Vice-Rector for Research of the same university.
{ "name": [ "Dugas, Andre and", "Gopnik, Myrna and", "Harris, Brian and", "Paillet, Jean-Pierre" ], "affiliation": [ null, null, null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 55
1969-09-01
11
0
null
As far as this research is concerned, periodic reports continue to appear, and the latest of them adds considerably to the all-too-brief notes that follow. We shall give neither a history nor a formal description of the W-system; our computer-science colleagues do so in a paper at the A.C.M. Congress, San Francisco, August 1969 [DE CHASTELLIER and COLMERAUER (1969)]. We want rather to explain briefly the linguistic use that we make of the characteristics of this system.

The system consists of a parser-interpreter (P.I.) and a synthesizer, both of transformational power. The synthesizer is the inverse of the P.I., apart from a few details, and we shall consider only the P.I. in what follows. Dictionary entries are written in the same format and with the same status as the other rules, and so far there is no separate treatment for the dictionary [cf. section V]. The system is essentially designed to process strings, although, as we shall see, it also makes it possible to process trees. The input data for the P.I. consists of a string and a grammar; the output is one (or more) "interpreted" string(s). The difference between a string and an interpreted string is the following: the former is a non-concatenated sequence, which can subsequently be concatenated by applying appropriate rules to it; the latter has undergone concatenation and forms a single complex symbol. The consequence, sometimes troublesome, is that the P.I. can process an interpreted string only in its entirety. For example, if a rule has been applied:

THE MAN . SPOKE → THE~MAN~SPOKE

it will be impossible to apply subsequent rules to SPOKE, unless the rule specifies THE~MAN as well. Tree structures can be indicated by parenthesized strings. Likewise, node or function labels can be introduced into the strings; there are no subscripts. In the formal description, the final interpreted strings are called "axiomatic strings". (The synthesizer begins its derivation from these "axiomatic strings".) In our translation system, the P.I. input comprises an English string [cf. section VI]; the "axiomatic strings" are strings of our pivot language [cf. section IV]; the output of the synthesizer is a translation into a "restricted" French [cf. section VII]. The P.I. and the synthesizer can be chained for direct passage from English to French.

A W-grammar is made up of two parts, i.e. two disjoint sets of rules, which are described below. The system applies each of the two parts one after the other, at each stage of processing. This continual alternation is perhaps the most unusual feature of the W-grammar for a linguist, and beginners generally take a few weeks to get used to it.

(Figure caption: the arc marked ---> completes the graph but is not used in the PS structure specified by the metarules.)

One of the parts of the grammar consists of "metarules". These are context-free. The system uses them to label certain nodes of the network. Unary rules attach several labels to the same node. We may consider that the others build constituent-structure trees, if we write the grammar in that sense. A string processed in this way can now be described by the profile of a section through the network that the metagrammar has attached to it.
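The chain versus interpreted-chain distinction can be illustrated with a toy rewriting step: once matched symbols are fused into one complex symbol, later rules can address it only as a whole. This sketch is not the W-system's actual notation; all names are illustrative.

```python
# Toy sketch of the chain / interpreted-chain distinction: applying a rule
# fuses the matched sub-chain into one complex symbol.

def apply_rule(chain, lhs, fused):
    """Replace the first occurrence of the sub-chain lhs by one fused symbol."""
    for i in range(len(chain) - len(lhs) + 1):
        if chain[i:i + len(lhs)] == lhs:
            return chain[:i] + [fused] + chain[i + len(lhs):]
    return chain

chain = ["THE", "MAN", "SPOKE"]
print(apply_rule(chain, ["THE", "MAN", "SPOKE"], "THE~MAN~SPOKE"))
# ['THE~MAN~SPOKE']  -- later rules can no longer address SPOKE alone
```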
A string so processed can henceforth be described by the profile of a section through the network that the metagrammar has attached to it. To do so, one must state, from left to right, the names of the nodes that the profile crosses, nodes which must therefore have been introduced by the metarules. Thus the string of fig. 1 can be described by "DETERMINER MAN SPOKE" or "NP SPOKE", or any other correct profile. Finally, a whole "tree" can be denoted simply by the name of its top node. On every segment, the system carries out all the analyses that the metarules allow. As in other algorithms (for example Cocke's), this produces analyses of substrings that will prove abortive before the processing of the whole input string is finished (a chart-style sketch of this strategy follows this passage). It must be admitted that this strategy loads the computer's memory, and for the moment our strings are limited to about 30 symbols [but see Section VIII].

The description by profiles has proved very useful for writing the other rules of the grammar, called "pseudorules" (a name that reveals little about the use linguists make of these rules). In the transformational part one can thus write "pseudorules" which, together with the metarules, generate a type-0 language. There is no restriction on the number of symbols nor on the types of operations in these rules. Since a symbol can name a tree (thanks to the metarules), the practical result is considerable power in tree processing. In fact we think the W-system could be used as a tester of transformational grammars, provided certain improvements were introduced, for example rule ordering.

Fig. 3 gives an example of a W-grammar, linguistically elementary but complete, together with a corresponding output. In the P.I., the rules are to be read "<right-hand member> is rewritten <left-hand member>". For clarity of exposition, we have somewhat changed the format of the actual machine output.

One may ideally picture a man's "language faculty" as a nondeterministic machine whose function is to effect a correspondence between certain sound strings and certain semantic representations, understood as a collection of relations chosen (by an operation we may call "abstraction") from among those that would hold between elements of perception. Ideal translation, as everyone knows, consists in "understanding" a text in one language, that is, in constructing there the corresponding semantic representations, and in "speaking" in another language, that is, in carrying out in the other language the operations leading from the semantic representation to the sound string. Ideally, the second phase should be strongly conditioned by the first, and we shall have to examine the points of correspondence.

It is useful to imagine the semantic structures as graphs (which have a priori no reason to be trees) whose nodes, or edges, are labeled with reference indices and with names of relations holding between certain of these indices. The speaker's "problem" is to represent these structures in the form of parenthesized strings, that is, to a first approximation, in the form of constituent structures. This problem has two aspects: on the one hand, the representation of the structure of the graph; on the other, the sound (or graphic) representation of the "substantive" elements (i.e. the relational labels of this graph). The first part is commonly called syntax.
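Returning to the metarule strategy above: the following Python sketch labels every substring of the input with context-free metarules, in the all-substrings, Cocke-style manner just described. The toy grammar, the labels and the chart representation are invented for illustration only.

from collections import defaultdict

METARULES = {                    # right-hand sides -> node labels (invented)
    ("THE",): {"DETERMINER"},
    ("DETERMINER", "MAN"): {"NP"},
    ("NP", "SPOKE"): {"SENTENCE"},
}

def label_spans(words):
    # chart[(i, j)] = labels attached to the node spanning words[i:j].
    # Every substring is analysed, as in Cocke's algorithm, even though
    # many of these analyses will prove abortive (hence the memory load).
    n = len(words)
    chart = defaultdict(set)
    for i, w in enumerate(words):
        chart[(i, i + 1)].add(w)                 # each word labels its node
        chart[(i, i + 1)] |= METARULES.get((w,), set())
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for a in sorted(chart[(i, k)]):
                    for b in sorted(chart[(k, j)]):
                        chart[(i, j)] |= METARULES.get((a, b), set())
    return chart

chart = label_spans(["THE", "MAN", "SPOKE"])
print(chart[(0, 2)])   # {'NP'}: hence the profile "NP SPOKE"
print(chart[(0, 3)])   # {'SENTENCE'}: a whole "tree" named by its top node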
In the terms used here, syntax consists in creating constituent structures to represent part of the configurations of the semantic graph. We shall picture this part of the language faculty as a collection of operational modules, each performing a single operation under the control of parameters describing the semantic structures to be transformed. [See HOF~N (1968) for an example of such an operation.]

Some time ago, in an article [HARRIS (1968)], we gave a formal description of z-words. We considered an element of this vocabulary of z-words to be a symbol for a semantic variable whose values were senses of English and French lexemes. In fact, we set as a condition for the creation of a z-word that there exist in each language at least one lexeme one of whose senses belonged to the range of variation of the variable to be represented by that z-word. The values may be lexical roots subject to declension and affixation, or complete surface forms, or deep morphemes subject to transformations of the grammar. Example:

English { (I) would (ask) } ~ French { (Je) voudrais (demander) }

All the elements of the English sense set are synonyms; the elements of the French set are synonyms too, and translate the English set. Thus a z-word was considered a label for a translation function whose arguments were a set of synonyms in English and one in French (a sketch of this view follows this passage). It is perhaps not entirely useless here to warn against presumptions of universality: so far our research has been strictly limited to English and French. The model can, however, in theory be extended to cover more than two languages and to build the complex expansions of what Andreyev has called "translation rings" [ANDREYEV (1965)].

Since the publication of that article, nothing has changed in the formal status of z-words; as concerns processing, however, many changes have taken place and are still taking place. To preserve our chances of meeting the deadlines of our contract, we have reduced the scope of our model. For the source language, we must prepare to accept as much of the English vocabulary as possible, if we do not want to restrict our system to a narrow class of technical texts. On the other hand, it will suffice for the moment to use in synthesis a much smaller portion of the French vocabulary. In the example above, this temporary restriction allows us to dispense with two of the three French realizations of "zrespectfully". We have moreover accepted a further hypothesis: that there exists a "basic" vocabulary of French, serving to express all common ideas, which need only be enriched with technical micro-glossaries to permit a translation of English texts; in other words, a vocabulary corresponding to Basic English. We are fortunate in possessing a good approximation of such a vocabulary in the two books of Le Français fondamental (F.f.) [GOUGENHEIM (n.d.)]. So far the fastest technique we have found for compiling the dictionary is "backward translation" from F.f. into English.
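A minimal sketch of the view of a z-word as a label for a translation function over two synonym sets. The class, the label "zask" and the data layout are assumptions made for illustration; only the English/French sense pair comes from the example above.

from dataclasses import dataclass

@dataclass(frozen=True)
class ZWord:
    label: str          # hypothetical pivot label; the paper's own is not given
    english: frozenset  # set of synonymous English senses
    french: frozenset   # set of synonymous French senses

Z_ASK = ZWord(
    label="zask",       # invented for the sketch
    english=frozenset({"(I) would (ask)"}),
    french=frozenset({"(Je) voudrais (demander)"}),
)

def render_in_french(zword, english_form):
    # The sets, not the individual strings, are the function's arguments:
    # any English member may be rendered by any French member.
    if english_form not in zword.english:
        raise KeyError(english_form)
    return sorted(zword.french)

print(render_in_french(Z_ASK, "(I) would (ask)"))
# ['(Je) voudrais (demander)']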
We have considered various ways of labeling the ZWORD function. Andreyev's semoglyphs were to be numeric, but we did not accept the idea of a pivot language that would not be directly readable by the linguists who use it. At the beginning we used French words, distinguished from the vocabulary of French proper by the prefix "z" (whence the term "z-word"). Recently we have adopted a labeling whose symbols are longer but more explicit. Having established an English "synonym set", we attach an English label to that set. This is easy, since we have a good English dictionary of synonyms that gives a suitable label for each collection of synonyms [LEWIS (1961)]. On the French side we have no collections of synonyms for the moment, but a single F.f. lexeme. The z-word is formed by composing the English label with the F.f. lexeme. For example, certain uses of "give up", "renounce", "surrender", "abandon", etc. are all covered by the label RELINQUISH in the synonym dictionary, and all translatable by 'abandonner', an F.f. lexeme. We therefore compose a z-word "zrelinquish/abandonner" to mark the 'translation variable' whose values are the words cited above. Later, other values, synonyms of "abandonner", would be introduced.

We are aware of a certain danger: by operating through "backward translation" from the F.f. lexemes, we could run into difficulties if F.f. already contains synonyms: a collection of English synonyms could find itself marked with several different z-words; this would create ambiguity in the pivot language, which we are most anxious to avoid (the sketch following this passage illustrates both the composition and this check). If such an ambiguity appears, we shall have to refine the notion of z-word in order to eliminate it. But we have already introduced refinements, since we keep in the pivot, besides the z-words, the elements studied below, namely: SEMANTIC PARAMETERS, STYLE, ATTITUDE, THESAURUS CATEGORY.

By the manner in which the z-words are produced, we obtain an operational definition of the cognitive value of the pivot lexemes. Each collection of lexeme senses constitutes what Sparck Jones, in her work on English synonyms, calls a "row"; we consider that we have extended her model to lexical translation or, as we say, to "interlingual synonymy" [SPARCK JONES (1965)].

1. Current processing is done sentence by sentence; the proposed system would take account of the fact that semantic analysis is possible only from an entire text, and not from isolated sentences.

2. Search in a semantically structured lexicon according to the principle of maximal correspondence. The proposed lexicon will assign to input strings sets of semantic and syntactic markers (these strings will not be limited to single words). The output of the lexicon for a given string would comprise the set of these markers. The outputs of the current lexicon consist of pivot-language words furnished with such markers [cf. section V]. At the sentence level, a semantically structured lexicon would allow us to eliminate those readings of the sentence which satisfy the rules of syntax but not those of semantics. At the level of relations between sentences, it would allow us to establish relations between the semantic domains of various parts of the text.

3. and 4., 14. and 15. Palliative programs. These programs make it possible to produce a result in cases where the system could not process a lexical or syntactic element.
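The following sketch illustrates, under assumed data structures, both the composition of a z-word from an English thesaurus label and an F.f. lexeme, and the kind of check that would detect the pivot ambiguity feared above (one English synonym set marked by several z-words). The tables and helper names are invented.

SYNONYM_SETS = {
    # English thesaurus label -> synonym collection ("row")
    "RELINQUISH": {"give up", "renounce", "surrender", "abandon"},
}

FF_TRANSLATIONS = {
    # F.f. lexeme -> English thesaurus label reached by backward translation
    "abandonner": "RELINQUISH",
}

def compose_zwords():
    # A z-word is the composition of the English label with the F.f. lexeme.
    zwords = {}
    for ff_lexeme, label in FF_TRANSLATIONS.items():
        zwords[f"z{label.lower()}/{ff_lexeme}"] = SYNONYM_SETS[label]
    return zwords

def pivot_ambiguities(zwords):
    # Report English synonym sets marked by more than one z-word.
    seen, clashes = {}, []
    for name, row in zwords.items():
        key = frozenset(row)
        if key in seen:
            clashes.append((seen[key], name))
        seen.setdefault(key, name)
    return clashes

zwords = compose_zwords()
print(list(zwords))                # ['zrelinquish/abandonner']
print(pivot_ambiguities(zwords))   # [] : no clash in this toy data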
The second part is the lexicon. It can be seen as a binary relation whose first arguments are bundles of semantic features and whose second arguments are phonological matrices (in the case of a system operating on written texts, the lexicon will be a relation between bundles of semantic features and character strings). The "lexicon" relation is not a function, in that a given bundle of semantic features may have a different phonic correspondent according to paralinguistic circumstances. It is interesting to introduce at this point parameters of "attitude", "style", etc. [see section V]. Let us mention in passing that it is tempting (and after all reasonable) to make the following hypothesis: the form of the various syntactic modules and the set of all the semantic features seem to be universal; by contrast, the values of the control parameters of the syntactic modules, as well as the "lexicon" relation with all its parameters, seem to be acquired by education in a given society.

A translation will therefore consist of twice two operations (a toy sketch follows this passage). First, the phonic, or written, forms of the source language are identified and mapped onto bundles of semantic features σ by the relation "lexicon 1", while the constituent structures are analyzed and mapped onto a graph labeled by the σ's through the operation of the syntactic modules, their parameters having the source-language values. Second, the graphs thus obtained are processed by the same syntactic modules, whose parameters have taken the values corresponding to the target language, while the labels σ are mapped onto character strings or phonological matrices of the target language by the relation "lexicon 2". This picture motivates (it is not possible to speak of a true "justification") the approach we have adopted to the automatic treatment of the translation problem, as presented in the following sections.

Let us note an important fact. The syntactic or lexical parameters acquired by education in a given society are highly variable within a single language ("dialectal" variations, "stylistic" variations, etc.). The translator's attitude and sensitivity toward these variations can differ enormously. Ideally, the translation would respect all the nuances. In the case of machine translation, however, this would imply a "cultural analysis" that remains to be done. Here is one possible decision: one chooses to accept as many structures as possible in the source language, that is, one allows the parameters the widest variation compatible with intelligibility (widening the range of variation of a control parameter naturally diminishes the information carried by the corresponding operation). One will nevertheless try to correlate the values of the parameters, above all in the lexicon, with levels of style, etc. On the target-language side, one will be content for a time to transmit the necessary information, that is, one will strongly restrict the variation of the parameters around the value corresponding approximately to the "standard" of the language. (One may even go beyond this by restricting each structure to a single expression in the target language.)
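A deliberately toy sketch of the "twice two operations" picture above: a first lexicon relating written forms to semantic-feature bundles, and a second lexicon that is a relation rather than a function. The feature names, the reduction of the "graph" to an ordered list, and the min() tie-break (one expression per structure, as suggested above) are all assumptions of the sketch.

LEXICON_EN = {  # written form -> semantic-feature bundle (sigma)
    "man": frozenset({"HUMAN", "ADULT", "MALE"}),
    "spoke": frozenset({"SPEAK", "PAST"}),
}
LEXICON_FR = {  # sigma -> set of written forms: a relation, not a function
    frozenset({"HUMAN", "ADULT", "MALE"}): {"homme"},
    frozenset({"SPEAK", "PAST"}): {"parla", "a parlé"},
}

def analyse(words, lexicon, params):
    # Operation pair 1: "lexicon 1" maps forms onto feature bundles; the
    # syntax modules (elided here; `params` stands in for their control
    # values) would map the constituent structure onto a labelled graph,
    # reduced in this sketch to an ordered list of bundles.
    return [lexicon[w] for w in words]

def generate(graph, lexicon, params):
    # Operation pair 2: the same modules with target-language values, then
    # "lexicon 2"; one expression per structure via a fixed tie-break.
    return [min(lexicon[bundle]) for bundle in graph]

graph = analyse(["man", "spoke"], LEXICON_EN, params="source values")
print(generate(graph, LEXICON_FR, params="target values"))
# ['homme', 'a parlé']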
The pivot language is a formal language suited to defining semantic relations. The elements composing it are words, called z-words in our system, which correspond univocally to semantic configurations of a particular type (restricted in our current system to the English-French pair); these words are therefore not ambiguous and have no synonyms. The canonical order in which these z-words are arranged indicates the semantic relations they hold among themselves.

With respect to the translation model, a string of the pivot language provides only one semantic representation for each string of lexemes of the source language and for one or more strings of lexemes in the target language. As used in our system, the pivot language provides strings which constitute the output of the analyzer of English and become at the same time the input for the generator of French. More explicitly, the analyzer transforms a sequence of English words and punctuation marks into as many canonical strings of the pivot language as there are different senses attributed to that sequence. The generator then processes this canonical string and transforms it in turn into one or more sequences of French lexemes, at least one of which must correspond to the semantic configuration expressed in the pivot-language string.

The z-word to which a defined set of lexemes corresponds belongs to the lexicon. One obtains as many z-words for a lexeme as it has different, mutually ambiguous senses; conversely, a single z-word covers a whole class of synonymous lexemes. These z-words do not constitute French translations of English words but rather abstract entities covering one or more particular semantic configurations, which will take on different graphemic or phonemic configurations in different natural languages.

The relations held by the elements of the pivot language are expressed by dependency structures. These were first conceived by Tesnière, and the use we make of them is inspired by the studies carried out at Grenoble; however, the form in which we use them is conditioned by our particular needs. The following diagram illustrates the dependency structure characteristic of our model:

[Diagram: a governor dominating the governed chains X1, X2, X3, ..., Xn.]

The adaptation of the dependency structures takes place naturally as the grammar becomes more complex or as the formalism receives modifications. For example, we soon want to introduce sentence specifiers such as "interrogative", "imperative", etc., and to include in this class of operators those of tense, negation, the modal and the adverb.

The output strings of this dependency model currently correspond to a linear representation of the elements. The governor precedes the governed chain, which is itself placed between parentheses. In cases where the semantic function revealed in each of the chains governed by a single governor is not the same, as happens for the first, second and third actants, labels are added to indicate the type of dependency in each case* (the sketch following this passage serializes such a structure). The label "epi-" is used to mark chains that modify the chain that governs them; for example, relative clauses are preceded by this label because they modify the nominal group that governs them.

* One could make do with the order of the chains alone.
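A minimal sketch of the linear pivot representation just described: the governor precedes each governed chain, which is parenthesized, preceded by a dependency label where order alone would not suffice, with "epi-" for modifiers. The tree encoding and the sample z-words are invented for illustration.

def linearize(node):
    # node = (governor, [(label_or_None, subtree), ...]); the governor
    # precedes each governed chain, which is placed between parentheses.
    governor, dependents = node
    parts = [governor]
    for label, subtree in dependents:
        inner = linearize(subtree)
        parts.append(f"({label} {inner})" if label else f"({inner})")
    return " ".join(parts)

tree = ("zspeak/parler", [
    ("arg1", ("zman/homme", [
        ("epi-", ("zrelative-clause", [])),   # modifier of its governor
    ])),
])
print(linearize(tree))
# zspeak/parler (arg1 zman/homme (epi- zrelative-clause))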
Up to the writing of this article, our lexicographic work has been concerned above all with the "conversion" of English words into lexemes of our intermediate (pivot) language. Figure 1 illustrates a dictionary entry for an English verb. Line 2 of the example presents what we call the primary entry: it contains all the information noted by our lexicographer, including a citation. The citation is enclosed between asterisks and treated as a "comment" by the processing programs. Above each symbol of the primary entry appears the lexical variable of which the symbol is a value. In the W-system we use [see section II], these variables are defined in the "metarules", either by lists of values or by a schema of the form

ZWORD => ZX, where X represents any value.

The dictionary entries proper are written in the general format of the "pseudorules" of the W-grammar, and have the same status as the other rules of that type. Consequently they can be submitted directly to the interpretation rules of the grammar, and it matters little what format is chosen for the primary entries, as long as they satisfy the general format of W-grammars: the rules needed to transcribe them into a format compatible with the "syntactic" part of the grammar can easily be written. One can thus introduce a great deal of descriptive detail into the dictionary entries even if we have no use for it at the moment. Unfortunately, every dictionary-processing rule enlarges the grammar and consequently the processing time, because the program applying the W-grammar tries every rule at every phase of the processing. It is evident that we shall soon have to write a special dictionary-lookup program, even if for the moment we can get by with chaining two W-grammars, the first of which consists only of rules of this level.

Line 3 in the figure is the rule that rewrites the primary entry into the format at present required by our grammar. Line 4 is the result of applying this rule. All the values enclosed between square brackets are "persistent", that is, they are preserved in the pivot; the lexemes of the pivot are thus complex symbols delimited by square brackets. Line 5 illustrates a "morphological" rule, which makes the tense of the verb explicit and puts it into the form ROOT/TENSE corresponding to line 4. This rule is likewise part of the "pseudorules" and has no special status. (A toy rendering of this little pipeline follows this passage.)
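A toy rendering, under invented field names and an invented tense table, of the dictionary pipeline described above: the citation between asterisks is stripped as a comment, the remaining values (all treated as persistent in this toy) are bracketed into a pivot lexeme, and a "morphological" rule yields the ROOT/TENSE form. The real entries are W-grammar pseudorules, not Python.

import re

def strip_citation(entry):
    # Citations between asterisks are comments for the programs.
    return re.sub(r"\*[^*]*\*", "", entry).split()

def to_pivot_lexeme(values):
    # Persistent values are kept, each delimited by square brackets.
    return "".join(f"[{v}]" for v in values)

IRREGULAR_PAST = {"spoke": ("speak", "PAST")}   # toy morphology table

def morphological_rule(word):
    root, tense = IRREGULAR_PAST.get(word, (word, "PRES"))
    return f"{root}/{tense}"

primary = "zspeak/parler STYLE:neutral *He spoke at length*"
print(to_pivot_lexeme(strip_citation(primary)))
# [zspeak/parler][STYLE:neutral]
print(morphological_rule("spoke"))   # speak/PAST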
We shall now examine the linguistic significance of the various variables.

SEMANTIC PARAMETERS. Some members of the research team would like to adopt a more analytic approach in the definition of the pivot vocabulary [cf. section III.2], and this group of parameters can be considered a step in that direction. But the idea of this type of parameter, which Melchuk has studied in detail [MELCHUK (1967)], has so far been adopted here only with restrictions. Melchuk hypothesizes that these parameters are universals; we introduce them only when they are overtly justified by synonyms or translations. Thus, in the following examples: ("causative") (i) Eng. inform / Fr. faire savoir, (ii) Eng. inform / Eng. let know; ("inchoative") (iii) Eng. go to sleep / Fr. s'endormir, the underlined elements are an acceptable justification for the presence of the parameter in question.

STYLE and ATTITUDE. These are, respectively, the "level of style" and the "judgment borne by the speaker on the cognitive information he transmits". We think that these two elements are part of the total "meaning" of a lexeme, since they are reflected in the speaker's choice among lexemes carrying the same cognitive value. We are led to this extension of "meaning" as soon as we want good translations. We recognize, for example, the difference between "in future" (colloquial) and "henceforth" (rhetorical); or between "leave one's country" (neutral attitude) and "abandon one's country" (attitude of condemnation).

One may wonder whether complete synonyms exist. Thus, when grouping synonyms for translation, it is important to be able to describe on the one hand the "part of the meaning" that creates the synonymy, and on the other hand the "part of the meaning" that distinguishes partial synonyms. Synonymy, we think, is founded on the cognitive content of the lexemes, whereas STYLE, ATTITUDE and THESAURUS CATEGORY (see below) are differentiation parameters which can render a synonymy partial without destroying it entirely (a sketch of this decomposition follows this passage).
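A sketch of the decomposition just proposed: lexeme usages sharing a z-word (the cognitive value) but differing in STYLE, ATTITUDE or THESAURUS CATEGORY remain partial synonyms. The record type and the sample values are assumptions for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class PivotUsage:
    zword: str       # cognitive value shared by all realizations
    style: str       # level of style, e.g. colloquial vs rhetorical
    attitude: str    # speaker's judgment on the information transmitted
    thesaurus: str   # Roget-style category

in_future  = PivotUsage("zhenceforth/désormais", "colloquial", "neutral", "TIME")
henceforth = PivotUsage("zhenceforth/désormais", "rhetorical", "neutral", "TIME")

def partially_synonymous(a, b):
    # Same cognitive content, but some differentiation parameter differs:
    # the synonymy is rendered partial without being destroyed.
    same_cognitive = a.zword == b.zword
    same_params = (a.style, a.attitude, a.thesaurus) == \
                  (b.style, b.attitude, b.thesaurus)
    return same_cognitive and not same_params

print(partially_synonymous(in_future, henceforth))   # True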
THESAURUS CATEGORY. We must situate each usage of a lexeme within a structured thesaurus. Like others before us in machine translation, we turned to Roget, or at least to a modern collation of his magnum opus [MAWSON (1946)]. Its hierarchy dates from the nineteenth century and is, at least in part, extra-linguistic, reflecting a given era and culture. It is nevertheless useful for a partial description of the contexts that determine the readings of polysemes, and even for the recognition of the principal elements ("actants") in our dependency grammar. The theoretically justified distinction between "linguistic factors" and "cultural factors" in the production of language is too vague to be marked in practice. Unfortunately, there is no French counterpart to Roget, and we have neither the resources nor the time to compile a new thesaurus.

These details do not appear on line 4, that is, they are not "persistent" in the pivot. The lexemes of the pivot carry no syntactic indications, since a concept expressed in English by a verb, for example, may find itself expressed by a noun in French, and so on. On the other hand, the lexical realizations in English or French must be classified into parts of speech for the syntax to be processed. Moreover, we must attach to each lexeme the description of the structures it can govern. We propose the term REGIME to denote these structures. So far, we have worked out an adequate description of regimes only for verbs and their "actants" (following Tesnière and the CETA, with modifications). But the lexicographic groundwork allows the future elaboration of the description of regimes for the other parts of speech.

We have found that syntactic rules can often be applied to thesaurus categories at a high level in Roget's hierarchy. For example, there are sentence forms characteristic of verbs of "human communication". These findings are important for the economy of grammars, since they tend to confirm the hypothesis that syntax is not independent of semantics.

The description of a regime has a double purpose: besides its usefulness in the analysis of the sentence, it constitutes a generalized context specifying a certain usage of a lexeme. We therefore strive to generalize in this way all the particular contexts, that is, the citations, that our base dictionary [HARRAPS (1967)] offers us. We do not have enough space here to study the detail of the sub-parameters governed by REGIME. In conclusion, let us remark that it is all very well to write a grammar making explicit the relations between the concepts of a sentence or a text, but that only half of the machine-translation work is done as long as the concepts themselves are not formally defined.

The lexical-lookup phase supplies to the input of the analysis grammar a sequence of pivot-language words, each furnished with one or more grammatical categories. The output of the analysis grammar is a sequence, endowed with a hierarchical order, of pivot-language words representing the semantic dependencies of the original sentence. There is a one-to-one correspondence between semantic configurations and strings of the pivot language. Consequently, paraphrases will be represented by one and the same pivot string, and n-ways-ambiguous sentences will have n representations in the pivot language.

The rules of the grammar are of two fundamental types: 1. rules that assign a higher grammatical category to a sequence of grammatical categories; 2. rules that permute, erase or add elements in a sequence. In the W formalism these two types of rules appear in two separate parts of the grammar. The rules of type 1 carry out an immediate-constituent analysis of the input string (or of rearrangements of it effected by rules of type 2). For a pivot string to result from this processing, it is necessary that the category "complete sentence" be assigned to the input string as a whole. The rules of type 1 "try" to assign a category to every substring of the input string, but only those category assignments leading to the assignment of the category "complete sentence" to the input string will be kept; compare the differing attachments in "The witness to the accident that occurred at the corner" and "The witness to the accident that spoke to the reporter".

The rules of type 2 that eliminate paraphrastic variants do so by rewriting the variants into one and the same string. Thus the variants

The man to whom I gave it
The man whom I gave it to
The man who I gave it to
The man that I gave it to

are all rewritten into the same canonical form and treated from that point on in the same way (a sketch of such a rule follows this passage). In this manner, syntactic variants without semantic significance are not preserved as far as the pivot-language string.
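A minimal sketch of a type-2 rule rewriting the four paraphrastic variants above into one canonical string. The regular-expression patterns are invented and cover only this example; the real rules are written in the W formalism.

import re

VARIANT_PATTERNS = [
    r"^the man to whom (.+)$",   # already canonical
    r"^the man whom (.+) to$",
    r"^the man who (.+) to$",
    r"^the man that (.+) to$",
]

def canonicalize(phrase):
    # Rewrite every variant into the single canonical form, so that the
    # variation never survives into the pivot string.
    for pattern in VARIANT_PATTERNS:
        match = re.match(pattern, phrase)
        if match:
            return f"the man to whom {match.group(1)}"
    return phrase

variants = [
    "the man to whom I gave it",
    "the man whom I gave it to",
    "the man who I gave it to",
    "the man that I gave it to",
]
print({canonicalize(v) for v in variants})
# {'the man to whom I gave it'} : one canonical string for all four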
The aim of the generation of French is to obtain an adequate expression of the semantic structures coded in the strings of the pivot language, one as close as possible to standard (technical) French. Obviously, stylistic refinements are not yet, and will not for a long time be, on the agenda. Since we are essentially seeking no more than a correct form of expression, the task can be divided into three parts: French carries obligatory agreement marks and certain constraints on the order of elements; in addition, the correct forms of the lexemes, furnished with their grammatical marks, must be generated.

We have accordingly had to divide the generation of French into four phases. The first (I) detaches from the semantic structure coded in the pivot strings the abstract elements coding the lexemes, and replaces them with French lexemes accompanied by their inherent grammatical markers (gender of the noun, semantic classes and prepositions governed by the verb, etc.). The second (II) carries out a recomposition of the semantic structure and a copying of the markers introduced in phase I into all the positions where they are required by the agreement rules of French. The third (III) gives the elements the surface order of French. The importance of this phase is reduced to a large extent by the decision to neglect, for the present, all sorts of secondary details. It is on this phase that our future efforts will have to bear if we want to improve the "stylistic" fidelity of the translation. We foresee the necessity of a fourth phase, separate from the third. This phase IV would properly be called "morphology". It corresponds roughly to the phonological part of a generative grammar, and is for the moment represented only by a few rules placed "as an appendix" to phase III. Work carried out a few years ago by A. Dugas will serve as the basis for a relatively simple treatment of morphology.

In the light of past experience, it appeared that the mathematical model of translation we were using (W-grammars) presented certain gaps: 1. the difficulty of splitting the analysis or generation phase into several phases; 2. a lack of flexibility in manipulating certain information structured in tree form, in particular during the generation phase. A. Colmerauer has therefore begun the study of a new type of grammar better adapted to the goal we propose to reach. These grammars (Q-systems) will consist essentially of general rewriting rules that can be applied not only to strings but also to trees. A program is under development which, given a text or a piece of information structured as a tree, will apply to it a certain number of transformations described by a grammar and obtain a new text or a new piece of structured information. By using this same program several times with different grammars, one will then be able to chain several phases of analysis of English and several phases of generation of French (a toy sketch of this single-engine, interchangeable-grammar idea follows this passage). It should be noted that, contrary to the W-grammars, the same program will be used for analysis and for generation. This will give more possibilities to the linguists writing the grammars: indeed, we have noticed that during the analysis phase it was sometimes necessary to use certain processes proper to the generation phase and, conversely, during the generation phase, to re-analyze certain parts in order to verify the grammaticality of the French generated. We have already begun writing a part of this new system, which has been operational since June 1969. From that date on, we have thus been able to begin the study of machine translation on a much larger scale. The W-grammars already written will be very easily reusable, the new formalism being above all an extension of what we have done up to now.
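A toy sketch of the Q-system idea: one general rewriting program driven by interchangeable grammars, chained over phases, the same engine serving analysis and generation. The rules here rewrite flat token sequences only, whereas the proposal also rewrites trees; the grammars are invented.

def rewrite(tokens, grammar):
    # One general rewriting program: apply (lhs -> rhs) rules left to right
    # until no rule applies. The same engine runs every phase; only the
    # grammar changes.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in grammar:
            for i in range(len(tokens) - len(lhs) + 1):
                if tokens[i:i + len(lhs)] == list(lhs):
                    tokens = tokens[:i] + list(rhs) + tokens[i + len(lhs):]
                    changed = True
                    break
            if changed:
                break
    return tokens

ANALYSIS = [(("the", "man"), ("NP",)), (("NP", "spoke"), ("S",))]
GENERATION = [(("S",), ("l'homme", "parla"))]

pivot = rewrite(["the", "man", "spoke"], ANALYSIS)   # ['S']
print(rewrite(pivot, GENERATION))                    # ["l'homme", 'parla']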
The translation procedures described in the preceding sections carry a certain number of limitations. These have led us to examine what extensions of our system would be necessary. Figure 5 indicates the types of processing whose necessity we foresee, as well as the organization of the processing. For lack of space, we shall not comment on the parts that are already under development and need only be adapted to the system, for example 9: Syntactic analysis program. We shall therefore speak only of the sections that are still at the theoretical stage, but for which we can glimpse a possible realization, for example 5: Text analysis.

5. Text analysis program. This program will comprise rules concerning the relations between the sentences of a text. These will ensure the coherence of the text as a whole. Among the specific tasks this program could take on, let us cite: 1. clarification of pronominal references; 2. disambiguation of the elements of a sentence on the basis of other elements of the text; 3. restoration of elided portions of the text (for example, restitution of the deleted agent of a passive). Two different levels of rules can be foreseen: 1. those that act on semantic features to eliminate ambiguities or to establish inclusion relations; 2. those that deal with the syntactic interdependencies between the sentences of a single text.

6., 10. and 11. Recycling options. It is to be expected that the complexity of the relations between lexical choice, sentence structures and text structures will sometimes require cyclic processing.

17. and 18. Reduction of the pivot strings. This program would choose among the various possible senses that had not been eliminated by the preceding programs, in order to avoid the production of multiple translations (a placeholder sketch follows).
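A placeholder sketch of the proposed reduction step. The text leaves the selection criterion open, so the scoring function below (shortest reading) is purely an assumption, and the sample readings are invented.

def reduce_pivot(readings, score=len):
    # Keep exactly one reading. `len` is a stand-in criterion only; the
    # selection principle itself is not specified in the text.
    return min(readings, key=score)

readings = [
    "zsee/voir (arg1 zman/homme) (arg2 ztelescope/télescope)",
    "zsee/voir (arg1 zman/homme (epi- ztelescope/télescope))",
]
print(reduce_pivot(readings))
# keeps one of the two attachment readings (the first, on a tie)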
null
null
null
null
Main paper: la facult~ de langage: I1 est utile dVimaginer les structures s~mantiques comme des graphes (qui nVont a priori aueune raison dV~tre des arborescenees) dont les noeuds -ou les ar~tes -sont ~tiquet~s par des indices de raf~rence et des noms de relations existant entre certains de ees indices. Le "probl~me" du locuteur est de representer ces structures sous forme de chaSnes parenth~s~es, c'est-~-dire, en premiere approximation sous forme de structures de constituants. Ce probl~me a deux aspects: dWune part, la representation de la structure du graphe d'autre part la representation sonore (ou graphique) des ~l~ments "substantifs" (i.e. des ~tiquettes relationnelles de ee graphe).La premiere pattie est commun~ment appel~e syntaxe. Dans les termes employ~s ici, elle consiste ~ crier des structures de eonstituants pour representer une partie des configurations du graphe s~mantique. Nous nous repr~senterons cette partie de la fa-eult~ de langage eomme une collection de modules op~ratoires, ehacurt effectuant une seule operation sous le contr6le de param~tres d~erivant les structures s~mantiques ~ transformer. [Voir HOF~N 2. 3.Nous avons donn~ il y a quelque temps dans un article [Harris (1968) ] une description formelle des z-mots. Nous consid~rions qu'un ~l~ment de ce vocabulaire de z-mots ~tait un symbole pour une variable s~mantique dont les valeurs ~taient des sens de lex~mes et franqais. En fait, nous posions g ~ creation d'un z-mot la condition qu'il existait dans chaque langue au moins un lex~me dont un sens appartiendrait au domaine de variation de la variable ~ repr6senter par ce z-mot. Les valeurs peuvent 8tre des racines lexicales sujettes ~ d~clinaison et affixation, ou des formes compl~tes de surface, ou des morphemes de pro~ fondeur sujets ~ des transformations de la grammaire. Exemple:Pivot Franqais { (I) would (ask) ~ f(Je) voudrais (demander Tousles ~l~ments de lVensemble de sens anglais sont synonymes; les ~l~mentsde l'ensemble fran~ais sont synonymes aussi et traduisent lVensemble anglais. Ainsi un z-mot-~tait consid~r~ comme une ~tiquette pour une fonction de traduction dent les arguments ~taient un ensemble de synonymes en anglais et un en fran~ais. I1 n'est peut 8tre pas compl~tement inutile ici de mettre en garde contre des pr~somptions dWuniversalit~: jusqu'ici notre recherche a ~t~ strictement limit~e ~ 11anglais et au fran~ais. Toutefois, le module pent th~oriquement ~tre ~tendu pour comprendre plus de deux langues et construire les expansions complexes de ce qulAndreyev a appel~ "anneaux de traduction" [ANDREYEV (1965) ].Depuis la publication de cet article, rien n'a chang~ dans le statut formel des z-mots; en ce qui concerne le traitement, tou~ tefois beaucoup de changements ont eu et ont encore lieu. Pour gar~ dernos chances des respecter les d6lais de notre contrat, nous avons r~duit la pottle de notre module. Pour le langage source, il nous faut nous preparer ~ accepter autant que possible du vocabulaire anglais -si nous ne voulons pas restreindre notre syst~me ~ une classe ~troite de textes techniques -Par contre il nous suffira pour le moment d'utiliser g la synth~se une portion bien plus r6dui, te du vocabulaire fran~ais. Dans l'exemple ci-dessus, cette restriction temporaire nous permet de nous dispenser de deux des trois r~alisations franqaises de "zrespectfully ~. 
Nous avons de plus ac~ cept~ une hypoth~se suppl~mentaire: qu'il existe un vocabulaire "de base" du fran~ais, servant ~ exprimer routes idles communes, et qu'il suffit d'enrichir de micro-glossaires techniques pour per~ mettre une traduction des textes anglais. Autrement dit, un vocabu~ laire correspondant au Basic English. Nous avons la chance de pos~ s~der une bonne approximation d'un tel vocabulaire dans les deux livres Le Franqais fondamental (F.f.) [GOUGENHEIM (s.d.)], Jusqu'ici [a technique la plus rapide que nous, ayons trouv6e pour la compilation du dictionnaire est la "traduetion ~ rebours" du F.f. vers l'anglais.Nous avons consid~r~ diverses mani~res dt~tiqueter la fonction ZWORD. Les semoglyphes d'Andreyev devaient 6tre num~riques. Mais nous n'avons pas accept6 l'id~e d'un langage pivot qui ne serait pas directement lisible par les linguistes qui l'utilisent.Au d6but, nous utilisions des mots fran~ais que nous distinguions du vocabulaire du fran~ais proprement dit par le pr~fixe "z" (D'o~ le terme "z-mot"). R~cemment, nous avons adopt6 un ~tiquetage dont les symboles sont plus longs, mais plus explicites. Ayant 6tabli un "ensemble de synonymes" anglais, nous attachons une ~tiquette anglaise cet ensemble. Cela est ais~, puisque nous avons un bon dictionnaire anglais des synonymes qui donne une ~tiquette convenable pour chaque collection de synonymes [LEWIS (1981) ]. Du cSt~ fran~ais nous n'avons pas de collections de synonymes pour le moment, mais un seul lex~me F.f..Le z-mot est compos~ par composition de l'6tiquette anglaise avec le lex~me F.f. Par exemple, certains usages de "give up", "renounce", "surrender", "abandon", etc., sont tous couverts par l'~tiquette RELINQUISH dans le dictionnaire de synonymes, et tous traduisibles par 'abandonner', lex~me P.f. . Nous composons donc un z-mot "zrelinquish/abandonner", pour marquer la 'variable de traduction' dont les valeurs sont les mots cit6s ci-dessus. Par la suite d'autres valeurs, synonymes de "abandonner" seraient introduites.Nous avons conscience d'un certain danger: en op6rant par "traduction ~ rebours" ~ partir des lex~mes F.f. nous pourrions avoir des difficult~s si le F.f. contient d6j~ des synonymes: une collection de synonymes anglais pourrait se trouver marquee de plusieurs z-mots diff6rents; cela cr6erait de l'ambigu~t6 dans le langage pivot, ce que nous d6sirons vivement 6viter. Si une telle ambigu~-t6 apparaSt, nous aurons ~ raffiner la notion de z-mot pour l'61iminer. Mais nous avons d6j~ introduit des raffinements, puisque nous conservons dans le pivot, outre les z-mots, les 616ments 6tudi6s plus bas, soit: SEMANTIC PARAMETERS, STYLE~ ATTITUDE, THESAURUS CATEGORY.Par la mani~re dont les z-mots sont produits, nous obtenons une d~finition op~rationnelle de la valeur cognitive des lex~mes du pivot. Chaque collection de sens de lex~mes constitue ce que Sparck Jones, dans son ouvrage sur les synonymes anglais, appelle un "row"; nous consid~rons que nous avons ~tendu son module ~ la traduction lexicale ou, disons-nous, ~ la "synonymic interlinguale" [SPARCK JONES (1965) ].Le traitementactuel se fair: phrase par phrase; le systame pr0pes~ tiendrait compte de ce que l'analyse s~mantique n'est possible qu'~ partir d'un texte entier et non de phrases isol~es.2. Recherche daus un lexique s~mantiquement structur~ selon le principe de correspondance maximale Le lexique propos~ assignera aux cha~nes dVentr~e des ensembles de marqueurs s~mantiques et syntaxiques (ces cha~nes ne seront pas limit~es ~ des mots isol~s). 
La sortie du lexique, pour une chaSne donn~e comporterait l'ensemble de ces marqueurs. Les sorties du lexique actuel sont constitutes de mots du langage pivot assortis de tels marqueurs [cf. section V].-Au niveau de la phrase, un lexique s~mantiquement structur~ nous permettrait d'~liminer les sens de la phrase satisfaisant aux r~gles de la syntaxe, mais non celles de la s~mantique. Au niveau des relations entre phrases, il nous permettrait d'~tablir des relations entre les domaines s~man~ tiques de diverses parties du texte.3. et 4., 14. et 15. Programmes palliatifs Ces programmes permettent de produire un r~sultat dans les cas o~ le syst~me ne pourrait pas traiter un ~l~ment lexical ou syntaxique. semantic parameters: Certains membres de l'6quipe de recherche aimeraient adopter une approche plus analytique dans la d~finition du voca* bulaire pivot [cf. section IIi.2] et ce groupe de param~treS peut ~tre consid6r6 comme un pas dans cette'direction. Mais l'id~e de ce type deparam~tre, que 'Melchuk a 6tudi6e en d~tail [MELCHUK (1967) .] n'a 6t~ jusqu'ici adopt6e chez nous qu'avec des restrictions. Melchuk fair l'hypoth~se que ces param~tres sont des universaux: nous ne les introduisons que lorsqu'ils sont justifies ouvertement par des synonymes ou des t raductions. Ainsi dans les exemp-les suivants: .("causatif") (i) Eng. inform --Fr. faire savoir (ii)Eng, inform --Eng. le__t_tknow; ("inchoatif")(iii)Eng. o~_o_t_osleep --Fr. s'endormir, les 616ments soulign6s sont une justification acceptable de la pr6sence'du param~tre en question.II s'agit-respectivement de "niveau de style" et de "jugement port6 par le locuteur sur l'information cognitive qu'il transmet". Nous pensons que ces deux 616ments font pattie de la "signification" totale d'un lex~me puisqu'elles sort refl6t6es dans le choix du locuteur parmi des lex~mes comportant la m~me valeur cognitive. Nous sommes conduits ~ cette extension de "signification" d~s que nous voulons de bonnes traductions. Nous reconnaissons par exemple la diff6rence entre "in future" (familier) et "henceforth" (rh~torique); ou entre "leave one's country" (attitude neutre) et "abandon one's country" (attitude de condamnation).On peut se demander s'il existe des synonymes templets. Ainsi, lorsque l'on groupe des synonymes pour la traduction il est important de pouvoir d6crire d'une part la "pattie du sens" qui cr6e synonymie, et d'autre part la "pattie de sens" qui distingue des synonymes partiels. La synonymie, pensons-nous, est fond6e sur le contenu cognitif des lex~mes, tandis que STYLE, ATTITUDE et THESAURUS CATEGORY (voir ci-dessous) sont des param~tres de diff6rentiation qui peuvent rendre une synonymic partielle sans la d6truire enti~rement. thesaurus category: I1 nous faut situer chaque usage d'un lex~me dans un thesaurus structur6. Comme d'autres avant nous en T.A., nous nous sommes adress6s ~ Roget, ou du moins ~ une collation moderne de son magnum opus [MAWSON (1946) ]. Sa hi6rarchie date du 19 e si~cle, et au moins en pattie est extra-linguistique et refl~te une 6poque et une culture donn6es. Elle est toutefois utile pour une description partielle des contextes d6terminants pour les polys~mes, et m~me pour la reconnaissance des 616ments principaux ("actants") dans notre grammaire de d6pendance. 
La distinction th6orique jus-tifi6e entre "facteurs linguistiques" et "facteurs culturels" dans la production du langage est trop vague pour 8tre marqu6e en pratique.Malheureusement, il nTy a pas de correspondant fran~ais au Roget, et nous n'avons pas les ressources et le temps pour compilerun nouveau thesaurus.Ces d6tails n'apparaissent pas en ligne 4, c'est-~-dire qu'ils ne sont pas "persistants" dans le pivot. Les lex~mes du pivot ne comportent pas d'indications syntaxiques, car un concept exprim6 en anglais par un verbe, par exemple, peut se trouver ex-prim~ par un nom en fran~ais, etc. Mais d'autre part, les r6alisations lexicales en anglais ou fran~ais doivent ~tre class6es en par~ ties du discours pour que la syntaxe puisse 8tre trait6e. De plus, il nous faut attacher ~ chaque lex~me la description des structures qu'il peut gouverner. Nous Voposons le terme REGIME pour d6noter ces structures. Jusqu'ici, nous n'avons 6labor6 la description ad6quate des r6gimes que pour les verbes et leurs "actants" (selgn Tesni~re et le CETA, avec des modifications). Mais la partie lexicographique permet l'61aboration future de la description des r6gimes pour les autres parties du discours.Nous avons trouv6 que les r~gles syntaxiques peuvent souvent s'appliquer ~ des cat6gories de thesaurus d'un niveau 61ev6 clans la hidrarchie de Roget. Par exemple, il y a des formes de phrases caract6ristiques des verbes de "communication humaine". Ces trouvailles sont importantes pour l'6conomie des grammaires, puisqu'elles tendent ~ confirmer l'hypoth~se que la syntaxe n'est pas ind6pendante de la s6mantique.La description d'un r~gime a un but double: en plus de son utilit~ dans l'analyse de la phrase, elle constitue un contexte g~n~ralis~ sp~cifiant un certain usage d'un lex~me. Nous nous ef-for~ons donc de g~n~raliser de la sorte tousles contextes particuliers -c'est-~-dire les citations -que notre dictionnaire de base [HARRAPS (1967) ], nous offre. Nous n'avons pas assez de place ici pour ~tudier le d~tail des sous-param~tres commandos par REGIME.En conclusion, remarquons qu'il est tr~s bien d'~crire une grammaire explicitant les relations entre les concepts d'une phrase ou d'un texte, mais que la moiti~ seulement du travail de T.A. est faite rant que les concepts eux-mSmes ne sont pas formellement d~finis.La phase de consultation lexicale fournit ~ l'entr~e de la grammaire d'analyse une s6quence de mots du langage pivot assortis chacun d'une ou plusieurs categories gr~aticales. La sortie de la grammaire d'analyse est une s~quenee -munie d'un ordre hi~rarchique -de mots du langage pivot qui repr6sente les d6pendances s~mantiques de la phrase d'origine. Ii y a une eorrespondance biunivoque entre les configurations s~mantiques et les chaTnes du langage pivot. Par consequent les paraphrases seront repr~sent~es par une mSme chaTne du langage pivot, et les phrases n-ambigu~t~s auront n representations dans le fangage pivot.Les r~gles de la grammaire sont de deux types fondamentaux: I. R~gles qui assignent une cat~gorie grammaticale sup6rieure ~ une s6quence de categories grammaticales.2. R~gles qui permettent, effacent ou ajoutent des ~16ments dans une s~quence.Dans le formalisme Wces deux types de r~gles apparaissent dans deux parties s~par~es de la grammaire,Les r~gles du type 1 effectuent une analyse en constituants i~diats de la cha~ne d'entr~e (ou de r~arrangements de celle-ci effec-tu~s par des r~gles du type 2.) 
Pour qu'une chaSne pivot r~sulte de ce traitement, il est n~cessaire que la cat~gorie "phrase compl~te" soit attribute ~ la chaSne d'entr~e dans son ensemble. Les r~gles de type 1 "essaient" d'assigner une cat6gorie ~ chaque sous chaSne de la cha~ne d'entr~e, mais seules seront conserv~es les attributions de categories eonduisant ~ 1'attribution de la cat6gorie "phrase complete" ~ la chaSne d'entr~e. The witness to the accident that occurred at the corner et The witness to the accident that spoke to thereporter s Les r~gles du type 2 qui 61iminent les variantes paraphrastiques le font par r~criture des variantes en une m6me cha~ne. Ainsi, les variantes:The man to whom I gave itThe man whom I gave it toThe man who I gave it to The man that I gave it to sont toutes r~crites dans la mSme forme canonique, et trait~es ~ partit de ce moment de la m6me fa~on. De cette mani~re, les variantes syntaxiques sans signification s~mantique ne sont pas conserv6es jusqu'~ la cha~ne en langage ~vot.Le but de la g~n~ration du fran~ais est dlobtenir une expression adfiquate des structures s6mantiques cod~es dans les cha~nes du langage pivot, qui soit aussi proche que possible du fran §ais (technique) standard. II est ~vident que les raffinements stylistiques ne sont pas encore -et ne seront pas pour longtemps -~ l'ordre du jour.Nous recherchons essentiellement qu'une forme correcte de l~expression, on peut diviser la t~che en trois parties. Le fran §ais comporte des marques dtaccord obligatoires et certaines contraintes d'ordre des ~l~ments. De plus, il faut g~n~rer les formes correctes des lex~mes pourvus de leurs marques grammaticales.Nous avons da pour cela diviser la g~n~ration du fran~ais en quatre phases.La premiere (I) d6tache de la structure s~mantique cod~e dans les cha~nes -pivot les 61~ments abstraits codant les lexDmes, et les remplaee par des ~x~mes fran~ais accompagn~s de leur marqueurs grammaticaux inh~rents (genre du nom, classes s6mantiques et pr~positions r~gies par le verbe, etc.)La seconde (II) effectue une recomposition de la structure s~mantique et une copie des marqueurs introduits en phase I en toutes les positions off ils sont requis par les r~gles d'accord du fran §ais.La troisi~me (III) donne aux 61~ments l'ordre de surface du fran §ais. L'importance de cette phase est r6duite dans une grande mesure par la d~cision de n~gliger pour l'instant toutes sortes de d~tails secondaires. C'est sur elle que nos efforts futurs devront porter si nous voulons am61iorer la fid61it6 "stylistique" de la traduction.Nous pr~voyons la n~cessit~ d'une quatri~me phase s6par~e de la troisi~me. Cette phase IV serait proprement appel~e "morphologie". Elle correspond grossi~rement ~ la partie phonologique d'une graranaire g~n~rative, et n'est pour l'instant repr~sent~e que par quelques r~gles plac6es "en appendice" ~ la phase III. Des travaux effectu~s il y a quelques ann6es par A. Dugas serviront de base ~ un traitement relativement simple de la morphologie.A la lumi~re des experiences pass6es, il est apparu que le module math6matique de traduction que nous utilisions (grammaires-W) pr6sentait certaines lacunes: 1. Difficult~ de fractionner la phase d'analyse ou de ~6n6ration en plusieurs phases.2. Manque de souplesse pour manipuler certaines informations structur6es sous forme d'arbre, en partieulier lors de la phase de g~n6ratien. A. 
Colmerauer a donc commenc~ l'~tude d'un nouveau type de grmmaire plus adapt~ au but que nous nous proposons d'atteindre.Ces grammaires (syst~mes-Q) seront essentiellement constitu6es de r~gles de r~6criture g~n~rales pouvant non seulement s'appliquer ~ des cha$nes mais aussi ~ des arbres. Un programme est en cours d'61aboration, qui permettra 6tant donn~ un texte ou une information structur~e sous forme d'arbre, de lui appliquer un certain nombrede transformations d6crites par une grammaire et d'obtenir un nouveau texte ou une nouvelle information structur~e. En utilisant plusieurs lois ce m6me programme avec des grammaires diff~rentes, on pourra alors enchaSner. plusieurs phases d'analyse de l'anglais et plusieurs phases de g~n~ ration du fran~ais. I1 faut remarquer que, contrairement aux grammai~ res W, ce sera le mSme programme qui sera utilis6 pour l'analyse et la g~n6ration. Ceci donnera plus de possibilit6s aux linguistes ~crivant les grammaires: en effet, nous nous sommes aper~us que durant la phase d'analyse il ~tait parfois n~cessaire d'utiliser certains processus propres ~ la phase de g~n~ration et inversement durant la phase de g6n6ration, de r~analyser certaines parties afin de v6rifier la grammaticalit~ du fran §ais g~n~r6.Nous avons d6j~ commenc6 l'6criture d'une partie de ce nouveau syst~me, qui est op~rationnelle depuis juin 69. Acette date, nous avons donc pu commencer l'~tude de la traduction automatique une plus vaste ~chelle. Les gra~maires-W qui sent d6j~ 6crites seront tr~s facilement r~utilisables, le nouveau formalisme ~tant surtout une extension de ce que nous avons fair jusqu'~ maintenant.Les procedures de traduction d~crites dans les sections pr~c~dentes comportent un certain nombre de limitations. Celles-ci nous ont conduits ~ examiner quelles seraient les extensions n6cessaires de notre syst~me. La figure 5 indique les types de traitement dent nous pr~voyons la n~cessit6, ainsi que l'organisation du traitement. Paute de place, nous ne ferons pas de commentaires sur les parties qui sent d~j~ en cours de d~veloppement et n'ont qu'~ @tre adapt~es au sys-t~me, par exemple 9: Programme d'analyse syntaxique. Nous ne parlerons donc que des sections qui en sent encore au stade th~orique, mais pour lesquelles nous entrevoyons une r6alisatien possible, par exemple 5: Analyse de texte. programme d'analyse de texte: Ce programme comportera des r~gles concernant les relations entre les phrases d'un texte. Celles-ci assureront la coherence du texte dans son ensemble. Parmi les t~ches sp~cifiques que ce programme pourrait assurer, citons: 1. Clarification des r~f~rences pronominales. 2. D~sambiguation des ~l~ments dWune phrase dVapr~s d'autres ~l~ments du texte. 5. Restauration de portions ~lid6es du texte (par exemple restitution de 1'agent effac~ d'un passif). On peut pr~voir deux niveaux de r~gles diff~rents:1. Celles qui agissent sur des traits s~mantiques pour ~liminer des ambigu~t~s ou ~tablir des relations d'inclusion .2. Celles qui traitent des interd~p~endances syntaxiques entre les phrases d'un mSme texte.6., 10. et 11. Options de recyclageOn peut s'attendre que la complexit~ des relations entre choix lexical, structures de phrase et structures de texte requi~re parfois un traitement cyclique. (1968) pour un exemple d'une telle operation]: La deuxi~me pattie est le lexique. 
Celui-ci peut 8tre vu comme une relation binaire dont les premiers arguments sont des paquets de traits s~mantiques et les seconds arguments des matrices phonologiques (-Bans le cas d'un syst~me operant sur des textes ~crits, le lexique sera une relation entre des paquets de traits s~mantiques et des chaSnes de caract~res). La relation "lexique" n'est pas une fonction, en ce qu'un paquet de traits s~mantiques donn~ peut avoir un correspondant phonique different selon des circonstances paralinguistiques.I1 est int~ressant d'introduire 1~ des param~tres d' "attitude" , "style" etc. [voir section V] Mentionnons en passant qu'il est s~duisant (et apr~s tout raisonnable) de faire l'hypoth~se suivante: la forme des divers modules syntaxiques et 1'ensemble de tousles traits s~mantiques semblent 8tre universels. Par contre les valeurs des param~tres de contr81e des modules syntaxiques, ainsi que la relation "lexique" avec tous ses param~tres, semblent ~tre acquis par ~ducation dans une soci~t~ donn~e.Une traduction consistera done en deux lois deux operations. Premi~rement, des formes phoniques Pl -ou ~crites -seront identifi~es et appliqu~es sur des paquets de traits s~mantiques ~ par la relation "lexique 1" tandis que les structures de constituants seront analys~es et appliqu~es sur un graphe -~tiquet~es par les ~ -par 1'operation des modules syntaxiques (les param~tres ayant les valeurs ~). Deuxi~mement les graphes ainsi obt~/lus seront trait~s par les m~mes modules syntaxiques, dont les param~tres auront pris les va-1curs P2 correspondant ~ la langue cible, tandis que les ~tiquet~ tes ~ g~ront appliqu~es sur des chatnes de caract~res ou des matrices phonologiques P2 par la relation "lexique 2".(il n'est pas possible de parler de v~ritable "justification") l'approche que nous avons adopt~e en ce qui concerne le traitement automatique du probl~me de traduction, telle qu'elle est pr~sent~e dans les sections suivantes.Notons un fair important. Les param~tres syntaxiques ou lexicaux acquis par ~ducation dans une certaine soci~t~ sont fortement variables ~ l'int~rieur m~me d'une langue donn~e (variations "dialectales", "stylistiques" etc.) L'attitude et la sensi-bilit~ du traducteur envers ces variations peut diff~rer ~norm~ment. L'id~al serait que la traduction respecte routes les nuances. Bans le cas de la traduction automatique, toutefois, ceci impliquerait une "analyse culturelle" qui reste ~ faire. Voici une d~cision possible: on choisira d'accepter autant de structures possi-bles darts la lamgue source, e'est-~-dire qu'on permettra aux param~tres la plus large variation compatible avec l'intelligibi-lit~ (61argir le domaine de variation d'un param~tre de contr81e diminue naturellement l'information apport~e par l'op~ration correslmndante}. On essaiera tout de mSme de corr~ler autant que possible les valeurs des param~tres -surtout dans le lexiqueavec des niveaux de style, etc. Du cOt~ de la langue cible, on se contentera pour un temps de transmettre 1'information n~cessaire, e'est-~-dire qu'on restreindra fortement la variation des pa-ram~tres autour de la valeur correspondant approximativement au "standard" du langage. (On pourra m6me aller au del~ en restreignant chaque structure ~ une seule expression dans la langue cible.Le langage pivot est un langage formel apte ~ d6finir des relations s~mantiques. 
Les ~l~ments qui le composent sont des mots -appel~s z-mots darts notre syst~me -qui correspondent d'une fa~on univoque ~ des configurations s~mantiques d'un type particulier (restreintes darts notre syst~me aetuel ~ la paire anglais-fran §ais); ces mots ne sont done pas ambigus et n'ont pas de synonymes. L'ordre canonique darts lequel ces z-mots sont disposes indique les relations s~mantiques qu'il entretiennent.Par rapport au module de traduction, une chaSne du langage pivot ne fournit qu'une repr6sentation s~mantique pour chaque chaSne de lex~mes de la langue source et pour une ou plusieurs chaSnes de lex~mes dans la langue cible. Tel qu'utilis~ dans mtre syst~me, le langage pivot fournit des cha~nes qui constituent la sortie de l'analyseur de l'anglais et deviennent en m~me temps l'entr~e pour le g~-n~rateur du fran~ais. Plus explicitement, l'analyseur transforme une suite de mots et de signes de ponetuation de l'anglais en autant de chaSnes canoniques du langage pivot qu'il y a de sens diff6rents at-tribu6s ~ eette suite. Le g~n6rateur va traiter cette chaSne canonique et la transformer ~ son tour en une ou plusieurs suites de lex~mes du fran~ais dont une au moins dolt correspondre ~ la configuration s~mantique exprim~e dans la cha~ne du langage pivot.Le z-mot auquel correspond un ensemble d~fini de lex~mes appartient au lexique. On obtient autant de z-mots pour un lex~me que celui-ci a de sens diff~rents d'ambiguit6; par contre, un seul z-mot recouvre touteune classe de lex~mes synonymes. Ces z-mots ne constituent pas des traductions fran~aises de mots anglais mais plutSt des entit~s abstraites qui recouvrent une ou plusieurs confi, gurations s~mantiques particuli~res qui prendront dans diff~rentes langues naturelles diff~rentes configurations graph~miques ou phon6miques.Les relations qu'entretiennent les ~l~ments du langage pivot sont exprim~es par des structures de d6pendance. Celles-ci ont 6t6 imagin~es tout d'abord par Tesni~re, et l'usage que nous en raisons est inspir6 des ~tudes effectu~es ~ Grenoble; toutefois la forme dans laquelle nous les utilisons est conditionn~e par nos besoins particuliers. Le tableau suivant illustre la structure de d6pendance caract~ristique de notre module. X I X 2 X 3 X n 0.14 L'adaptation des structures de d~pendance se fait naturellement ~ mesure que la grammaire devient plus complexe ou que le formalisme re~oit des modifications. Par exemple, nous voulons introduire sous peu des sp~cificateurs de phrases comme "interrogatif", "imp~ratif", etc. et inclure dans cette classe d'op~rateurs ceux du temps, de la n~gation, du modal et de l'adverbe.La sortie des chaSnes de ce module de d~pendance correspond actuellement ~ une representation lin~aire des ~l~ments. Le gouverneur precede la cha~ne gouvern~e elle-m~me entre parentheses. Dans le cas o~ la fonction s~mantique r~v~l~e dans chacune des cha~nes gouvern~es par un mSme gouverneur n'est pas la m~me, comme cela se produit pour les premier, deuxi~me, troisi~me actants, on ajoute des ~tiquettes pour indiquer le type de d~pendance dans chacun des cas. *L'6tiquette "epi-" s'emploie pour marquer des eha~nes qui modifient la chaYne qui les gouverne. 
For example, relative clauses are preceded by this label because they modify the noun phrase that governs them.

Up to the writing of this article, our lexicographic work has been chiefly concerned with the "conversion" of English words into lexemes of our intermediate (pivot) language. Figure 1 illustrates a dictionary entry for an English verb. Line 2 of the example presents what we call the primary entry: it contains all the information noted by our lexicographer, including a citation. The citation is enclosed in asterisks and treated as a "comment" by the processing programs. Above each symbol of the primary entry appears the lexical variable of which that symbol is a value. In the W-system we use [see Section II], these variables are defined in the "metarules", either by lists of values or by a schema of the form ZWORD => ZX, where X stands for any value.

The dictionary entries proper are written in the general format of the "pseudorules" of the W-grammar and have the same status as the other rules of that type. Consequently they can be submitted directly to the interpretation rules of the grammar, and the format chosen for the primary entries hardly matters, as long as they satisfy the general format of W-grammars: the rules needed to transcribe them into a format compatible with the "syntactic" part of the grammar are easy to write. One can thus put a great deal of descriptive detail into the dictionary entries even if we have no use for it at the moment. Unfortunately, every dictionary-processing rule enlarges the grammar and hence the processing time, because the W-grammar application program tries every rule at every phase of the processing. Clearly we shall soon have to write a separate dictionary-lookup program, even though for the moment we can get by with chaining two W-grammars, the first of which consists only of rules of this level.

Line 3 in the figure is the rule that rewrites the primary entry into the format currently required by our grammar. Line 4 is the result of applying that rule. All values enclosed in square brackets are "persistent", that is, they are preserved in the pivot; the pivot lexemes are thus complex symbols bounded by brackets. Line 5 illustrates a "morphological" rule, which makes the tense of the verb explicit and puts it into the ROOT/TENSE form corresponding to line 4. This rule is likewise one of the "pseudorules" and has no special status. We shall now examine the linguistic import of the various variables.

[...] 18. Reduction of pivot strings: this program would choose among the various possible senses not eliminated by the preceding programs, so as to avoid producing multiple translations.

As regards this research, periodic reports continue to appear, and the latest of them adds considerably to the all-too-brief notes that follow. We shall give neither a history nor a formal description of the W-system; our computer-science colleagues do so in a paper at the ACM Congress, San Francisco, August 1969 [de Chastellier and Colmerauer (1969)].
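Returning for a moment to the dictionary entries of Figure 1, here is a sketch of the two steps described above, the rewriting of an entry into a bracketed pivot lexeme and the ROOT/TENSE morphological rule; the field names and the toy rules are assumptions, not the actual W-grammar rules:

# Rewrite a primary dictionary entry into the bracketed pivot format;
# a "morphological" step splits an inflected verb into ROOT/TENSE.

IRREGULAR_PAST = {"spoke": "speak"}

def morphology(form):
    # Make the tense explicit as ROOT/TENSE (toy rule set).
    if form in IRREGULAR_PAST:
        return IRREGULAR_PAST[form] + "/PAST"
    if form.endswith("ed"):
        return form[:-2] + "/PAST"
    return form + "/PRES"

def pivot_lexeme(entry):
    # Keep only the persistent values, bracketed as a complex symbol;
    # the citation is a comment and is discarded.
    persistent = [entry["zword"], entry["class"]]
    return "[" + " ".join(persistent) + "]"

entry = {"zword": "Z-SPEAK", "class": "VERB", "citation": "*he spoke*"}
print(morphology("spoke"), pivot_lexeme(entry))
# speak/PAST [Z-SPEAK VERB]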
Rather than giving that history here, we want to explain briefly the linguistic use we make of the system's features. The system consists of a parser-interpreter (P.I.) and a synthesizer, both of transformational power. The synthesizer is the inverse of the P.I. up to a few details, and we consider only the P.I. in what follows. Dictionary entries are written in the same format and with the same status as the other rules, and so far there is no separate processing for the dictionary [cf. Section V].

The system is essentially intended to process strings, though it also allows trees to be processed, as we shall see. The input to the P.I. consists of a string and a grammar; the output is one or more "interpreted" strings. The difference between a string and an interpreted string is this: the former is a non-concatenated sequence, which can later be concatenated by applying appropriate rules to it; the latter has undergone concatenation and forms a single complex symbol. The consequence, sometimes awkward, is that the P.I. can process an interpreted string only as a whole. For example, once a rule THE MAN . SPOKE --> THE~MAN~SPOKE has been applied, it becomes impossible to apply subsequent rules to SPOKE unless the rule specifies THE~MAN as well.

Tree structures can be indicated by parenthesized strings. Likewise, node or function labels can be introduced into the strings; there are no subscripts. In the formal description, the final interpreted strings are called "axiomatic strings". (The synthesizer begins its derivation from these "axiomatic strings".) In our translation system, the P.I. input comprises an English string [cf. Section VI]; the "axiomatic strings" are strings of our pivot language [cf. Section IV]; the output of the synthesizer is a translation into a "restricted" French [cf. Section VII]. The P.I. and the synthesizer can be chained for direct passage from English to French.

A W-grammar is made of two parts, i.e. two disjoint sets of rules, described below. The system applies each of the two parts one after the other at each step of the processing. This continual alternation is perhaps the most unusual feature of the W-grammar for a linguist, and beginners generally take a few weeks to get used to it.

[Figure 1: network diagram; legend: "---> : this arc completes the graph but is not used in the PS structure specified by the metarules."]

One part of the grammar consists of "metarules". These are context-free. The system uses them to label certain nodes of the network. Unary rules attach several labels to the same node. We may regard the others as building constituent-structure trees, if we write the grammar with that in mind. A string so processed can thereafter be described by the profile of a section through the network that the metagrammar has attached to it. To do so, one states, from left to right, the names of the nodes the profile passes through (which must therefore have been introduced by the metarules). Thus the string of Fig. 1 can be described by "DETERMINER MAN SPOKE" or "NP SPOKE", or any other correct profile. Finally, an entire "tree" can be denoted simply by the name of its topmost node. On each segment, the system carries out all the analyses the metarules allow.
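The all-analyses strategy on segments amounts to bottom-up chart parsing; here is a minimal CYK-style sketch over a toy binary metagrammar (this illustrates the idea only, not the actual W-system implementation):

# Label every span of the input with every node name the context-free
# (binary) metarules allow: the "network" from which profiles are read.

RULES = {("DETERMINER", "MAN"): "NP", ("NP", "SPOKE"): "S"}
UNARY = {"the": "DETERMINER", "man": "MAN", "spoke": "SPOKE"}

def chart(words):
    n = len(words)
    spans = {(i, i + 1): {UNARY[w]} for i, w in enumerate(words)}
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            cell = spans.setdefault((i, i + width), set())
            for k in range(i + 1, i + width):
                for a in spans.get((i, k), ()):
                    for b in spans.get((k, i + width), ()):
                        if (a, b) in RULES:
                            cell.add(RULES[(a, b)])
    return spans

print(chart(["the", "man", "spoke"])[(0, 3)])   # {'S'}

As the text notes next, every span is labeled whether or not it survives into a complete analysis, which is exactly what loads the memory.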
As in other algorithms (for example Cocke's), this produces analyses of substrings that will prove abortive before the processing of the whole input string is finished. Admittedly this strategy loads the computer's memory, and for the moment our strings are limited to about 30 symbols [but see Section VIII]. Description by profiles has proved very useful in writing the other rules of the grammar, called "pseudorules" (a name that reveals little about the use linguists make of these rules).

In the transformational part one can therefore write "pseudorules" which, together with the metarules, generate a type-0 language. There is no restriction on the number of symbols or the types of operations in these rules. Since a symbol can name a tree (thanks to the metarules), the practical result is considerable power in tree processing. In fact we think the W-system could be used as a tester for transformational grammars, given certain improvements, for example rule ordering.

Fig. 3 gives an example of a W-grammar, linguistically elementary but complete, together with a corresponding output. In the P.I., the rules are to be read "<right-hand member> is rewritten <left-hand member>". For clarity of exposition, we have somewhat changed the format of the actual machine output.

One can ideally picture a human being's "language faculty" as a nondeterministic machine whose function is to set up a correspondence between certain sound strings and certain semantic representations, conceived as a collection of relations chosen (by an operation we may call "abstraction") from among those that would hold between elements of perception. Ideal translation, as everyone knows, consists in "understanding" a text in one language, that is, constructing the corresponding semantic representations, and in "speaking" in another language, that is, carrying out in the other language the operations leading from the semantic representation to the sound string. Ideally, the second phase should be strongly conditioned by the first, and we shall have to examine the points of correspondence. Appendix:
null
null
null
null
{ "paperhash": [ "chastellier|w-grammar", "jones|experiments_in_semantic_classification" ], "title": [ "W-grammar", "Experiments in semantic classification" ], "abstract": [ "A new type of grammars is presented here, called W-grammars. It is shown how they can be used in translation processes. Examples are taken from the fields of algebraic manipulation and computational linguistics.", "It is argued that a thesaurus, or semantic classification, may be required in the resolution of multiple meaning for machine translation and allied purposes. The problem of constructing a thesaurus is then considered; this involves a method for defining the meanings or uses of words, and a procedure for classifying them. It is suggested that word uses may be defined in terms of their \"semantic relations\" with other words, and that the classification may be based on these relations; the paper then shows how the uses of words may be defined by synonyms to give \"rows\" or sets of synonymous word uses, which can then be grouped by their common words, to give thesauric classes. A discussion of the role of synonymy in language is followed by an examination of the way in which multiple meaning may be resolved by the use of a thesaurus of the kind described." ], "authors": [ { "name": [ "Guy de Chastellier", "A. Colmerauer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Karen Spärck Jones" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "208965016", "10687710" ], "intents": [ [], [ "background" ] ], "isInfluential": [ false, false ] }
null
665
0
null
null
null
null
null
null
null
null
bc500000253584c7bc503401da2952d832cb1fc4
6251721
null
Automatic Processing of Foreign Language Documents
Experiments conducted over the last few years with the SMART document retrieval system have shown that fully automatic text processing methods using relatively simple linguistic tools are as effective for purposes of document indexing, classification, search, and retrieval as the more elaborate manual methods normally used in practice. Up to now, all experiments were carried out entirely with English language queries and documents. The present study describes an extension of the SMART procedures to German language materials. A multi-lingual thesaurus is used for the analysis of documents and search requests, and tools are provided which make it possible to process English language documents against German queries, and vice versa. The methods are evaluated, and it is shown that the effectiveness of the mixed language processing is approximately equivalent to that of the standard process operating within a single language only.
{ "name": [ "Salton, G." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 4
1969-09-01
0
88
null
null
null
SMART is a fully-automatic document retrieval system operating on the IBM 7094 and 360 model 65. Unlike other computer-based retrieval systems, SMART is thus designed as an experimental automatic retrieval system of the kind that may become current in operational environments some years hence. The following facilities, incorporated into the SMART system for purposes of document analysis, may be of principal interest:

a) a system for separating English words into stems and affixes (the so-called suffix 's' and stem thesaurus methods) which can be used to construct document identifications consisting of the stems of words contained in the documents;

b) a synonym dictionary, or thesaurus, which can be used to recognize synonyms by replacing each word stem by one or more "concept" numbers; these concept numbers then serve as content identifiers instead of the original word stems;

c) a hierarchical arrangement of the concepts included in the thesaurus which makes it possible, given any concept number, to find its "parents" in the hierarchy, its "sons", its [...]

g) a dictionary updating system, designed to revise the several dictionaries included in the system:
i) word stem dictionary
ii) word suffix dictionary
iii) common word dictionary (for words to be deleted during analysis)
iv) thesaurus (synonym dictionary)
v) concept hierarchy
vi) statistical phrase dictionary
vii) syntactic ("criterion") phrase dictionary.

The operations of the system are built around a supervisory system which decodes the input instructions and arranges the processing sequence in accordance with the instructions received. The SMART system's organization makes it possible to evaluate the effectiveness of the various processing methods by comparing the outputs produced by a variety of different runs. This is achieved by processing the same search requests against the same document collections several times, and making judicious changes in the analysis procedures between runs. In each case, the search effectiveness is evaluated by presenting paired comparisons of the average performance over many search requests for two given search and retrieval methodologies.

A typical thesaurus excerpt is shown in Fig. 3, and Fig. 4 shows query QB 13 in three languages; the queries were translated into German by a native German speaker. The English queries were then processed against both the English and the German collections (runs E-E and E-G), and the same was done for the translated German queries (runs G-E and G-G, respectively). Relevance assessments were made for each English document abstract with respect to each English query by a set of eight American students in library science, and the assessors were not identical to the users who originally submitted the search requests. The outcome of the mixed-language analysis is summarized in Table 3.
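A sketch of the thesaurus-based matching described above, with word stems replaced by concept numbers and concept vectors compared by a cosine measure; the stems and concept numbers are invented, and the cosine is one plausible correlation measure rather than necessarily the exact SMART formula:

import math

# Thesaurus: word stem -> concept number(s); concept vectors are
# compared instead of raw word stems, so synonyms can match.
THESAURUS = {"retriev": [101], "search": [101], "index": [102]}

def concept_vector(stems):
    vec = {}
    for s in stems:
        for c in THESAURUS.get(s, []):
            vec[c] = vec.get(c, 0) + 1
    return vec

def cosine(q, d):
    num = sum(q[c] * d.get(c, 0) for c in q)
    den = math.sqrt(sum(v * v for v in q.values())) * \
          math.sqrt(sum(v * v for v in d.values()))
    return num / den if den else 0.0

q = concept_vector(["search"])
d = concept_vector(["retriev", "index"])
print(round(cosine(q, d), 3))   # 0.707: synonyms match via concept 101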
One of the major objections to the practical utilization of the automatic text processing methods has been the inability automatically to handle foreign language texts of the kind normally stored in documentation and library systems. Recent experiments performed with document abstracts and search requests in French and German appear to indicate that these objections may be groundless. In the present study, the SMART document retrieval system is used to carry out experiments using as input foreign language documents and queries. The foreign language texts are automatically processed using a thesaurus (synonym dictionary) translated directly from a previously available English version. Foreign language query and document texts are looked up in the foreign language thesaurus, and the analyzed forms of the queries and documents are then compared in the standard manner before retrieving the highly matching items. The language analysis methods incorporated into the SMART system are first briefly reviewed. Thereafter, the main procedures used to process the foreign language documents are described, and the retrieval effectiveness of the English text processing methods is compared with that of the foreign language material.
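A sketch of the mixed-language idea: if German and English stems map to the same language-independent concept numbers, a German query can match an English document; the vocabulary below is invented:

# Multilingual thesaurus: stems from either language map to the
# same language-independent concept numbers.
MULTI = {"information": [205], "retrieval": [206],        # English (toy)
         "informations": [205], "wiederauffind": [206]}   # German (toy)

def concepts(words):
    return {c for w in words for c in MULTI.get(w, [])}

german_query = ["informations", "wiederauffind"]
english_doc = ["information", "retrieval"]
print(concepts(german_query) == concepts(english_doc))  # True: a G-E match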
Since the query processing operates equally well in both languages, while the German document collection produces a degraded performance, it becomes worthwhile to examine the principal differences between the two document collections. These are summarized in Table 4. The other thesaurus characteristic, that is, its completeness, appears to present a more serious problem. Table 4 shows that only approximately [...] to produce a document content analysis which is equally effective in English as in German. In particular, differences in morphology (for example, in the suffix cut-off rules) and in language ambiguities do not seem to cause a substantial degradation when moving from one language to another. For these reasons, the automatic retrieval methods used in the SMART system for English appear to be applicable also to foreign language material. Future experiments with foreign language documents should be carried out using a thesaurus that is reasonably complete in all languages, and with identical query and document collections for which the same relevance judgments may then be applicable across all runs.
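A sketch of how the completeness problem can be quantified, as the fraction of a collection's stems covered by the thesaurus; the entries and numbers are invented:

# Fraction of a collection's word stems that the thesaurus covers;
# low coverage on the German side would explain the degraded runs.
def coverage(stems, thesaurus):
    hits = sum(1 for s in stems if s in thesaurus)
    return hits / len(stems) if stems else 0.0

THESAURUS_DE = {"informations", "wiederauffind"}            # toy entries
german_stems = ["informations", "wiederauffind", "dokument", "system"]
print(coverage(german_stems, THESAURUS_DE))                 # 0.5: incomplete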
Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.132331
null
null
null
null
null
null
null
null
11a30e659eccbb1ab41b879cf9eadf81bf3f7355
1645754
null
Automated Processing of Medical {E}nglish
The present interest of the scientific community in automated language processing has been awakened by the enormous capabilities of the high-speed digital computer. It was recognized that the computer, which has the capacity to handle symbols effectively, can also treat words as symbols and language as a string of symbols. Automated language processing, as exemplified by current research, had its origin in machine translation. The first attempt to use the computer for automatic language processing took place in 1954. It is known as the "IBM-Georgetown Experiment" in machine translation from Russian into English (1, 2). The experiment revealed the following facts: a. the digital computer can be used for automated language processing, but b. much deeper knowledge about the structure and semantics of language will be required for the determination and semantic interpretation of sentence structure. The field of automated language processing is quite broad; it includes machine translation, automatic information retrieval (if based on language data), production of computer generated abstracts, indexes and catalogs, development of artificial languages, question answering systems, automatic speech analysis and synthesis, and others.
{ "name": [ "Pratt, Arnold W. and", "Pacak, Milos G." ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 11
1969-09-01
17
21
null
Approaches to automatic information retrieval, quantitative studies of generic relations between languages, and style analysis have been based to a great extent on statistical considerations, such as frequency counts of linguistic units (phonemes, morphemes, words, fixed phrases). In each of these approaches linguistic analysis was considered to be a useful but insufficient method for automated information processing because of the many unresolved problems in language analysis. Implementation of statistical techniques for automated indexing, classification and abstracting has proved useful despite certain limitations caused by our lack of knowledge of language. In recent years mathematically oriented studies of the nature of natural languages have been directed to the development of formal models of grammars, such as context-free, context-sensitive and transformational grammars. Formal characteristics of these models of grammars, their generative power, and their adequacies and inadequacies may be found in the literature (3, 4, 5, 6, 7).

Several noted scientists, such as [...], have expressed a pessimistic view in regard to practical implementation of machine translation. Nevertheless, there is merit in continuing efforts for more fundamental research in the area of formal and applied linguistics and computer applications. Even if we are not able to resolve all the problems in language processing at once, limited goals can be attained and tested for validity by design of a model for language processing within a restricted language domain, such as medicine.

Aware of the many problems associated with automated processing of natural language, we have limited our efforts, for the present, to the language domain used in pathology diagnoses, a subset of Medical English. Medical diagnosis may be described as the process used by the physician to determine the nature of disease, or as the art of distinguishing one disease from another. The name which is assigned to a disease implies the unique configuration of signs and symptoms believed to be characteristic of the condition which has been diagnosed. The diagnosis can be regarded as a summary of the more complete medical document in a conventionalized medical style.

Medical diagnoses are characteristically free of verb phrases. The copulative verb "to be" is frequently implied by the use of a comma. Often, the pseudosentence structures appear to be grammatically illogical. Nevertheless, these structures carry semantic meaning and are generally understood by others in medicine. Modifiers frequently occur in discontinuous sequence with the nouns they modify. Anaphoric expressions are commonplace. The terminology consists of a mixture of Latin, Greek and English derivatives. Not uncommonly, diagnostic statements exhibit features of all three languages. Evidence of heterogeneous linguistic origin is also found in single word forms.
The language is rich in the use of compound word forms which are segmentable into single constituents. The distinctive semantic features of diagnostic statements may be categorized as follows:

• anatomic site affected, or body system involved in the disease process;
• disease condition, including structural changes ranging from gross observations to intracellular ultrastructural changes;
• causative agent of the abnormality;
• disease manifestations, including physiological and chemical changes, observable manifestations, and symptoms reported by the patient;
• therapeutic agents or processes used;
• causal relationships among disease entities;
• method or source of diagnosis.

Two or more of these distinctive semantic features may be combined in a single conceptual unit, e.g., "measles" implies both the specific infectious disease manifested and the etiology, the rubeola virus; while "pneumonia" describes the inflammatory disease process or condition, as well as the anatomic site affected, lung. On the other hand, the precise designation of the location at which a disease entity has manifested itself may require a complex statement for adequate description of the semantics relative to anatomic site affected, e.g., a lesion may be found in the "apicoposterior segment in the upper division of the upper lobe of the lung."

After mentioning some of the peculiarities of Medical English, we will turn to the description of the system for automated processing. In our work we have been using as a lexicon base the Systematized Nomenclature of Pathology (SNOP) 14, the structure of which is described below. SNOP is a special purpose lexicon created by pathologists to assist them in the organization and retrieval of information. The SNOP language consists of a relatively rich word vocabulary and a primitive grammar. A term or conceptual unit is listed in only one of the four semantic categories of the vocabulary and is assigned a unique numerical code within the given information class. The four semantic categories of SNOP are Topography (T), Morphology (M), Etiology (E), and Function (F).

The problem of paraphrasing is closely related with automatic recognition of synonymous or nearly-synonymous expressions which are not found in the lexicon. Even if we would be able to assign an approximative value to so-called "unknown terms" by the implementation of deductive rules, the physician or the expert in the related field will have to make the decision about the synonymity of the term in question. In both cases it is assumed that the semantic content of the message is understood. For example, the statement "Pneumonia, due to staphylococcus" can be formalized as [(R1)(M, E)], where R1 is the causative relational predicate "due to," "pneumonia" is the kernel phrase belonging to the semantic category 'M' (morphology), and "staphylococcus" is a member of the category 'E' (causative agent). The relational expression "due to" can be substituted by other equivalent expressions such as "caused by" or "resulting from," since they designate a similar relationship between the same [kernel terms].

The most amazing aspect of language is the fact that despite its enormous complexity human beings are able to use it with success as a communication tool. If we are ever able to discover and describe the process of human thought, we will be closer to the resolution of many problems associated with the formalization and subsequent automatization of natural language. It is not our intention to tackle all the problems inherent in natural language.
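A sketch of the formalization just described: equivalent relational expressions are normalized to R1 and the kernel terms are looked up in their SNOP-style semantic categories; the code numbers are invented, not actual SNOP codes:

# "Pneumonia, due to staphylococcus" -> (R1, M-term, E-term):
# normalize equivalent relational expressions, then look each
# kernel term up in its SNOP-style semantic category.

CAUSAL = {"due to", "caused by", "resulting from"}     # all map to R1
LEXICON = {"pneumonia": ("M", "M-4000"),               # morphology (toy)
           "staphylococcus": ("E", "E-1300")}          # etiology (toy)

def formalize(statement):
    left, _, right = statement.partition(",")
    rel = right.strip()
    for phrase in CAUSAL:
        if rel.startswith(phrase):
            term2 = rel[len(phrase):].strip()
            return ("R1", LEXICON[left.strip().lower()],
                          LEXICON[term2.lower()])
    raise ValueError("no relational predicate recognized")

print(formalize("Pneumonia, due to staphylococcus"))
# ('R1', ('M', 'M-4000'), ('E', 'E-1300'))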
We believe that we will be able to refine our algorithms and further develop a system which will process medical text by applying the formalized linguistic analytic procedures for the storage of data in such a way that the users' requirements can be met.
null
null
null
null
Appendix:
null
null
null
null
{ "paperhash": [ "garvin|the_georgetown-ibm_experiment_of_1954:_an_evaluation_in_retrospect", "slagle|experiments_with_a_deductive_question-answering_program", "pratt|identification_and_transformation_of_terminal_morphemes_in_medical_englishi)" ], "title": [ "The Georgetown-IBM Experiment of 1954: An Evaluation in Retrospect", "Experiments with a deductive question-answering program", "Identification and Transformation of Terminal Morphemes in Medical Englishi)" ], "abstract": [ "Enough time has elapsed and sufficient other work has been attempted in machine translation since 1954 to allow an appraisal of this much-talked-about demonstration in the light of the experience since gained. Whatever its implications may have been in terms of publicizing and stirring up interest in the problem, from a research standpoint the purpose of the verbal program underlying the Georgetown-IBM experiment of 7 January 1954 was to test the feasibility of machine translation by devising a maximally simple but realistic set of translation rules that were also programmable. The actual execution of the program on the 701 computer turned out to be an interesting exercise in nonmathematical programming, but showed nothing about translation beyond what was already contained in the verbal rules. The verbal program was simple because the translation algorithm consisted of a few severely limited rules, each containing a simple recognition routine with one or two simple commands. It was realistic because the rules dealt with genuine decision problems, based on the identification of the two fundamental types of translation decisions: selection decisions and arrangement decisions. The limitations of the translation algorithm were dual: the search span of the recognition routine was restricted to the immediately adjacent item to the left or right; the command routine was restricted, for selection decisions, to a choice from among two equivalents, for arrangement decisions, to a rearrangement of the translations of two immediately adjacent items. The translation program was applied to one Russian sentence at a time: the lookup would bring the glossary entries corresponding to the items of the sentence into the working storage, where the algorithm would go into effect. The requirements of simplicity and realism were reconciled on the basis of an analy-", "As an investigation in artificial intelligence, computer experiments on deductive question-answering were run with a LISP program called DEDUCOM, an acronym for DEDUctive COMmunicator. When given 68 facts, DEDUCOM answered 10 questions answerable from the facts. A fact tells DEDUCOM either some specific information or a method of answering a general kind of question. Some conclusions drawn in the article are: (1) DEDUCOM can answer a wide variety of questions. (2) A human can increase the deductive power of DEDUCOM by telling it more facts. (3) DEDUCOM can write very simple programs (it is hoped that this ability is the forerunner of an ability to self-program, which is a way to learn). (4) DEDUCOM is very slow in answering questions. (5) DEDUCOM's search procedure at present has two bad defects: some questions answerable from the given facts cannot be answered and some other answerable questions can be answered only if the relevant facts are given in the \"right\" order. 
(6) At present, DEDUCOM's method of making logical deductions in predicate calculus has two bad defects: some facts have to be changed to logically equivalent ones before being given to DEDUCOM, and some redundant facts have to be given to DEDUCOM.", "The system for the identification and subsequent transformation of terminal morphemes in medical English is a part of the information system for processing pathology data which was developed at the National Institutes of Health. The recognition and transformation of terminal morphemes is restricted to classes of adjectivals including the -ING and -ED forms, nominals and homographic adjective/noun forms. The adjective-to-noun and noun-to-noun transforms consist basically of a set of substitutions of adjectival and certain nominal suffixes by a set of suffixes which indicate the corresponding nominal form(s). The adjectival/nominal suffix has a polymorphosyntactic transformational function if it has the property of being transformed into more than one nominalizing suffix (e.g., the adjectival suffix -IC can be substituted by a set of nominalizing suffixes -0, -A, -E, -Y, -IS, -IA, -ICS): the adjectival suffix has a monomorphosyntactic transformational property if there is only one admissible transform (e.g., -CIC-X). The morphological segmentation and the subsequent transformations are based on the following principles: a. The word form is segmented according to the principle of »double consonant cut,« i.e., terminal characters following the last set of double consonants are analyzed and treated as a potential suffix. For practical purposes only such terminal suffixes of a maximum length of four have been analyzed. b. The principle that the largest segment of a word form common to both, adjective and noun or to both noun stems is retained as a word base for transformational operations, and the non-iden, tical segment is considered to be a »suffix.« The backward right-to-left character search is initiated by the identification of the terminal grapheme of the given word form and is extended to certain admissible sequences of immediately preceding graphemes. The nodes which represent fixed sequences of graphemes are labeled according to their recognition and/or transformation properties. The tree nodes are divided into two groups: a. productive or activated b. non-productive or non-activated The productive (activated) nodes are sequences of sets of graphemes which possess certain properties, such as the indication about part-of-speech class membership, the transformation properties, or both. The non-productive (non-activated) nodes have the function of connectors, i.e., they specify the admissible path to the productive nodes. The computer program for the identification and transformation of the terminal morphemes is openended and is already operational. It will be extended to other sub-fields of medicine in the near future." ], "authors": [ { "name": [ "P. Garvin", "William Austin" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Slagle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. W. Pratt", "M. 
Pacak" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "63090255", "12211223", "257258863" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
null
665
0.031579
null
null
null
null
null
null
null
null
3f309c24cc21887aa3ed2ab6b739ff0ff7212ebb
31206583
null
The Lexicon: A System of Matrices of Lexical Units and Their Properties
Uriel Weinreich /1/, in discussing the fact that at one time many American scholars relied on either the discipline of psychology or sociology for the resolution of semantic problems, comments: In Soviet lexicology, it seems, neither the traditionalists, who have been content to work with the categories of classical rhetoric and 19th-century historical semantics, nor the critical lexicologists in search of better conceptual tools, have ever found reason to doubt that linguistics alone is centrally responsible for the investigation of the vocabulary of languages. /2/ This paper deals with a certain conceptual tool, the matrix, which linguists can use for organizing a lexicon to insure that words will be described (coded) with consistency, that is, to insure that questions which have been asked about certain words will be asked for all words in the same class, regardless of the fact that they may be more difficult to answer for some than for others. The paper will also discuss certain new categories, beyond those of classical rhetoric, which have been introduced into lexicology.
{ "name": [ "Josselson, Harry H." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 36
1969-09-01
null
null
null
The research in automatic translation brought about by the introduction of computers into the technology has engendered a change in linguistic thinking, techniques, and output.*

*The research described herein has been supported by the Information Systems Branch of the Office of Naval Research. The present work is an amplification of a paper, "The Lexicon: A Matrix of Lexemes and Their Properties", contributed to the Conference on Mathematical Linguistics at Budapest-Balatonszabadi, September 6-10, 1968.

The essence of this change is that vague generalizations cast into such phrases as 'words which have this general meaning are often encountered in these and similar structures' have been replaced by the precise definition of rules and the enumeration of complete sets of words defined by a given property. Whereas once it was acceptable to say (e.g., about Russian) that 'certain short forms which are modals tend to govern a чтобы clause', now it is required that: (a) the term 'modal' be defined, either by criteria so precise that any modal could be easily identified, or, if that is not possible, by a list containing all of the modals of the language, and (b) the 'certain short forms which are modals' which actually do govern a чтобы clause be likewise identified, either by precise criteria or by a list.

Linguistic research into Russian has led to and will continue to yield many discoveries about the language, and the problem of recording and recalling the content of these discoveries is not trivial. A system is required to organize the information which has been ascertained, so that this information can be conveniently retrieved when it is required; such a system is realized as a lexicon. Fillmore /3/ has defined a lexicon as follows: I conceive of a lexicon as a list of minimally redundant descriptions of the syntactic, semantic, and phonological properties of lexical items, accompanied by a system of redundancy rules, the latter conceivable as a set of instructions on how to interpret the lexical entries.

2. The steps in the construction of a lexicon may be described [...]. Words are gradually dropped from usage, while others are continually being formed and added to the lexical stock. The words to be entered in the lexicon could be obtained from existing sources, i.e., lexicons and technical dictionaries, and be supplemented by neologisms found in written works. The lexicographer must also be alert for new meanings and contexts in which 'old' words may appear. The lexical stock of Russian may be subdivided into word classes, i.e., words having certain properties in common. These properties may be morphological and/or functional.

[The Russian verb следовать, for example,] has the following meanings with the following complements: 1) 'to go after' with за + instr.; 2) 'to ensue' with из + gen.; 3) 'to be guided by something' with the dative without preposition. She recommended that the stems with different meanings be treated as different, and that a model be composed separately for each item. Rakhmankulova /6/ has written 12 models of complements for sentences containing any of ten different German verbs denoting position in space, and she illustrates, in a matrix, which verbs can appear in which models. Furthermore, it can be seen that the patterns with к + dat. can be extended so that that phrase is replaced by в + acc. or even by the adverbial домой 'home'.
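A sketch of such a verbs-by-models matrix, together with the consistency demand discussed below, that every (lexeme, model) intersection be explicitly coded; the transliterated verbs and the coding are invented:

# Lexicon as a matrix: rows = lexemes of a word class, columns =
# complement models. The format forces the same questions on every word.
MODELS = ["za+instr", "iz+gen", "dat", "k+dat"]

MATRIX = {
    "sledovat'": {"za+instr": 1, "iz+gen": 1, "dat": 1, "k+dat": 0},
    "dobyt'":    {"za+instr": 0, "iz+gen": 0, "dat": 0},   # "k+dat" uncoded
}

def unfilled(matrix, models):
    # Flag every (lexeme, model) intersection still to be coded.
    return [(v, m) for v, row in matrix.items()
                   for m in models if m not in row]

print(unfilled(MATRIX, MODELS))   # [("dobyt'", 'k+dat')]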
New information will always be added. One transformationalist technique is to specify a syntactic construction along with a list of (all of the) lexemes which can occur in a certain position of that construction. The set of lexemes which can be tokens in a certain position of a construction is the domain of that construction with respect to the position. The transition from purely syntactic coding (i.e., specifying the complements and their morphological cases if applicable) to semantic coding has been made by Fillmore /13/ with his grammatical cases (e.g., agent, instrument, object).

When using the matrix format, with its demands for consistency, one faces the problem of how to get the information to fill its intersections. Naturally, if the information is already in a dictionary, or if the lexicographer has an example, from some text, of the phenomenon to be coded, there is no problem in filling the intersection. However, if the example is lacking, this is not always sufficient ground for coding the non-existence of the property. Sometimes, despite the absence of an example, the lexicographer feels that the property holds, and he may consult with a native informant, using the caution offered by Zellig Harris /14/: If the linguist has in his corpus ax, bx, but not cx (where a, b, c are elements with general distributional similarity), he may wish to check with the informant as to whether cx occurs at all. The eliciting of forms from an informant has to be planned with care because of suggestibility in certain interpersonal and intercultural relations and because it may not always be possible for the informant to say whether a form which is proposed by the linguist occurs in his language. Rather than constructing a form cx and asking the informant 'Do you say cx?' or the like, the linguist can in most cases ask questions which should lead the informant to use cx if the form occurs in the informant's speech. At its most innocent, eliciting consists of devising situations in which the form in question is likely to occur in the informant's speech.

In Figure 2, the coding form for Russian verb government, separate fields are denoted by double slash marks, with single slash marks used for separation within a given field. The codes are explained once again with the verb добыть 'to obtain', which appears on the first coding line. [Figure 2: coding form for Russian verb government; the form itself is not recoverable from the source.] As to the entry heads, they can be stems, canonical forms, or all of the forms that exist in the language. Note that a canonical form could be a particular form of a paradigm, such as the masculine singular form of an adjective or the infinitive form of a verb, or it could be a certain verb from which other verbs are derived by certain rules. Binnick /19/ has illustrated the latter by suggesting that be could be an entry head having, as part of its contents, [...] inside a word. The entry heads of the lexicon were designed to correspond to the segments, and therefore are words or sequences of words (idioms).
The entry heads could be canonical forms or stems, but this would require automatic procedures for transforming any inflected form into its canonical form, and for finding the stem of any form in text. Space can be saved in a full-form lexicon by entering only once, perhaps under the canonical form, the information which all members of a paradigm share, and cross-referencing this information under the related entry heads. In the Wayne State University machine translation research, sets of complementation patterns are stored in an auxiliary dictionary, and any set can be referenced by any verbal form.

The sequence of entry heads in the lexicon is alphabetical, since the shape of the text word to be looked up is its only identification. Naturally, if the set of Russian words could be put into a one-to-one correspondence with some subset of the positive integers by a function whose value on any word in its domain could be determined only by information deducible from the graphemic structure of that word, then the entry heads of the lexicon would not have to be in alphabetical order; in this case, the lookup would be simpler and faster, since the entries could be randomly accessed.

The number of columns in the matrix of any word class should be without limit so that new information can be entered. Similarly, the number of rows should be without limit to allow additions as the lexical stock of the language grows.
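A sketch of the two lookup regimes mentioned: binary search over alphabetical entry heads versus a word-to-integer function (here an ordinary hash table) allowing random access; the entry heads are invented transliterations:

import bisect

HEADS = sorted(["dom", "kniga", "sledovat'"])   # alphabetical entry heads

def lookup_alpha(word):
    # Binary search over the alphabetical lexicon.
    i = bisect.bisect_left(HEADS, word)
    return i < len(HEADS) and HEADS[i] == word

# A word -> integer function (here realized as a hash table) lets the
# entries be randomly accessed instead of searched in order.
INDEX = {w: n for n, w in enumerate(HEADS)}

print(lookup_alpha("kniga"), INDEX["kniga"])    # True 1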
null
null
null
Lexical information is the consummation and thereby also
Main paper: conclusion: Lexical information is the consummation and thereby also introduction: The research in automatic translation brought about by the introduction of computers into the technology has ~The research described herein has b&en supported by the Information Systems Branch of the Office of Naval Research. The present work is an amplification of a paper~ "The Lexicon: A Matri~ of Le~emes and Their Properties"~ contributed to the Conference on Mathematical Linguistics at Budapest-Balatonszabadi, September 6-i0~ 1968. engendered a change in linguistic thinking, techniques, and " output.The essence of this change is that vague generalizations cast into such phrases as 'words which have this general meaning are often encountered in these and similar structures' have been replaced by the precise definition of rules and the enumeration of complete sets of words defined by a given property. Whereas once it was acceptable to say (e.g., about Russian) that 'certain short forms which are modals tend to govern a UTO6~ clause', now it is required that: (a) the term 'modal' be defined, either bY criteria so precise that any modal could be easily identified, or if that is not possible, by a list containing all of the modals of the language, and (b) the 'certain short forms which are modals' which actually do govern a UTOOM clause be likewise identified, either by precise criteria, or by a list.Linguistic research into Russian has led to and will continue to yield many discoveries about the language, and the problem of recording and recalling the content of these discoveries is not trivial.A system is required to organize the information which has been ascertained, so that this information can be conveniently retrieved when it is required; such a system is realized as a lexicon.Fillmore /3/ has defined a lexicon as follows:I conceive of a lexicon as a list of minimally redundant descriptions of the syntactic, semantic, and phonological properties of lexical items, accompanied by a system ot redundancy rules, the latter conceivable as a set Of instructions on how to interpret the lexical entries.2.The steps in the construction of a lexicon may be de- words are gradually dropped from usage, while others are continually being formed and added to the lexical stock. The words to be entered in the lexicon could be obtained from existing sources,i.e., lexicons and technical dictionaries, and be supplemented by neologisms found in written works.The lexicographer must also be alert for new meanings and contexts in which 'old' words may appear.The lexical stock of Russian may be subdivided into word classes, i.e., words having certain properties in common.These properties may be morphological and/or functional. In has the following meanings with the following complements: i) 'to go after' with 3a + instr.2) 'to ensue' with H3 + gen.3) 'to be guided by something' with dative without prep.She recommended that the stems with different meanings be treated as different, and that a model be composed separately for each item.Rakhmankulova /6/ has written 12 models of complements for sentences containing any of ten different German verbs denoting position in space, and she illustrates, in a matrix, which verbs can appear in which models. Furthermore, i% can be seen that the patterns with K+da% can be extended so that that phrase is replaced by B+aCC or even by an adverbial ~OMOR -'home'. 
New information will always be added. One transformationalist technique is to specify a syntactic construction along with a list of (all of the) lexemes which can occur in a certain position of that construction. The set of lexemes which can be tokens in a certain position of a construction is the domain of that construction with respect to the position. The transition from purely syntactic coding (i.e., specifying the complements and their morphological cases if applicable) to semantic coding has been made by Fillmore /13/ with his grammatical cases (e.g., agent, instrument, object). When using the matrix format, with its demands for consistency, one faces the problem of how to get the information to fill its intersections. Naturally, if the information is already in a dictionary, or if the lexicographer has an example, from some text, of the phenomenon to be coded, there is no problem in filling the intersection. However, if the example is lacking, this is not always sufficient ground for coding the non-existence of the property. Sometimes, despite the absence of an example, the lexicographer feels that the property holds, and he may consult with a native informant, using the caution offered by Zellig Harris /14/: If the linguist has in his corpus ax, bx, but not cx (where a, b, c are elements with general distributional similarity), he may wish to check with the informant as to whether cx occurs at all. The eliciting of forms from an informant has to be planned with care because of suggestibility in certain interpersonal and intercultural relations and because it may not always be possible for the informant to say whether a form which is proposed by the linguist occurs in his language. Rather than constructing a form cx and asking the informant 'Do you say cx?' or the like, the linguist can in most cases ask questions which should lead the informant to use cx if the form occurs in the informant's speech. At its most innocent, eliciting consists of devising situations in which the form in question is likely to occur in the informant's speech. In Figure 2, the coding form for Russian verb government, separate fields are denoted by double slash marks, with single slash marks used for separation within a given field. The codes are explained once again with the verb добыть 'to obtain', which appears on the first coding line. [Figure 2, the coding form for Russian verb government, is not reproduced here.] As to the entry heads, they can be stems, canonical forms, or all of the forms that exist in the language. Note that a canonical form could be a particular form of a paradigm, such as the masculine singular form of an adjective or the infinitive form of a verb, or it could be a certain verb from which other verbs are derived by certain rules. Binnick /19/ has illustrated the latter by suggesting that be could be an entry head having, as part of its contents, ... inside a word. The entry heads of the lexicon were designed to correspond to the segments, and therefore are words or sequences of words (idioms).
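Since the coding form separates fields with double slash marks and items within a field with single slashes, parsing such a line is a two-level split. A minimal sketch follows; the sample line and its field contents are invented, since Figure 2 is not reproduced.

```python
# Parse a coding line in which "//" separates fields and "/" separates
# items within a field, per the convention described above.
def parse_coding_line(line):
    """Return a list of fields, each a list of item strings."""
    return [[item.strip() for item in field.split("/")]
            for field in line.split("//")]

# Invented sample line for illustration only.
sample = "dobyt' // acc / gen // perfective"
print(parse_coding_line(sample))
# [["dobyt'"], ['acc', 'gen'], ['perfective']]
```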
The entry heads could be canonical forms or stems, but this would require automatic procedures for transforming any inflected form into its canonical form, and for finding the stem of any form in text. Space can be saved in a full-form lexicon by entering only once, perhaps under the canonical form, the information which all members of a paradigm share, and cross-referencing this information under the related entry heads. In the Wayne State University machine translation research, sets of complementation patterns are stored in an auxiliary dictionary, and any set can be referenced by any verbal form. The sequence of entry heads in the lexicon is alphabetical, since the shape of the text word to be looked up is its only identification. Naturally, if the set of Russian words could be put into a one-to-one correspondence with some subset of the positive integers by a function whose value on any word in its domain could be determined only by information deducible from the graphemic structure of that word, then the entry heads of the lexicon would not have to be in alphabetical order; in this case, the lookup would be simpler and faster, since the entries could be randomly accessed. The number of columns in the matrix of any word class should be without limit so that new information can be entered. Similarly, the number of rows should be without limit to allow additions as the lexical stock of the language grows. Appendix:
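The word-to-integer function contemplated above is, in later terminology, a hash function computed from the graphemic shape of the word; random access then replaces the alphabetical search. Below is a minimal sketch of the contrast, with a toy hash of our own devising; a real scheme would need a function that is collision-free over the actual Russian entry-head list.

```python
# Alphabetical lookup vs. random access via a word-to-integer function
# computed only from the letters of the word, as the text requires.
import bisect

ENTRY_HEADS = sorted(["dom", "kniga", "slovo"])   # alphabetical lexicon

def alphabetic_lookup(word):
    i = bisect.bisect_left(ENTRY_HEADS, word)     # O(log n) comparisons
    return i < len(ENTRY_HEADS) and ENTRY_HEADS[i] == word

TABLE_SIZE = 101                                  # toy table
table = [None] * TABLE_SIZE

def graphemic_index(word):
    # Toy stand-in: any function of the word's graphemes would do,
    # provided it is collision-free over the real entry-head list.
    return sum(ord(c) * 31 ** k for k, c in enumerate(word)) % TABLE_SIZE

for w in ENTRY_HEADS:
    table[graphemic_index(w)] = w

def random_access_lookup(word):
    return table[graphemic_index(word)] == word   # one probe, no search

print(alphabetic_lookup("slovo"), random_access_lookup("slovo"))
# True True
```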
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
null
null
null
null
null
null
null
null
null
7b24f68d5399872d9529fd60b614ae1f2d02cf1d
14918516
null
SOME REMARKS ON {J}. {L}. {M}EY{'}s PAPER (Preprint No. 20)
Mey's criticism of the functional approach to generative description concerns (1) the formal properties of the system proposed by Sgall et al. (its weak generative power, recursivity), and (2) some informal questions connected with the mentioned approach. (1) From the formal point of view, Mey's paper contains many quite unclear points and errors, which make his claims unfounded. Some of those may be due to mere impreciseness and carelessness in formulations (cf. for instance p. 7, where he speaks about "a language that is not CF, or may be not even regular", which is as if one said "This mineral is not found in Europe, not even in the whole of Switzerland"), but others have a more
{ "name": [ "Sgall, P. and", "Hajicova, E." ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969
1969-09-01
6
0
null
null
null
null
null
of the system proposed by Sgall et al. (its weak generative power, recursivity), and (2) some informal questions connected with the mentioned approach. (1) From the formal point of view, Mey's paper contains many quite unclear points and errors, which make his claims unfounded. Some of those may be due to mere impreciseness and carelessness in formulations (cf. for instance p. 7, where he speaks about "a language that is not CF, or may be not even regular", which is as if one said "This mineral is not found in Europe, not even in the whole of Switzerland"), but others have a more consequential bearing on his further argumentation. He confuses (p. 3) translation by means of a pushdown store transducer in Evey's sense (henceforth pdt) with the question of CF-preservation in the sense of ...; without giving any proofs he simply assumes that one of these results is contradicted by the others. Thus we can state that Mey has not shown that a system of the discussed type generates a language that is not context-free, to say nothing of his clearly exaggerated claim (p. 7) of having 'shown' that the language generated by such a system "simply never" is
Main paper: : of the system proposed by Sgall et al. (its weak generative power, recursivity), and (2) some informal questions connected with the mentioned approach. (1) From the formal point of view, Mey's paper contains many quite unclear points and errors, which make his claims unfounded. Some of those may be due to mere impreciseness and carelessness in formulations (cf. for instance p. 7, where he speaks about "a language that is not CF, or may be not even regular", which is as if one said "This mineral is not found in Europe, not even in the whole of Switzerland"), but others have a more consequential bearing on his further argumentation. He confuses (p. 3) translation by means of a pushdown store transducer in Evey's sense (henceforth pdt) with the question of CF-preservation in the sense of ...; without giving any proofs he simply assumes that one of these results is contradicted by the others. Thus we can state that Mey has not shown that a system of the discussed type generates a language that is not context-free, to say nothing of his clearly exaggerated claim (p. 7) of having 'shown' that the language generated by such a system "simply never" is Appendix:
null
null
null
null
{ "paperhash": [ "mey|on_the_preservation_of_context-free_languages_in_a_level-based_system" ], "title": [ "On the Preservation of Context-Free Languages in a Level-Based System" ], "abstract": [ "In this paper, a recently proposed level-oriented model for machine analysis and synthesis of natural languages is investigated. Claims concerning the preservation of context-free (CF) languages in such a system are examined and shown to be unjustified. Furthermore, it is shown that even a revised version of the model (incorporating some recent discoveries) will not be CF-preserving. Finally, some theoretical implications of these findings are explored: in particular, claims of greater naturalness and the question of recursivity." ], "authors": [ { "name": [ "J. Mey" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "2106027" ], "intents": [ [] ], "isInfluential": [ false ] }
null
665
0
null
null
null
null
null
null
null
null
b13669371b48f7e5ca84d5f67476f57789cbf4c5
6314592
null
Nexus a Linguistic Technique for Precoordination
A method for automatically precoordinating index terms was devised to form combinations of terms which are stored as subject headings. A computer program accepts lists of auto-indexed terms and, by applying linguistic and sequence rules, combines appropriate terms, thereby effecting improved searchability of an information storage and retrieval system. A serious failing exists in many indexing systems in that index terms authorized for use are too general for use by technically knowledgeable searchers. A search conducted using these terms frequently produces too many documents not specifically related to the users' requirements. An indexing method using the language in which the document was written corrects this failing, but eliminates the generality of the previous approach. A compromise between indexing generality and specificity is offered by NEXUS precoordination, which combines specific terms into subject headings, eliminating improper coordination of terms when matching search requirements with document term sets. NEXUS examines the suffix morpheme of each input term and determines whether or not the term should be a member of an index term combination or precoordination. If insufficient evidence is present to make such a determination, a sequence rule goes into effect which combines terms based on their syntax. Storage by individual terms is effected in conjunction with NEXUS so that nothing is missed because of rule exceptions. Comparison tests have been run using the full NEXUS program, a partial application of the program using sequence rules (SEQS), and human analysis of the same data. Although falling short of human analysis in some respects (except for consistency), the NEXUS approach is more effective than SEQS in producing effective combinations. Although some suggestions are made for applying this technique, along with a possible output format for a bibliographic application, the chief value of this effort has been to further study those aspects of language that are amenable to computerized analysis for the purpose of improving input and output functions in information retrieval.
{ "name": [ "Benson, R. A." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 3
1969-09-01
7
2
null
null
null
null
determination, a sequence rule goes into effect which combines terms based on their syntax. A variety of corpora was used to test and develop the NEXUS precoordinator. Data bases consisting of legal information, computer program descriptions, and NASA linear tape system documentation were used. More variety was present in the NASA documents, which made the results of the application of NEXUS to this collection more significant than the others. Also, a fuller battery of rules had been developed by this time, increasing the power of the program. NEXUS is a research project which is concerned with input processing of natural language for information retrieval. The computer program used to do this task consists of linguistic rules that operate on the suffix portions of printed words, and on the order of these words as they appear in a sentence. It must be stressed that NEXUS operates on general rules. There are occurrences in language that are not coverable by this method. Storage by individual terms is effected in conjunction with NEXUS so that nothing is missed because of rule exceptions. Comparison tests have been run using the full NEXUS program, a partial application of the program using sequence rules (SEQS), and human analysis of the same data. Although falling short of human analysis in some respects (except for consistency), the NEXUS approach is more effective than SEQS in producing effective combinations. Of all the various operations of an information retrieval system, the input function is the most important. The decision of what to store to best represent the contents of a document involves predicting, to a degree, how this representation will be looked for by a user. If a user is not conversant with a subject, he must be led into it by familiar, more general routes. If a user is conversant with a subject, and is perhaps a contributor to its literature himself, he will be after specific details which he will request, preferably in the language of his discipline. This dichotomy of users probably exists, to some extent, in any information retrieval situation. It is the intent of such research as NEXUS to help alleviate this paradox by permitting access to information by both general and specific indexing accomplished by machine. The indexing process is discussed in this paper starting at the point where it first becomes necessary. The qualifications for an expert indexer are then enumerated, and the activity of the indexer is examined. Generalized and specific indexing are compared and, finally, a suggestion is made for converting the results of specific indexing into generalized subject headings, which is the purpose of the NEXUS programs. Operational tests have been conducted during the stages of developing this approach, and a variety of data was used to allow testing across different types of information. Comparison tests were made using the full set of NEXUS rules vs. only the sequence rule, SEQS. The intent was to find out how much more effective the program works using suffixal morphemes to combine terms than merely connecting words that follow one another in sequence. The NEXUS-generated subject headings can be used in bibliographic printouts to aid in locating desired information. Combinations of terms prepared in this way avoid the occurrence of incorrect coordinations of terms, which sometimes happens when individual terms are coordinated by the user. An individual is faced with the prospect of maintaining a growing collection of documentation.
The documents in this collection contain information that will answer frequently asked questions. When the collection consists of a few documents, this individual can read them all and be prepared to answer these questions. But, as the number of documents increases, he will be forced to find some method of recording clues to the information found in each document. These clues will have to be stored separately from the documents, on a list or perhaps on file cards, so that the maintainer of the documents can scan them easily. When he is asked a question, instead of trying to remember which document or documents have the answer, he goes to his list of clues, and then selects the documents from the collection. The number assigned to each group of clues is the same as the number on the document. Let us assume that most or even all of the questions asked of this individual are predictable. He is then in the fortunate position of being able to look for specific answers to specific questions as he records the clues from each incoming document. He can then arrange the list of clues in whatever order is most convenient for him. He can arrange the clues by frequency of questions asked, he can classify the clues by hierarchical relationship, by chronology, or by any other convenient method that might best or most quickly answer these stock questions. In some very fortunate cases, a collection of documentation consists of documents that have been specifically designed to answer questions. Each document is constructed with a consistent number of information or data blocks, and the contents of these blocks vary to a predictable degree. The recording of information clues (we may as well now refer to this function as indexing) then becomes a simple task. Collections of technical papers, the most common type of information collections, do not lend themselves to similar handling. One can predict, only to a very small degree, what questions will be asked of such a collection. Therefore, the indexer must select clues from each document based on his speculation of what questions will be asked in the future. It would seem that we are now getting a vague picture of what an indexer looks like. He is able to pick up any highly technical paper, most of which are at the forefront of their disciplines (otherwise why should they be published?), to understand the content of this document so expertly that he can predict the questions that will be asked and then answered by this document, and then to record the clues to its contents in such a manner that they will lead a searcher directly to this segment of recorded knowledge at some unknown future date. This astute person must certainly possess knowledge equivalent to advanced degree level in numerous scientific disciplines, he must have a working knowledge of many of the world's languages, surely he must possess an advanced degree in Library Science (more popularly, Information Science), and a knowledge of practical economics to such an extent that he can subsist comfortably on six to seven thousand a year (the going rate for indexers).
Armed with such a formidable background, this individual would render better service, at least to himself, by doing the research and writing the paper himself. Obviously, the indexing function must be performed by someone less qualified than the individual described above. In a normal library atmosphere, the area usually given responsibility for the important endeavor of maintaining documentation collections, there is a traditional way to process such material. Indexing is performed using such aids as subject-heading lists or thesauri. The documentalist/librarian use of the term thesaurus refers to a dictionary-order list of approved indexing terms, similar to a subject-heading list. The indexer, in the above-mentioned environment, scans a document, tries to figure it out the best he can, and then selects terms from these approved lists that he thinks best describe the document. Sometimes this works, sometimes not. After all, the indexer cannot be expected to be expert in all technical fields. Anyway, the resulting terms that are the clues to the document's content are generalizations of this content. It goes without saying, if a researcher is writing about a new usage of holography in pathological x-ray applications, this document surely has something to do with photographic techniques in medicine. If holography is not an approved term, it will eventually be added to the list when approved. In the meantime, it cannot be used, of course. But the term x-ray has been around long enough to be acceptable, and the searcher can hunt around at a higher (more general) level until he locates the document. The point is, such approved term lists are designed to aid the partially knowledgeable library user (or library worker) who does not know the technical vocabularies of special disciplines well enough to use them intelligently. The use of generalized terms stems also from the attempt, on the part of librarians, to store their reading materials in related clumps within a library. This is understandable in a public library or even in a book collection of a technical library. A user wants a book on computer programming, so he goes to the section of books that contains programming books. However, if he wants to know the latest published research on a particular programming technique, he will find it in document or journal article form. He will know, in his own terminology, what he wants at a considerably more specific level than "computer programming," or, say, than the approved ... To generalize these terms one would have to know that holography is related to photography, pathology is related to medicine, thorax is related to anatomy, and so on. We don't expect that much sophistication from our clerical workers. We really can't afford to pay for that much knowledge. Actually, we don't want them to know that much. It could bias their indexing. This is exactly the way the KARDIAK [1] automated bibliography on artificial heart research was produced. Now that it has been released (almost three years ago) and has received some acclaim throughout the world of medical research (e.g., Harvard Medical School, National Library of Medicine, ...) ... We should add, however, for justice's sake, that if the KARDIAK were on "Information Science" instead of "Cardiac Medicine", the situation would surely be reversed. The thesis, so far, has hopefully convinced the reader that it is possible to index highly technical collections cheaply and accurately without superintelligent, universal men wielding the indexer's pencil.
But we are still faced with the problem of some cross-discipline communication. We cannot query a collection on "Cardiac Medicine", and they cannot query a collection on "Information Science." Now then, how do we go about communicating to one another through the medium of a general-information collection? That is, how do we do this without getting too general and paying the price for this generality? KARDIAK, once again, has given us a clue to how this may be done. As we were feeding KARDIAK the terms selected by our clerk/indexer, some of these terms kept recurring; recurring with such frequency that our computer program could not hold them all in storage. That is, there was not enough room set aside to hold all the document numbers with which these terms were associated. The number of these terms was small, only seven in all, but the number of documents that used these seven terms was extensive. Because of the physical impossibility of storing all these document numbers, these terms were rejected for storage. Oddly enough, perhaps serendipitously enough, if you will, these were the terms that generally described the collection. ... We don't need an approved list of terms. We couldn't have found one, nor known how to use one, if we had had one. It has been said, "Let the documents themselves generate their own terms." [2] One step further: let the terms rejected because of over-frequency be combined as subject headings. These combinations can then be used as general descriptors for the particular collection. The KARDIAK is a closed collection. That is, it was produced for a specific purpose, it served its purpose, and it is now a static piece of documentation history. Of course, it can always be picked up at a later date and be added to; but we don't foresee this happening at the present time. This is all leading up to the fact that there is any amount of manipulation one can perform on a static collection that cannot be done on a growing one. When a collection is constantly being added to, one must figure out a way to maintain control of it as it develops. If the collection is specialized enough, the term rejection factor, mentioned above, will still appear. But, as the collection grows, we certainly must increase our storage capacity for the ratio of document numbers to terms. This ratio probably remains the same, but we can't say so for sure unless we do some research on it. This is an area for further work with which we are not principally concerned in this report. What we would now like to suggest is an interim feature: an aid to indexing and searching that is in between a free, specific, individual key word system and a generalized, controlled subject-heading system. We have already shown an almost algorithmic way of doing indexing. One element is missing, however, and that is syntax. The searcher must presume that the hits he comes up with are of terms arranged in the same syntactical order as his search query. In other words, he is attempting to regenerate sentence order. This is successful much of the time, but then again there are times that it doesn't work. If we had our clerical worker again, we could show her some lines of text and ask her to combine words that bear relationship to one another. If she did a good job of making combinations, some of this missing syntax would be recovered. Let's take a title, for example: "Applications of Linguistic Experiments to the Industrial Community."
" Our clerk would probably make the following combinations:"Applications" (not combined)"Linguistic Experiments" (combined)"Industrial Community" (combined).These term combinations aid in restoring syntax, to some degree, where the free terms might be recalled out of order; for example, something like "Linguistic Community" or "hflustrial Experiments" or "Community Experiments, " all of which are entirely misleading in regard to the actual meaning of the title. Now, for the clerk to do term combining correctly, she uses some simple rules. The most obvious rule is that of sequence. There are other rules used that are not so obvious, even to her, because she may not know she is using them. These rules have to do with linguistics, specifically suffixal morph(~logy. This is to say that the suffixal morphemes of the words in this title are giving her clues about the relationship of one word to another. In other words, the presence of one of a group of particles at the end of a content word in a line of text will give a clue to its relationship to the next content word.Of course, the next word in sequence must be examined for the presence of a final particle, as well. Let's take "linguistic experiments" as an example.The two words are in sequence in the text line, even though this is not an absolute indication that they should be combined. The suffixal morpheme of "linguistic" is "-ic, " an adjectival ending. And since there is no punctuation following "-ic," this indicates the proximity of some next entity to be modified, some noun form coming up. In our example it is "experiments." But, if the suffixal morpheme of "experiments" were "-al" instead of "-s, " and there is still no following punctuation, we would have a clue that we don't yet have a noun form to be modified. We have two adjectives stacking up, and the next following word may be the noun form we have been waiting for. However, the "-s" morpheme is most likely acceptable enough as a noun plural ending, and the combination "linguistic experiments" is a valid one.The application of such rules by our clerical worker is automatic because she does all these operations following the rules that are built into her knowledge of the language. She might possibly be able to explain the process but it is so ob~ous and natural to her that she might not be able to.To do this function by machine ia another matter. We must not only ex- plainThe inspiration for NEXUS came from a particular collection compiled by IS&R on legal literature.The indexing was done by an individual highly trained in law but who had never done any previous indexing. His indexing consistency, to begin with, was slightly erratic in that he occasionally repeated terms in bound form that he had already noted down in free form. However, as he progressed through the collection of 1742 documents his indexing became more stabilized.Each document was given an accession number. The index terms, usually six or seven of them, were listed under the number. The indexer wanted retrieval by date at some future time, so he used the year the document was published as an index term in every case.The output of this project was a KARDIAK-type (or "busted. book", as it is known in IS&R) manual index, which was produced by computer. 
The terms were sorted alphabetically, and the document numbers of the documents indexed by the term were listed beneath each term in ascending order. Precoordination of these terms would have aided the searcher, in the way previously indicated, as a time-saver and a syntax safeguard. This would have prevented the searcher from erroneously hooking together terms that actually were not related. To begin with, the unsorted sets of index terms were used as input to NEXUS. NEXUS was first put together in a very rudimentary form. The dates were isolated, and the criteria for precoordination were based on (1) sequence, (2) the "-ed" suffixal morpheme in the first position, and (3) the "-s" suffixal morpheme in the second position. ... If a held word is the last in the set, it is also (7) printed as a single term. But, if there is a next word (16), the next word is examined and (11) tested for being a date. If it is a date, it is printed (12) as a single term. If not, it receives a test for "-ed" (13) as the final morpheme. This morpheme can only be allowed with the first word of a pair (unless, of course, it is the last term in the set, in which case it is printed alone). If "-ed" is present, the held first word (14) is printed by itself, and the "-ed" word is held for first-position pairing. If "-ed" is not present, the held word is printed with this word (15) as a coupled pair. Let's go back to (5), where a word is tested for the presence of an "-s" final morpheme. The word does end with "-s", so we check for a preceding word (6). In this case we will get "yes" for an answer, and the next test is applied. The example set traced in what follows is "Jurimetrics, Committee, Scientific, Investigation, Legal, Problems." Because of our "-s" rule in second position only, the program isolated "Jurimetrics" instead of making the obvious (to a human) coordination, "Jurimetrics Committee." The rule must be valid for only one position, and the second position is the most common one. Continuing the sequence, "Committee" was precoordinated with "Scientific" because of the sequence rule. This is also an obvious error to a human, because of the suffixal morpheme "-ic", which is part of "Scientific." In analyzing the production so far, "-ic" seems like a good candidate for a first-position suffixal morpheme; so it became one in the next version of the program. The next combination, "Scientific Investigation," turned out successfully because of sequence, but "Investigation Legal" went bad; once again because of a suffixal morpheme cue that wasn't included in the program. This morpheme was the "-al" on the term "Legal", which was later included as a first-position rule. Finally, "Legal Problems" was produced, meeting the requirements of both the sequence and "-s" rules. The new first-position rules included the suffixal morphemes "-al", "-ern", "-ese", "-ic", "-ive", "-ly" and "-ous". The remaining rule was one that prevented two words with "-ing" endings from being paired together. As you may have noticed, the first-position rule "-ous" conflicts with the second-position rule "-s". The latter rule looks for a final "-s" only and, when it finds one, qualifies the term for second position. Because of this, the "-s" test must also include a test for a preceding "o" or "u". When these are present, we have a first-position rule in effect; when absent, a second-position rule. One of the NEXUS I rules was eliminated: the rule for stacking "-s" words and attaching the first non-"-s" word as a first-position word. This rule did not produce anything of value, and could possibly have contributed to ambiguity. However, a turnabout version of this rule was adopted.
This rule, if it locates a sequence of first-position suffixal morphemes, will stack them up until it finds a second-position word. It then prints them all in combination. In this way, we have a method for creating strings of terms in precoordination consisting of more than two words. "Three-dimensional Holographic Techniques" is an example of a production of this kind. NEXUS I contained an overlapping feature which we haven't mentioned, but which may have been obvious when we went through the "Jurimetrics, Committee, Scientific, and so on" example. The purpose of overlapping was to left-justify each term, whether combined or left alone, so that it could be stored alphabetically in an IS&R system. In this way, no term is hidden from the search by reason of being forever concealed in second position in storage. We did install a jump switch in NEXUS II, so that we can eliminate overlapping. The abstract portion of each description was used to supply NEXUS II with material to work with. The abstracts were first processed through an auto-indexer to produce lists of terms. These lists were next presented to NEXUS II and then printed out for analysis after the term-binding operations were performed. NEXUS II was run two ways: with and without the overlapping feature. The program worked well with this material, with one exception. The suffixal morpheme carried by the third person singular, present tense verb, "-s", has the same physical appearance as the plural morpheme, "-s". Since the computer can't tell the difference, there occurred some bound terms that were somewhat less than rife with meaning; for example, "Program Calculates", "Computes", "Program Generates", "Program Uses". Although these odd combinations could be avoided by employing a different writing style when producing the abstracts, we are not concerned with preconditioning a corpus, rather with handling it in whatever form we happen to find it. The above combinations can certainly be tolerated, however, since they have no effect on the other precoordinations. As an exercise in demonstrating the difficulties encountered in handling natural language for computerized information retrieval, the NEXUS experiments have been very successful. The intent has been to expand upon more or less standard automatic indexing techniques by reestablishing a connection between terms that, when combined, aid the searcher in retrieving a document reference from storage. We have named this process precoordination because of its relationship to coordinate index systems. In a coordinate index the searcher combines terms, looking for a common accession number, thereby indicating their occurrence together in a document description. NEXUS has an application in precoordinating these terms, when applicable, to save time for the searcher, to ensure a correct coordination, and to prevent coordinating terms that give a misleading implication. Precoordinated terms are then, in effect, equivalent to subject headings insofar as they partially express a concept in one or more words in a syntactic construction. The comparison of NEXUS, and its several linguistically-based rules, with SEQS, and its single rule for sequential linking, has shown that NEXUS is the more efficient of the two approaches. Neither, of course, can compare with human decision power, which has the ability to employ knowledge, past experience, and heuristics.
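The position rules enumerated above reduce to a small classifier over word endings. Here is a sketch in Python; the function names are ours, and the rule set is limited to the morphemes actually listed in the text, including the "o"/"u" check that keeps "-ous" out of the second-position "-s" rule.

```python
# NEXUS-style suffix-position tests, as described above.
FIRST_POSITION_ENDINGS = ("ed", "ing", "al", "ern", "ese",
                          "ic", "ive", "ly", "ous")

def is_first_position(word):
    """Modifier-like word: stacks up before a second-position word."""
    return word.lower().endswith(FIRST_POSITION_ENDINGS)

def is_second_position(word):
    """Final "-s" qualifies for second position, unless the "-s" is
    preceded by "o" or "u" (then a first-position rule is in effect)."""
    w = word.lower()
    return w.endswith("s") and not w.endswith(("os", "us"))

for w in ("Linguistic", "Experiments", "Legal", "Problems", "Jurimetrics"):
    print(w, is_first_position(w), is_second_position(w))
```

On the paper's own example, "Jurimetrics" tests as second-position only, which is exactly why the program missed "Jurimetrics Committee".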
Since we are trying to approach a human intellectual activity using a machine, however, the work of a human will probably always make our results look inferior. We are limited to looking at words primarily as physical entities and then relating these physical features to semantic relationships. There is only so much to work with in English, and that much is not 100% reliable, as we have seen. We have attempted to use a simple algorithm, and to add to it, or subtract from it, through trial and error. No doubt these rules can be expanded more than they have been, so the program is open to further additions at any time. The NEXUS II flow chart, Figure 5-1, with a narrative explanation, follows. The first step at (1) is to read a record, a document term set. Step (2) examines the first term in the set and, if there is one, moves through the date test (3). Next, the program makes the first suffixal morpheme test (4). If the examined word does not end in "-s", it is held for pairing (5) and the "-ed" counter is set to zero. This counter is used for all first-position suffixal morpheme words, not just for those that end in "-ed". The counter is used to keep track of the number of first-position words that accumulate before a second-position word appears, so that they can all be printed out in a string; e.g., "BINARY DIGITAL CALCULATING MACHINE". The program then moves to (6), where a next word is looked for. If "no", the word held at (5) is printed as a single term (7) and a return to (2) is made, in turn going to (1), and the next record is begun. If (6) is "yes", the NEXUS I date check is made (8), which results in "yes" back through (7) and then (2) again, or "no", which is governed by Sense Switch 2 (9). Sense Switch 2 can be set to pass an examined word through the tests for "-ing" in first position (10) and in second position (11) in order to prevent coupling of words bearing these suffixes. These tests currently have no value, because "-ing" has been established as a fairly reliable first-position suffixal morpheme and therefore must be allowed to stack up with words bearing "-ing" or any of the other * words (* refers to the NOTE at the center of the page, Figure 5-1). The test has been left in in case it ever appears to be of any future use. Assuming Sense Switch 2 to be in an "on" position, a "no" answer to (8) proceeds directly to (12), where the held first word receives the first-position test for "-ed". If "yes", the "-ed" counter is incremented and the second word is passed through an "-ed" test (14). A "no" at (12) passes the program directly to (14). If (14) is "no", the second word is tested for the presence of any of the other suffixal morphemes qualifying a word for first position (noted as *) (15). If (14) is "yes", the first word is tested for an * ending (16). A "no" at (15) moves the program to (17), where the first and second words are printed, the counter is set to zero, and a flag, 2 (for later identification as a coupled pair), is placed at the end of the first and second words. This flag is externally suppressed. Passing through an indexer (pointing to the last word of a combination) and moving further to (18), there is a Sense Switch 1 that controls overlapping.
This is the feature that assures all terms a left-justified accessibility, by printing terms individually as well as in combinations. With the sense switch off, the program moves to (7) and the last word in the combination is printed alone. With the sense switch on, the program returns to (2) and continues through the record. Backing up now to (15): if a "yes" answer is made at (15), the first word is tested for an * ending at (16). If "no" at (16), the first word receives an "-ed" test (19), and upon receiving another "no" at (19) the first word is printed alone at (7). If "yes" at either (16) or (19), the "-ed" counter (20) (which also counts * words) is incremented, and a test for a next word is encountered at (21). If there is not a next word in the record under examination, each "-ed" (or *) word is printed individually (22) and the counter reset to zero. The program then goes back to (2). If there is a next word in the record, the date test is made (23). If "yes" on (23), the print instruction (22) is applied to all "-ed"/* words, and then back to (2). If "no" on (23), the next word is checked for "-ed" (24) and * (25). Failing both of these tests, all "-ed" and * words are printed in a string (with the last member of the string a non-"-ed"/*) (26). If either of these tests (24), (25) is positive, the program loops back through (2), increments the "-ed" counter, and cycles through (21), etc., again. Let's now go back to the first suffixal morpheme test, the last word "-s" test at (4), and assume a "yes" answer. We then must find out if it is a plural "-s" or part of an * ending, "-ous" (27). If it is "-ous", we then go to (5), and thence through the route just explained above. If it is not "-ous", but a plural "-s", we move to (28) to check for a preceding word. If there is no preceding word, the "-s" word is printed as a single word (29), and back to (2). If there is a preceding word, the date check (30) goes into effect. If positive, the program moves to (29) and the "-s" word is printed as a single term. If "no" on (30), the test is made "Does the preceding word end with '-s'?" (31), which, when "no", moves the program to (32), "Is the preceding word part of a coupled pair?". This is the reason for the flag put at the end of the first and second words at (17). If "yes" at (32), the program shifts to (29), where the "-s" word is printed as a single word. If "no" at (32), the program prints the preceding word with this "-s" word (33). If "yes" at (31), there is a test for a preceding word (34). If "yes" at (34), the date test (35) takes place. If "no" at (34), the program shifts to (29) and prints the "-s" word as a single word, and then goes back to (2). This also occurs when there is a "yes" answer at (35). If "no" at (35), the program goes back to (29), where the "-s" word is printed as a single word. This is the latest version of NEXUS II. The flow chart has superfluities that haven't been removed. Many instructions could be combined to save operations. But the intent has been to get this program operating and reported on. The flaws that are obvious are the combining of various rules that apply to "-ed" endings as well as * endings. These rules are to be treated the same. No doubt, other things could be combined to make a more efficient program. A few suggestions for applying this method should be made. The previous method for auto-indexed terms has been to use them in a "busted-book" or computer-generated coordinate index.
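A drastically simplified rendering of this loop, ours rather than the paper's: the date test, the sense switches, the overlap feature, and the coupled-pair flag are all omitted, keeping only the stacking of first-position words under a closing second-position word.

```python
# Minimal sketch of the NEXUS stacking loop: first-position words
# accumulate until a non-first-position word closes the string.
FIRST = ("ed", "ing", "al", "ern", "ese", "ic", "ive", "ly", "ous")

def precoordinate(terms):
    out, stack = [], []
    for word in terms:
        if word.lower().endswith(FIRST):
            stack.append(word)                    # e.g. "Scientific"
        else:
            out.append(" ".join(stack + [word]))  # close the string
            stack = []
    out.extend(stack)                             # trailing modifiers alone
    return out

print(precoordinate(["Scientific", "Investigation", "Legal", "Problems"]))
# ['Scientific Investigation', 'Legal Problems']
```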
The NEXUS-generated subject headings ... Linguistics, in a general sense, concerns itself with speech sounds, from which a graphemic representation of a language is one step removed. If the day ever comes that a computer can more efficiently accept the spoken word than the written word, linguistics, in a fuller sense, will be found applicable. There will probably be interim improvements in methods for computer input that will predate voice input, however. Such input devices as optical scanners and page readers may make a long-awaited appearance, for practical purposes, before people can talk to a computer in any application other than an experimental one. If there is any doubt of the superiority of the spoken word over the written as an information carrier, one merely has to read a television jingle, or such a phrase as "very interest-ting!" heard on a popular TV program, to realize that the suprasegmental phonemes of stress, pitch, juncture, and even accent in the dialect sense, completely lost in the written word, are very much present and necessary in the spoken word. Getting back to the kind of linguistics with which we have been directly concerned, we have been devising rules for joining together two or more words to make up a phrase. The rules are activated when one or more characters (graphemes) are found at the ends of words (suffixal morphemes) that have an effect on the word's connectability to other words in a sequence (syntax). These rules work every time. There is no decision maker involved allowing a sometimes exemption to a rule. Since the rules are of a general-purpose kind, they are set up to operate on the most frequent conditions. The exceptions to these conditions that occur occasionally are merely tolerated. No attempt has been made to set up ad hoc rules to cover them. It so happens, unfortunately, that the name "Information Retrieval" is one of these exceptions and would not be produced as a combination by the NEXUS program. Although the NEXUS method is far from perfect, even in its present state it is reasonably workable as a subject-heading generator. Its consistency of operation, of course, exceeds human processing; an advantage in some respects and a disadvantage in others, as already pointed out. Research of this type is not intended to produce a panacea that will solve all natural-language-input problems, but is intended to shed a little more light on language manipulation by computer and perhaps take a few tentative steps towards a solution of these problems. Hopefully, this research has been successful to that extent. The SEQS column lists the combinations formed by using sequence rules alone. Here every two terms are connected as they occur in syntactical order. Purpose Operations, Convair division of General Dynamics, San Diego, December 1968.
null
Main paper: : determination, a sequence rule goes into effect which combines terms based on their syntax. A variety of corpora was used to test and develop the NEXUS precoordinator. Data bases consisting of legal information, computer program descriptions, and NASA linear tape system documentation were used. More variety was present in the NASA documents, which made the results of the application of NEXUS to this collection more significant than the others. Also, a fuller battery of rules had been developed by this time, increasing the power of the program. NEXUS is a research project which is concerned with input processing of natural language for information retrieval. The computer program used to do this task consists of linguistic rules that operate on the suffix portions of printed words, and on the order of these words as they appear in a sentence. It must be stressed that NEXUS operates on general rules. There are occurrences in language that are not coverable by this method. Storage by individual terms is effected in conjunction with NEXUS so that nothing is missed because of rule exceptions. Comparison tests have been run using the full NEXUS program, a partial application of the program using sequence rules (SEQS), and human analysis of the same data. Although falling short of human analysis in some respects (except for consistency), the NEXUS approach is more effective than SEQS in producing effective combinations. Of all the various operations of an information retrieval system, the input function is the most important. The decision of what to store to best represent the contents of a document involves predicting, to a degree, how this representation will be looked for by a user. If a user is not conversant with a subject, he must be led into it by familiar, more general routes. If a user is conversant with a subject, and is perhaps a contributor to its literature himself, he will be after specific details which he will request, preferably in the language of his discipline. This dichotomy of users probably exists, to some extent, in any information retrieval situation. It is the intent of such research as NEXUS to help alleviate this paradox by permitting access to information by both general and specific indexing accomplished by machine. The indexing process is discussed in this paper starting at the point where it first becomes necessary. The qualifications for an expert indexer are then enumerated, and the activity of the indexer is examined. Generalized and specific indexing are compared and, finally, a suggestion is made for converting the results of specific indexing into generalized subject headings, which is the purpose of the NEXUS programs. Operational tests have been conducted during the stages of developing this approach, and a variety of data was used to allow testing across different types of information. Comparison tests were made using the full set of NEXUS rules vs. only the sequence rule, SEQS. The intent was to find out how much more effective the program works using suffixal morphemes to combine terms than merely connecting words that follow one another in sequence. The NEXUS-generated subject headings can be used in bibliographic printouts to aid in locating desired information. Combinations of terms prepared in this way avoid the occurrence of incorrect coordinations of terms, which sometimes happens when individual terms are coordinated by the user. An individual is faced with the prospect of maintaining a growing collection of documentation.
The documents in this collection contain information that will answer frequently asked questions. When the collection consists of a few documents, this individual can read them all and be prepared to answer these questions. But, as the number of documents increases, he will be forced to find some method of recording clues to the information found in each document. These clues will have to be stored separately from the documents, on a list or perhaps on file cards, so that the maintainer of the documents can scan them easily. When he is asked a question, instead of trying to remember which document or documents have the answer, he goes to his list of clues, and then selects the documents from the collection. The number assigned to each group of clues is the same as the number on the document. Let us assume that most or even all of the questions asked of this individual are predictable. He is then in the fortunate position of being able to look for specific answers to specific questions as he records the clues from each incoming document. He can then arrange the list of clues in whatever order is most convenient for him. He can arrange the clues by frequency of questions asked, he can classify the clues by hierarchical relationship, by chronology, or by any other convenient method that might best or most quickly answer these stock questions. In some very fortunate cases, a collection of documentation consists of documents that have been specifically designed to answer questions. Each document is constructed with a consistent number of information or data blocks, and the contents of these blocks vary to a predictable degree. The recording of information clues (we may as well now refer to this function as indexing) then becomes a simple task. Collections of technical papers, the most common type of information collections, do not lend themselves to similar handling. One can predict, only to a very small degree, what questions will be asked of such a collection. Therefore, the indexer must select clues from each document based on his speculation of what questions will be asked in the future. It would seem that we are now getting a vague picture of what an indexer looks like. He is able to pick up any highly technical paper, most of which are at the forefront of their disciplines (otherwise why should they be published?), to understand the content of this document so expertly that he can predict the questions that will be asked and then answered by this document, and then to record the clues to its contents in such a manner that they will lead a searcher directly to this segment of recorded knowledge at some unknown future date. This astute person must certainly possess knowledge equivalent to advanced degree level in numerous scientific disciplines, he must have a working knowledge of many of the world's languages, surely he must possess an advanced degree in Library Science (more popularly, Information Science), and a knowledge of practical economics to such an extent that he can subsist comfortably on six to seven thousand a year (the going rate for indexers).
Armed with such a formidable background, this individual would render better service, at least to himself, by doing the research and writing the paper himself. Obviously, the indexing function must be performed by someone less qualified than the individual described above. In a normal library atmosphere, the area usually given responsibility for the important endeavor of maintaining documentation collections, there is a traditional way to process such material. Indexing is performed using such aids as subject-heading lists or thesauri. The documentalist/librarian use of the term thesaurus refers to a dictionary-order list of approved indexing terms, similar to a subject-heading list. The indexer, in the above-mentioned environment, scans a document, tries to figure it out the best he can, and then selects terms from these approved lists that he thinks best describe the document. Sometimes this works, sometimes not. After all, the indexer cannot be expected to be expert in all technical fields. Anyway, the resulting terms that are the clues to the document's content are generalizations of this content. It goes without saying, if a researcher is writing about a new usage of holography in pathological x-ray applications, this document surely has something to do with photographic techniques in medicine. If holography is not an approved term, it will eventually be added to the list when approved. In the meantime, it cannot be used, of course. But the term x-ray has been around long enough to be acceptable, and the searcher can hunt around at a higher (more general) level until he locates the document. The point is, such approved term lists are designed to aid the partially knowledgeable library user (or library worker) who does not know the technical vocabularies of special disciplines well enough to use them intelligently. The use of generalized terms stems also from the attempt, on the part of librarians, to store their reading materials in related clumps within a library. This is understandable in a public library or even in a book collection of a technical library. A user wants a book on computer programming, so he goes to the section of books that contains programming books. However, if he wants to know the latest published research on a particular programming technique, he will find it in document or journal article form. He will know, in his own terminology, what he wants at a considerably more specific level than "computer programming," or, say, than the approved ... To generalize these terms one would have to know that holography is related to photography, pathology is related to medicine, thorax is related to anatomy, and so on. We don't expect that much sophistication from our clerical workers. We really can't afford to pay for that much knowledge. Actually, we don't want them to know that much. It could bias their indexing. This is exactly the way the KARDIAK [1] automated bibliography on artificial heart research was produced. Now that it has been released (almost three years ago) and has received some acclaim throughout the world of medical research (e.g., Harvard Medical School, National Library of Medicine, ...) ... We should add, however, for justice's sake, that if the KARDIAK were on "Information Science" instead of "Cardiac Medicine", the situation would surely be reversed. The thesis, so far, has hopefully convinced the reader that it is possible to index highly technical collections cheaply and accurately without superintelligent, universal men wielding the indexer's pencil.
But we are still faced with the problem of some cross-discipline communication. We cannot query a collection on "Cardiac Medicine", and they cannot query a collection on "Information Science." Now then, how do we go about communicating to one another through the medium of a general-information collection? That is, how do we do this without getting too general and paying the price for this generality? KARDIAK, once again, has given us a clue to how this may be done. As we were feeding KARDIAK the terms selected by our clerk/indexer, some of these terms kept recurring; recurring with such frequency that our computer program could not hold them all in storage. That is, there was not enough room set aside to hold all the document numbers with which these terms were associated. The number of these terms was small, only seven in all, but the number of documents that used these seven terms was extensive. Because of the physical impossibility of storing all these document numbers, these terms were rejected for storage. Oddly enough, perhaps serendipitously enough, if you will, these were the terms that generally described the collection. ... We don't need an approved list of terms. We couldn't have found one, nor known how to use one, if we had had one. It has been said, "Let the documents themselves generate their own terms." [2] One step further: let the terms rejected because of over-frequency be combined as subject headings. These combinations can then be used as general descriptors for the particular collection. The KARDIAK is a closed collection. That is, it was produced for a specific purpose, it served its purpose, and it is now a static piece of documentation history. Of course, it can always be picked up at a later date and be added to; but we don't foresee this happening at the present time. This is all leading up to the fact that there is any amount of manipulation one can perform on a static collection that cannot be done on a growing one. When a collection is constantly being added to, one must figure out a way to maintain control of it as it develops. If the collection is specialized enough, the term rejection factor, mentioned above, will still appear. But, as the collection grows, we certainly must increase our storage capacity for the ratio of document numbers to terms. This ratio probably remains the same, but we can't say so for sure unless we do some research on it. This is an area for further work with which we are not principally concerned in this report. What we would now like to suggest is an interim feature: an aid to indexing and searching that is in between a free, specific, individual key word system and a generalized, controlled subject-heading system. We have already shown an almost algorithmic way of doing indexing. One element is missing, however, and that is syntax. The searcher must presume that the hits he comes up with are of terms arranged in the same syntactical order as his search query. In other words, he is attempting to regenerate sentence order. This is successful much of the time, but then again there are times that it doesn't work. If we had our clerical worker again, we could show her some lines of text and ask her to combine words that bear relationship to one another. If she did a good job of making combinations, some of this missing syntax would be recovered. Let's take a title, for example: "Applications of Linguistic Experiments to the Industrial Community."
" Our clerk would probably make the following combinations:"Applications" (not combined)"Linguistic Experiments" (combined)"Industrial Community" (combined).These term combinations aid in restoring syntax, to some degree, where the free terms might be recalled out of order; for example, something like "Linguistic Community" or "hflustrial Experiments" or "Community Experiments, " all of which are entirely misleading in regard to the actual meaning of the title. Now, for the clerk to do term combining correctly, she uses some simple rules. The most obvious rule is that of sequence. There are other rules used that are not so obvious, even to her, because she may not know she is using them. These rules have to do with linguistics, specifically suffixal morph(~logy. This is to say that the suffixal morphemes of the words in this title are giving her clues about the relationship of one word to another. In other words, the presence of one of a group of particles at the end of a content word in a line of text will give a clue to its relationship to the next content word.Of course, the next word in sequence must be examined for the presence of a final particle, as well. Let's take "linguistic experiments" as an example.The two words are in sequence in the text line, even though this is not an absolute indication that they should be combined. The suffixal morpheme of "linguistic" is "-ic, " an adjectival ending. And since there is no punctuation following "-ic," this indicates the proximity of some next entity to be modified, some noun form coming up. In our example it is "experiments." But, if the suffixal morpheme of "experiments" were "-al" instead of "-s, " and there is still no following punctuation, we would have a clue that we don't yet have a noun form to be modified. We have two adjectives stacking up, and the next following word may be the noun form we have been waiting for. However, the "-s" morpheme is most likely acceptable enough as a noun plural ending, and the combination "linguistic experiments" is a valid one.The application of such rules by our clerical worker is automatic because she does all these operations following the rules that are built into her knowledge of the language. She might possibly be able to explain the process but it is so ob~ous and natural to her that she might not be able to.To do this function by machine ia another matter. We must not only ex- plainThe inspiration for NEXUS came from a particular collection compiled by IS&R on legal literature.The indexing was done by an individual highly trained in law but who had never done any previous indexing. His indexing consistency, to begin with, was slightly erratic in that he occasionally repeated terms in bound form that he had already noted down in free form. However, as he progressed through the collection of 1742 documents his indexing became more stabilized.Each document was given an accession number. The index terms, usually six or seven of them, were listed under the number. The indexer wanted retrieval by date at some future time, so he used the year the document was published as an index term in every case.The output of this project was a KARDIAK-type (or "busted. book", as it is known in IS&R) manual index, which was produced by computer. 
The terms were sorted alphabetically, and the numbers of the documents indexed by each term were listed beneath it in ascending order. Precoordination of these terms would have aided the searcher, in the way previously indicated, as a time-saver and a syntax safeguard. This would have prevented the searcher from erroneously hooking together terms that actually were not related. To begin with, the unsorted sets of index terms were used as input to NEXUS. NEXUS was first put together in a very rudimentary form. The dates were isolated, and the criteria for precoordination were based on (1) sequence, (2) an "-ed" suffixal morpheme in the first position, and (3) an "-s" suffixal morpheme in the second position.

[Figure: NEXUS I flow chart]

If this held word is the last in the set, it is also (7) printed as a single term. But, if there is a next word (16), the next word is examined and (11) tested for being a date. If it is a date, it is printed (12) as a single term. If not, it receives a test for "-ed" (13) as the final morpheme. This morpheme can only be allowed with the first word of a pair (unless, of course, it is the last term in the set, in which case it is printed alone). If "-ed" is present, the held first word (14) is printed by itself, and the "-ed" word is held for first-position pairing. If "-ed" is not present, the held word is printed with this word (15) as a coupled pair. Let's go back to (5), where a word is tested for the presence of an "-s" final morpheme. The word does end with "-s", so we check for a preceding word (6). In this case we will get "yes" for an answer, and the next test is applied. Consider the term sequence "Jurimetrics, Committee, Scientific, Investigation, Legal, Problems." Because of our "-s" rule in second position only, the program isolated "Jurimetrics" instead of making the obvious (to a human) coordination, "Jurimetrics Committee." The rule must be valid for only one position, and the second position is the most common one. Continuing the sequence, "Committee" was precoordinated with "Scientific" because of the sequence rule. This is also an obvious error to a human, because of the suffixal morpheme "-ic", which is part of "Scientific." In analyzing the production so far, "-ic" seems like a good candidate for a first-position suffixal morpheme; so it became one in the next version of the program. The next combination, "Scientific Investigation," turned out successfully because of sequence, but "Investigation Legal" went bad, once again because of a suffixal morpheme cue that wasn't included in the program. This morpheme was the "-al" on the term "Legal", which was later included as a first-position rule. Finally, "Legal Problems" was produced, meeting the requirements of both sequence and "-s" rules. The new first-position rules included the suffixal morphemes "-al", "-ern", "-ese", "-ic", "-ive", "-ly" and "-ous". The remaining rule was one that prevented two words with "-ing" endings from being paired together. As you may have noticed, the first-position rule "-ous" conflicts with the second-position rule "-s". The latter rule looks for a final "-s" only and, when it finds one, qualifies the term for second position. Because of this, the "-s" test must also include a test for a preceding "o" and "u". When these are present, we have a first-position rule in effect; when absent, a second-position rule. One of the NEXUS I rules was eliminated: the rule for stacking "-s" words and attaching the first non-"-s" word as a first-position word. This rule did not produce anything of value, and could possibly have contributed to ambiguity. However, a turnabout version of this rule was adopted.
This rule, if it locates a sequence of first-position suffixal morphemes, will stack them up until it finds a second-position word. It then prints them all in combination. In this way, we have a method for creating strings of terms in precoordination consisting of more than two words. "Three-dimensional Holographic Techniques" is an example of a production of this kind. NEXUS I contained an overlapping feature which we haven't mentioned, but which may have been obvious when we went through the "Jurimetrics, Committee, Scientific, and so on" example. The purpose of overlapping was to left-justify each term, whether combined or left alone, so that it could be stored alphabetically in an IS&R system. In this way, no term is hidden from the search by reason of being forever concealed in second position in storage. We did install a jump switch in NEXUS II, so that we can eliminate overlapping. The abstract portion of each description was used to supply NEXUS II with material to work with. The abstracts were first processed through an auto-indexer to produce lists of terms. These lists were next presented to NEXUS II and then printed out for analysis after the term-binding operations were performed. NEXUS II was run two ways: with and without the overlapping feature. The program worked well with this material, with one exception. The suffixal morpheme carried by the third person singular, present tense verb, "-s", has the same physical appearance as the plural morpheme, "-s". Since the computer can't tell the difference, there occurred some bound terms that were somewhat less than rife with meaning; for example, "Program Calculates", "Computes", "Program Generates", "Program Uses". Although these odd combinations could be avoided by employing a different writing style when producing the abstracts, we are not concerned with preconditioning a corpus, but rather with handling it in whatever form we happen to find it. The above combinations can certainly be tolerated, however, since they have no effect on the other precoordinations. As an exercise in demonstrating the difficulties encountered in handling natural language for computerized information retrieval, the NEXUS experiments have been very successful. The intent has been to expand upon more or less standard automatic indexing techniques by reestablishing a connection between terms that, when combined, aid the searcher in retrieving a document reference from storage. We have named this process precoordination because of its relationship to coordinate index systems. In a coordinate index the searcher combines terms, looking for a common accession number, thereby indicating their occurrence together in a document description. NEXUS has an application in precoordinating these terms, when applicable, to save time for the searcher, to ensure a correct coordination, and to prevent coordinating terms that give a misleading implication. Precoordinated terms are then, in effect, equivalent to subject headings insofar as they partially express a concept in one or more words in a syntactic construction. The comparison of NEXUS, and its several linguistically-based rules, with SEQS, and its single rule for sequential linking, has shown that NEXUS is the more efficient of the two approaches. Neither, of course, can compare with human decision power, which has the ability to employ knowledge, past experience, and heuristics.
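The rule set accumulated to this point -- first-position endings "-al", "-ern", "-ese", "-ic", "-ive", "-ly", "-ous" (along with "-ed" and "-ing"), and the plain "-s" second-position test with its "-ous" exception -- reduces to a small position classifier. The sketch below is one possible Python rendering; the function name and the "either" category for unmarked words are ours, not the report's.

```python
FIRST_POSITION_ENDINGS = ("al", "ern", "ese", "ic", "ive", "ly", "ous", "ed", "ing")

def position(word):
    """Classify a term for first or second position by its suffixal morpheme."""
    w = word.lower()
    if w.endswith("ous"):
        return "first"    # "-ous" is adjectival: it overrides the plain "-s" test
    if w.endswith("s"):
        # plural noun -- or a third-person verb, the ambiguity noted above
        return "second"
    if w.endswith(FIRST_POSITION_ENDINGS):
        return "first"
    return "either"

for w in ["Dangerous", "Problems", "Legal", "Committee", "Generates"]:
    print(w, "->", position(w))
# Dangerous -> first, Problems -> second, Legal -> first,
# Committee -> either, Generates -> second (the "Program Generates" case)
```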
Since we are trying to approach a human intellectual activity using a machine, however, the work of a human will probably always make our results look inferior. We are limited to looking at words primarily as physical entities and then relating these physical features to semantic relationships. There is only so much to work with in English, and that much is not 100% reliable, as we have seen. We have attempted to use a simple algorithm, and to add to it, or subtract from it, through trial and error. No doubt these rules can be expanded more than they have been, so the program is open to further additions at any time. The NEXUS II flow chart, Figure 5-1, with a narrative explanation, follows. The first step at (1) is to read a record, a document term set. Step (2) examines the first term in the set and, if there is one, moves through the date test (3), which is a holdover from the legal data collection. Next, the program makes the first suffixal morpheme test (4). If the examined word does not end in "-s", it is held for pairing (5) and the "-ed" counter is set to zero. This counter is used for all first-position suffixal morpheme words, not just for those that end in "-ed". The counter is used to keep track of the number of first-position words that accumulate before a second-position word appears, so that they can all be printed out in a string; e.g., "BINARY DIGITAL CALCULATING MACHINE". The program then moves to (6), where a next word is looked for. If "no", the word held at (5) is printed as a single term (7) and a return to (2) is made, in turn going to (1), and the next record is begun. If (6) is "yes", the NEXUS I date check is made (8), which results in "yes" back through (7) and then (2) again, or "no", which is governed by Sense Switch 2 (9). Sense Switch 2 can be set to pass an examined word through the tests for "-ing" in first position (10) and in second position (11) in order to prevent coupling of words bearing these suffixes. These tests currently have no value because "-ing" has been established as a fairly reliable first-position suffixal morpheme and therefore must be allowed to stack up with words bearing "-ing" or any of the other * words (* refers to the note at the center of the page in Figure 5-1). The test has been left in in case it ever appears to be of any future use. Assuming Sense Switch 2 to be in an "on" position, a "no" answer to (8) proceeds directly to (12), where the held first word receives the first-position test for "-ed". If "yes", the "-ed" counter is incremented and the second word is passed through an "-ed" test (14). A "no" at (12) passes the program directly to (14). If (14) is "no", the second word is tested for the presence of any of the other suffixal morphemes qualifying a word for first position (noted as *) (15). If (14) is "yes", the first word is tested for an * ending (16). A "no" at (15) moves the program to (17), where the first and second words are printed, the counter is set to zero, and a flag, 2 (for later identification as a coupled pair), is placed at the end of the first and second words. This flag is externally suppressed. Passing through an indexer (pointing to the last word of a combination) and moving further to (18), there is a Sense Switch 1 that controls overlapping.
This is the feature that assures all terms a left-justified accessibility, by printing terms individually as well as in combinations. With the sense switch off, the program moves to (7) and the last word in the combination is printed alone. With the sense switch on, the program returns to (2) and continues through the record. Backing up now to (15): if a "yes" answer is made at (15), the first word is tested for an * ending at (16). If "no" at (16), the first word receives an "-ed" test (19), and upon receiving another "no" at (19) the first word is printed alone at (7). If "yes" at either (16) or (19), the "-ed" counter (20) (which also counts * words) is incremented, and a test for a next word is encountered at (21). If there is not a next word in the record under examination, each "-ed" (or *) word is printed individually (22) and the counter is reset to zero. The program then goes back to (2). If there is a next word in the record, the date test is made (23). If "yes" on (23), the print instruction (22) is applied to all "-ed"/* words, and then back to (2). If "no" on (23), the next word is checked for "-ed" (24) and * (25). Failing both of these tests, all "-ed" and * words are printed in a string (with the last member of the string a non-"-ed"/* word) (26). If either of these tests (24), (25) is positive, the program loops back through (2), increments the "-ed" counter, and cycles through (21), etc., again. Let's now go back to the first suffixal morpheme test, the last-word "-s" test at (4), and assume a "yes" answer. We then must find out if it is a plural "-s", or part of an * ending, "-ous" (27). If it is "-ous", we then go to (5), and thence through the route just explained above. If it is not "-ous", but a plural "-s", we move to (28) to check for a preceding word. If there is no preceding word, the "-s" word is printed as a single word (29), and back to (2). If there is a preceding word, the date check (30) goes into effect. If positive, the program moves to (29) and the "-s" word is printed as a single term. If "no" on (30), the test is made "Does the preceding word end with '-s'?" (31), which, when "no", moves the program to (32), "Is the preceding word part of a coupled pair?". This is the reason for the flag put at the end of the 1st and 2nd words at (17). If "yes" at (32), the program shifts to (29), where the "-s" word is printed as a single word. If "no" at (32), the program prints the preceding word with this "-s" word (33). If "yes" at (31), there is a test for a preceding word (34). If "yes" at (34), the date test (35) takes place. If "no" at (34), the program shifts to (29), prints the "-s" word as a single word, and then goes back to (2). This also occurs when there is a "yes" answer at (35). If "no" at (35), the program goes back to (29), where the "-s" word is printed as a single word. This is the latest version of NEXUS II. The flow chart has superfluities that haven't been removed. Many instructions could be combined to save operations. But the intent has been to get this program operating and reported on. One obvious flaw is the separate handling of the various rules that apply to "-ed" endings as well as to * endings; these rules are to be treated the same. No doubt other things could be combined to make a more efficient program. A few suggestions for applying this method should be made. The previous method for auto-indexed terms has been to use them in a "busted-book" or computer-generated coordinate index.
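As a recap of the Figure 5-1 narrative just given, here is one possible condensation of its main path in Python. It keeps only the core behavior -- dates print alone, first-position words stack until a second-position word closes the string -- and omits the sense switches, the coupled-pair flag, and the overlapping feature, so it is a sketch of the logic rather than the program itself.

```python
FIRST = ("al", "ern", "ese", "ic", "ive", "ly", "ous", "ed", "ing")

def is_date(term):
    return term.isdigit() and len(term) == 4   # assumed: years used as index terms

def first_position(term):
    w = term.lower()
    if w.endswith("ous"):
        return True          # adjectival "-ous", not a plural "-s"
    if w.endswith("s"):
        return False         # plain "-s": a second-position (plural) word
    return w.endswith(FIRST)

def precoordinate(record):
    out, held = [], []
    for term in record:
        if is_date(term):
            out.extend(held); held = []          # dates always print alone
            out.append(term)
        elif first_position(term):
            held.append(term)                    # stack first-position words
        elif held:
            out.append(" ".join(held + [term]))  # second-position word closes the string
            held = []
        else:
            out.append(term)                     # unqualified word prints alone
    out.extend(held)                             # leftover stack prints individually
    return out

print(precoordinate(["Three-dimensional", "Holographic", "Techniques", "1968"]))
# ['Three-dimensional Holographic Techniques', '1968']
```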
The NEXUS-generated subject headings can be used in the same manner as the coordinate-index terms just described. Linguistics, in a general sense, concerns itself with speech sounds, from which a graphemic representation of a language is one step removed. If the day ever comes that a computer can more efficiently accept the spoken word than the written word, linguistics, in a fuller sense, will be found applicable. There will probably be interim improvements in methods for computer input that will predate voice input, however. Such input devices as optical scanners and page readers may make a long-awaited appearance, for practical purposes, before people can talk to a computer in any application other than an experimental one. If there is any doubt of the superiority of the spoken word over the written as an information carrier, one merely has to read a television jingle, or such a phrase as "very interest-ting!" heard on a popular TV program, to realize that the suprasegmental phonemes of stress, pitch, juncture, and even accent in the dialect sense, completely lost in the written word, are very much present and necessary in the spoken word. Getting back to the kind of linguistics with which we have been directly concerned, we have been devising rules for joining together two or more words to make up a phrase. The rules are activated when one or more characters (graphemes) are found at the ends of words (suffixal morphemes) that have an effect on the word's connectability to other words in a sequence (syntax). These rules work every time. There is no decision maker involved allowing a sometimes exemption to a rule. Since the rules are of a general-purpose kind, they are set up to operate on the most frequent conditions. The exceptions to these conditions that occur occasionally are merely tolerated. No attempt has been made to set up ad hoc rules to cover them. It so happens, unfortunately, that the name "Information Retrieval" is one of these exceptions and would not be produced as a combination by the NEXUS program. Although the NEXUS method is far from perfect, even in its present state it is reasonably workable as a subject-heading generator. Its consistency of operation, of course, exceeds human processing; an advantage in some respects and a disadvantage in others, as already pointed out. Research of this type is not intended to produce a panacea that will solve all natural-language-input problems, but is intended to shed a little more light on language manipulation by computer and perhaps take a few tentative steps towards a solution of these problems. Hopefully, this research has been successful to that extent. The SEQS column lists the combinations formed by using sequence rules alone; here every two terms are connected as they occur in syntactical order. …General-Purpose Operations, Convair division of General Dynamics, San Diego, December 1968. Appendix:
null
null
null
null
{ "paperhash": [ "macdonald|conversion_of_large-scale_is/r_systems_for_general-purpose_operation", "newcomb|technique_for_the_automatic_generation_of_bibliographies_(a_biomedical_information_application)", "sanford|problems_in_the_application_of_uniterm_coordinate_indexing" ], "title": [ "CONVERSION OF LARGE-SCALE IS/R SYSTEMS FOR GENERAL-PURPOSE OPERATION", "TECHNIQUE FOR THE AUTOMATIC GENERATION OF BIBLIOGRAPHIES (A BIOMEDICAL INFORMATION APPLICATION)", "Problems in the Application of Uniterm Coordinate Indexing" ], "abstract": [ "Abstract : The Technical Data Systems Group has constructed a computer-based, general-information storage and retrieval system (IS+R) that has been in operation for several years over a variety of data bases. The general technique employs a repertory of generalized programs that are specification controlled. This feature allows a selection to be made from a library of programs allowing individual customizing of a total system for handling a particular data base. The intent in this report is to examine the philosophy of obtaining an existing mechanized data base and translating or converting it to a format which will enable its being placed in the Convair IS+R environment and then to subsequently employ the available flexibilities of retrospective search and total data base analysis.", "Abstract : The bibliography generation technique has proved to be an important tool for the rapid compilation of large numbers of searchable literature citations. The KARDIAK/MEDITERM bibliography demonstrates that it is unnecessary to employ a specialist in the field to index citations using this system. The keypuncher/indexer is able to function efficiently with minimal instructions. Initial input on paper tape with subsequent conversion to punched cards alleviates the problem of error correction and verification while providing a hard copy record of all input documents. Standard information storage and retrieval computer programs were used for this task, with the exception of the 'Yellow Dog' FORTRAN language program for the generation of header cards. S-C 4020 formats were used which provided the most convenient and easy-to-read layout for the dual dictionary as well as the citation and author listings. (Author)", "T' H E L I B R A R Y of the National Security Agency has completed the organizational and experimental work necessary for the creation of a large-scale Uniterm coordinate index. Production is now on a routine basis. Over 70,000 documents have been cataloged. This report is written at this time to make our experience available to other librarians who may be considering the use of this system. We wish we could answer all the questions that have been raised about coordinate indexing in the literature. Many earnest librarians with very considerable professional experience have been deeply troubled by its potential pitfalls. Perhaps we have been very lucky. Perhaps the pitfalls will vanish in any other large-scale test. We do not know. We can report only that our system works. We do not know of any other means to gain such tight control of large masses of documents so economically and rapidly. There have been problems, however; and some of them were formidable. Our version of the Uniterm system of coordinate indexing is certainly not the last word in desirable development. It contains some of the whimsical invention and much of the rough-and-ready crudities of Henry Ford's Model \" T \" automobile. But, like the Model \"T,\" it runs. 
As will be apparent in the account that follows, we have had to introduce many adaptations and changes of the system as originally outlined in the literature. Since we were pioneering, matching our wits against the new system day by day has been challenging. We hope that in solving our own problems we have made a contribution to the developing science of documentation. For any reader unfamiliar with the Uniterm system of coordinate indexing, the scheme is conceived as follows: The ideas presented in the title of a document, plus additional ideas embodied in the text if the title is not sufficiently descriptive, are broken up into separate words, dubbed \"uniterms.\" The document is assigned an arbitrary number. A 5\" x 8\" master index card is prepared for each Uniterm, and the number assigned to a document is registered on all of the Uniterm cards that describe that document. Thus, the Uniterm card bearing the heading CATALOG will have inscribed on it the document number of every report having anything of moment to do with catalogs or cataloging. A document that is a catalog of spare parts for automobile windshield wipers will have its number recorded on each of the following cards: CATALOG, AUTOMOBILE, WINDSHIELD, WIPER, PARTS. To find this document, the above cards are compared. Wherever the identical number appears on two or more cards, that number represents a document wherein the ideas intersect, i.e., coordinate." ], "authors": [ { "name": [ "W. F. Macdonald", "G. E. Sullivan" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. A. Newcomb", "R. A. Benson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Sanford", "F. Thériault" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null ], "s2_corpus_id": [ "60956153", "60458662", "62356887" ], "intents": [ [], [], [] ], "isInfluential": [ false, false, false ] }
null
665
0.003008
null
null
null
null
null
null
null
null
abdf687e38b748cda3bc401beded64d9203747c7
42477596
null
On Semantics of Some Verbal Categories in {E}nglish
An attempt is made at such a description of the semantics of some verbal categories as would fit in the functional type of generative description of language proposed by Sgall. In most analyses of the relationship
{ "name": [ "Hajicova, Eva" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
0
null
null
null
null
null
between the morphemic verbal forms and their temporal meanings that have been made in this sphere up to now, the schemas employed proceed in two directions, that of temp- … The status of these categories will be discussed and illustrated with English examples, and some remarks will be added concerning the possible application of our investigations (when confronted with Czech) to work on the problems of machine translation.
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
17132198bf336988fd434103bcb2418b5d25445c
14417375
null
Syntactic Patterns in a Sample of Technical {E}nglish
Importance of the Concept of Homogeneity A fundamental assumption of statistical linguistics is that there are differences worthy of note in the frequency of various units in certain texts. At the same time, there are differences in frequencies which would not be considered important.
{ "name": [ "Streeter, Victor J." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 44
1969-09-01
17
0
null
In examining the raw results it may be clear at once that there is a meaningful difference among the counts or scores. If samples of 100 sentences were taken at random from each of two texts, and the mean lengths for the two samples were 20 words and 40 words, no one would hesitate to conclude that one text revealed a "significantly" greater sentence length than the other. But if the figures were closer, say 27 and 33, more exact methods are needed. It is a law of nature that a sample taken from a population will not always yield exactly the statistics of the population, and that on occasion even a large discrepancy will be found. The extent to which sample values may be expected to vary from population values through chance alone is a subject of mathematical statistics, as is the extent to which two or more sample values from the same population will differ. There is considerable data that demonstrates overall similarities in the frequencies of various units between samples from the same writer, from different writers, and even from different languages.¹ The problem for statistical linguistics and stylistics is the ordering of degrees of similarity into groups according to some notion of homogeneity. If the sample values differ no more than could reasonably be attributed to chance, we see no reason why the populations from which the samples were taken could not be called one homogeneous population. Whether text samples pass a statistical test for homogeneity depends on the nature of the text, the chosen significance level, and the sample size. It is possible to imagine a perfectly uniform text, for example, one composed of nothing more than repetitions of the same identical sentence. In this case, a statistical test will reveal this homogeneity for any significance level or sample size.

¹See, for example, Herdan, The Advanced Theory of Language as Choice and Chance, pp. 1-27, and M. Renský, "The Noun-Verb Quotient in English and Czech," Philologia Pragensia, VIII (1965), pp. 289-302.

This description has many parts. Ideas flourish. Progress gives men hope. Linguists study language. We consider this false. This was realized by others. There are few days left. There seems to be no way to do this. It is not easy to estimate this quantity. It seems futile to try this.

2. These are works that embody in the medium of language the esthetic values of the individual or the community. Bach, page 1. A main be clause followed by a subordinate transitive clause: M34.
3. The particular way of stating a theory of a language with which we shall be concerned has taken inspiration from modern logic. Bach, page 9. A main transitive clause with an embedded be clause: M4(3).
4. It is doubtful whether there are any natural languages conforming to any of these types. Bach, page 105. A main it clause followed by subordinate there and transitive clauses: MEC4.
5. We set up terminally discontinuous constructions as continuous ones and then separate them. Bach, page 120. Two main transitive clauses: M4M4.

The coding of the original texts was carried out "manually." We find that this type has its lowest frequencies in chapters 4 and 7. There is, then, no strong correlation between sentence types on the basis that they both contain passive clauses. The results, given in Table 5, clearly indicate the authors' different preferences, but at the same time there are marked similarities in their frequency of usage of some types, for example the M34 and M43 types.
We must remember that Bach's most common sentence types were shown to be strongly non-homogeneous, and thus the data in Table 5 cannot be regarded as highly predictive of the performance to be found in other Bach samples. Because of this great internal inconsistency a chi-square test was not carried out on the data in Table 5. This study has produced, we believe, much useful and interesting data which leads to several major conclusions. For the index of contextuality a higher value means less uniformity for the feature, while a higher value for the rejection size means more uniformity. Table 6 gives values for these two indexes for a number of features as a basis for determining the relative similarity of Bach and Pike. As can be seen, the two writers agree relatively closely on the ratio of main to subordinate clauses of the passive type, but differ greatly on this same ratio for the there type. We believe that the categorization of clause and sentence types used here is reasonable and simple, and that this sort of categorization would be readily applicable to other languages. In addition, statistical measures such as the index of contextuality and rejection size appear to be quite useful as indicators of the consistency of linguistic performance.
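Since tests of this kind recur throughout the study, a brief sketch of the machinery may be useful. The following Python fragment runs a chi-square test of homogeneity on clause-type counts from two samples; the counts and the three-type layout are invented for illustration and are not the Bach or Pike data.

```python
from scipy.stats import chi2_contingency

# rows: the two samples; columns: counts of clause types (e.g., M34, M43, passive)
table = [
    [40, 25, 35],   # sample A
    [30, 28, 42],   # sample B
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A large p suggests the samples could come from one homogeneous population;
# a p below the chosen significance level suggests they do not.
```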
null
null
null
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
745adb4e0d451a8de7e4b282fbef3c1164b0bb90
62354584
null
Discourse Referents
Consider a device designed to read a text in some natural language, interpret it, and store the content in some manner, say, for the purpose of being able to answer questions about it. To accomplish this task, the machine will have to fulfill at least the following basic requirement. It has to be able to build a file that consists of records of all the individuals, that is, events, objects, etc., mentioned in the text, and, for each individual, record whatever is said about it. Of course, for the time being at least, it seems that such a text interpreter is not a practical idea, but this should not discourage us from studying in abstract what kind of capabilities the machine would have to possess, provided that our study provides us with some insight into natural language in general. In this paper I intend to discuss one particular feature a text interpreter must have: that it must be able to recognize when a novel individual is mentioned in the input text and to store it along with its characterization for future reference. Of course, in some cases the problem is trivial. Suppose there appears in some sentence a proper name that has not been mentioned previously. This means that a new
{ "name": [ "Karttunen, Lauri" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 70
1969-09-01
2
3
null
In this paper I intend to discuss one particular feature a text interpreter must have: that it must be able to recognize when a novel individual is mentioned in the input text and to store it along with its characterization for future reference. Of course, in some cases the problem is trivial. Suppose there appears in some sentence a proper name that has not been mentioned previously. This means that a new person is being introduced in the text and appropriate action must be taken to record the name of the person and what is said about him. Otherwise, the proper name is used to refer to an individual already mentioned, and the machine has to locate his file in the memory with the help of the name. This problem of identification will be more difficult where a definite description -- a definite noun phrase such as the man Bill saw yesterday -- is used, since there will, in general, not be any simple look-up procedure for associating the description with the right individual. With definite noun phrases there is also the problem that it is not possible to tell just from the noun phrase itself whether or not it is supposed to refer to an individual at all. For example, it is clear that the phrase the best student is not used referentially in a sentence such as Bill is the best student. There are thus two problems with ordinary definite noun phrases: (i) Is it a definite description at all? and (ii) How to match a definite description with an individual already mentioned in the text? The first question is clearly of the kind linguists can be expected to solve, but it will not be discussed here. The only aspect of definite descriptions that interests us here is the fact that they carry an existential presupposition: to call something "the ..." presupposes that there be some such thing. While it is in general a straightforward matter to decide whether or not a proper name in a text introduces a new individual, indefinite noun phrases pose a more difficult problem. To put the question in a general way: Given an indefinite noun phrase, under what circumstances is there supposed to be an individual described by this noun phrase? This need not be understood as some sort of ontological question subject to philosophical speculation; in this paper I intend to approach it from a purely linguistic point of view. It is in just those cases where the appearance of an indefinite NP implies the existence of some specific entity that our hypothetical text interpreter should record the appearance of a new individual. What I have in mind can perhaps be made clear with the help of the following examples. It is a well-known fact about language that indefinite noun phrases cannot be interpreted as referring expressions when they appear in the predicate nominal position. (1) Bill is not a linguist. (1) is obviously a statement about one individual. It is not a statement about some linguist and Bill. It is also well-known that in generic sentences singular indefinite noun phrases play a peculiar role. (2) A lion is a mighty hunter. In its generic sense, (2) is a statement about lions in general, not about any lion in particular, unless we want to postulate a hypothetical entity 'the typical lion' of whom all generic statements about lions are predicated. It is clear that indefinite noun phrases have a very special role in (1) and (2), and it is not difficult to decide that they could not introduce any new individuals into a discourse.
It is out of the question that a text in which (1) appears would contain a later reference to 'the linguist which Bill is not', or that (2), in its generic sense, would justify a later reference to 'the lion who is a mighty hunter'. But consider the following example. (3a) may be followed by any of the sentences (3b-d) that give us more information about a specific car first mentioned in (3a). On the other hand, (4a) cannot be followed by any of the alternatives. The above examples show that just in case of (3a), the text interpreter has to recognize that the appearance of the indefinite NP a car implies the existence of a specific car that can be talked about again by referring to it with a pronoun or a definite noun phrase. But no car is introduced by (4a). The alternative continuations (4b-d) are inappropriate, since they presuppose the existence of something that is not there. To show that this is a linguistic and not an ontological fact, one only has to point out that examples (5) and (6) behave just like (3) and (4). (5) Bill saw a unicorn. The unicorn had a gold mane. (6) Bill didn't see a unicorn. *The unicorn had a gold mane. Let us say that the appearance of an indefinite noun phrase establishes a discourse referent just in case it justifies the occurrence of a coreferential pronoun or a definite noun phrase later in the text. In this paper we will try to find out under what circumstances discourse referents are established. We maintain that the problem of coreference within a discourse is a linguistic problem and can be studied independently of any general theory of extra-linguistic reference. The present study was inspired by the notion of 'referential indices' in transformational grammar. Since Noam Chomsky (1965), it has generally been assumed that the base component of a transformational grammar associates with each noun phrase a referential index, say, some integer. The purpose of Chomsky's proposal was not so much to account for the meaning of sentences, but to augment the notion of noun phrase identity. It seemed that the notion of 'referential identity' was needed, in addition to the two other types of identity, 'structural identity' and 'morphemic identity', for the structural descriptions of certain transformations. According to the standard theory, referential indices are merely formal indicators of coreference with no further semantic significance. They are not meant to imply the existence of discourse referents in our sense. This notion of coreferentiality has played an important role in recent syntactic arguments. It led to the study of pronoun-antecedent relations, largely ignored by traditional grammarians, which has revealed intricate constraints that have great theoretical importance. What we are studying in this paper can be looked at as further constraints on coreferentiality that extend beyond the sentence level.

1. Case studies

1.1 A note on specificity

In the following we are going to examine, case by case, certain aspects of sentence structure that play a role in determining whether an indefinite NP establishes a discourse referent. In the examples that are discussed, there is a possible ambiguity that has to be mentioned in advance, although it will not be discussed until later. In general, indefinite noun phrases have both a specific and a non-specific interpretation.
Example (7) can be interpreted to mean either (8a) or (8b). (7) Bill didn't see a misprint. (8) (a) 'There is a misprint which Bill didn't see' (b) 'Bill saw no misprints' If (7) is understood in the sense of (8a), we say that the indefinite NP a misprint is interpreted specifically. (8b) represents the non-specific interpretation. Of course, not all indefinite noun phrases are ambiguous in this way. We could disambiguate (7) by adding the word certain ("a certain misprint") or an appositive relative clause ("a misprint, which I had made on purpose"). These changes would allow only the specific interpretation (8a). The addition of the word single ("a single misprint") would allow only the sense (8b). There are also cases where the verbs involved partially disambiguate the sentence by making one interpretation far more plausible to the reader than the other. For example, the NP a piano in (9a) is naturally understood non-specifically, that is, as meaning 'any piano', while the same noun phrase in (9b) suggests the interpretation 'a certain piano'. (9) (a) John tried to find a piano.

1.21 The following examples are anomalous in the intended sense, although there is no negation involved. (10) (a) You must write a letter to your parents. *They are expecting the letter. (b) Bill can make a kite. *The kite has a long string. Traditionally, sentences with a modal auxiliary have been considered as simple sentences. However, it has been argued convincingly by Ross (1967a) and others that modals should be analyzed as main verbs of higher sentences. Therefore, let us assume that, even in the above examples, the indefinite NPs originate in a complement clause, just as they do in (11). … represent a yet untrue proposition at the time specified by the tense and time adverbials in the main clause. The present problem is, in fact, another point in favor of the view that modals originate in a higher sentence, because it enables us to acknowledge the similarity of the anomaly in (10) and (11). The conclusion is that non-specific indefinites do not establish discourse referents when they appear in a complement of a modal verb. There is a class of verbs that, if they are not negated, imply the truth of the proposition represented by their complement sentence. There are also verbs that inherently have a negative implication. In English, this type includes verbs such as forget, fail, and neglect. Consider the following anomalous discourses. (14) (a) John forgot to write a term paper. *He cannot show it to the teacher. (b) John failed to find an answer. *It was wrong. These implicative verbs have the very interesting property that, if there is double negation, the implication is positive, and an indefinite NP does, after all, establish a referent. (14) (a) John didn't fail to find an answer. The answer was even right. (b) John didn't remember not to bring an umbrella, although we had no room for it. This property distinguishes clearly verbs with negative implication, such as forget, from the modal verbs discussed above, although both types deny the truth of the proposition represented by the complement sentence. There is a group of verbs, called factive verbs (Kiparsky 1968), that presuppose the truth of the proposition represented by the complement. For example, know, realize, and regret are factive.
It is not surprising to find out that an indefinite NP does establish a referent in a complement of a factive verb, of course, provided that the complement itself is affirmative. (15) John knew that Mary had a car, but he had never seen it. In contrast to the implicative verbs discussed above, negation in the main sentence has no effect at all. … (b) If Mary has a car, she will take me to work in it. I can drive the car too. (c) When Mary has a car, she can take me to work in it. I can drive the car too. … is based on the counterfactual or dubious premise that Mary has a car. The difference between the first and the second pair is that in (28c-d) … (36) admits both the specific and non-specific interpretation of a girl. The reason for the anomaly of the non-specific interpretation in (35) and its acceptability here is apparently that, in (36), every successive sentence continues to have a similar quantifier-like term: "at every convention", "always", "usually". There is also nothing wrong with the non-specific interpretation of the NP a book in (37). (37) Every time Bill comes here, he picks up a book and wants to borrow it. I never let him take the book. We have to say that, although a non-specific indefinite that falls into the scope of a quantifier fails to establish a permanent discourse referent, there may be a short term referent within the scope of the quantifier, and its life-span may be extended by flagging every successive sentence with a quantifier of the same type.⁴ Let us now return to the problem of specificity that was first introduced in §1.1. As we already pointed out, many of the examples above that were judged anomalous in the intended sense can also be given another interpretation that makes them perfectly acceptable. … is, of course, something very similar to the existential quantifier in predicate calculus. (Bach calls it 'the some operator'.) Base structures resemble formulas in symbolic logic. This approach to syntax has now become known as 'generative semantics'. It is easy to see that in the framework of generative semantics there is no justification nor need for a feature such as [+specific]. The ambiguities in question are naturally accounted for by the fact that the quantifier binding the variable that underlies some indefinite noun phrase may be placed in different positions in the base structure. Specificity thus becomes a matter of the scope of quantifiers. As far as the problems discussed in this paper are relevant to choosing a theoretical framework, they seem to argue in favor of adopting the Bach-McCawley proposals. It is rather difficult to see how one could achieve an adequate description of the facts in the classical theory. For example, consider the following case. Both (39a) and (39b) are ambiguous with respect to specificity. (39) (a) Bill intends to visit a museum. (b) Bill visits a museum every day. In the 'specific' sense, both examples establish a discourse referent. It would make perfect sense to continue with a description of 'the museum Bill intends to visit' or 'the museum Bill visits every day'. In the 'non-specific' sense, there is no such museum at all. So far so good; we can say that the NP a museum can be [+specific]. But what about example (40)? (40) Bill intends to visit a museum every day. It is clear that (40) is ambiguous in many ways. For example, the quantified time adverb every day could be assigned either to the complement or to the main clause; let us now consider only the former case.
The remaining ambiguities should be attributable to the indefinite NP a museum; in fact, we should have a two-way ambiguity between the specific and non-specific interpretation. But example (40) is still ambiguous in more than two ways. It could be interpreted to mean (41a), (41b), or (41c). (41) (a) 'There is a certain museum that Bill intends to visit every day.' (b) 'Bill intends that there be some museum that he visits every day.' (c) 'Bill intends to do a museum visit every day.' It is easy to see why this happens. What the feature [±specific] accomplishes in case of (39a) is that it clarifies the relation between the indefinite NP a museum and the verb intend in the main sentence: Is Bill's intention about some particular museum or not? In (39b), we employed the same device to characterize the relation between … Another advantage of generative semantics is that there is an explanation ready for the fact that (40) establishes a discourse referent under only one of the three interpretations we have considered, namely (42a). The rule is that an indefinite NP establishes a permanent referent just in case the proposition to which the binding quantifier is attached is assumed (asserted, implied, or presupposed) to be true, provided that the quantifier is not itself in the scope of some higher quantifier.⁷ The first part of the rule accounts for the difference between (42a) and (42b-c); the second part is needed to explain why (39b) establishes a permanent referent only under one of the two possible interpretations. Notice that, in (42a), the quantifier underlying the NP a museum is attached to the main proposition. Since the main proposition is asserted to be true and there are no higher quantifiers involved, (42a) establishes a referent corresponding to the NP a museum. Now, consider the other two interpretations of (40). The verb intend is one of the modal verbs discussed in (1.21). We know that the complement of a modal verb taken by itself is not implied or presupposed to be true, as in (43). (43) Mary may want to marry a Swede. Highly schematically, the underlying structure of (43) is something like (44). … If the speaker continues the discourse with (46), the preceding sentence (43) can only be understood in the sense of (45a). (46) She introduced him to her mother yesterday. However, the following continuation, where the pronoun it stands for S2, permits both (45a) and (45b). (47) Suppose that it is true, then she will certainly introduce him to her mother. As a final example, after some thought it should be obvious that a discourse consisting of (43) and (48) … (48) Suppose that it is true and that she does it, then she will certainly introduce him to her mother. Although the argument against the traditional feature [±specific] should leave no doubt about its uselessness in discussing anything but the simplest kind of scope ambiguity, it does not necessarily mean that the familiar terms 'specific' and 'non-specific' should be rejected. They have proved quite useful and no harm is done, provided that they are understood in a relative sense and not as denoting some absolute property inherent in indefinite noun phrases. For example, consider interpretation (45b) of (43), which assigns the quantifier to S2. One might want to say that, with respect to the verb want, the indefinite NP a Swede is specific. On the other hand, if the quantifier is attached to S3, as in (45c), a Swede could be called non-specific with respect to want.
In general, let us call an indefinite NP specific with respect to a given verb (or quantifier, or negation) if the latter is in the scope of the quantifier associated with the NP. It is non-specific in case the verb commands the quantifier. This kind of definition seems consistent with the way these terms have been used in recent literature, and there is no reason to stop using them as long as the relative nature of specificity is understood.
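The establishment rules developed in this paper are explicit enough to sketch in code. The toy Python fragment below applies the simple-sentence version of the rule -- an indefinite NP establishes a referent just in case its clause is an affirmative assertion, with negation, modals, and negative-implicative verbs blocking it. The clause representation, word lists, and function name are invented for illustration; double negation over implicatives and the quantifier-scope condition are deliberately omitted, so this is a sketch, not a full treatment.

```python
MODALS = {"must", "can", "may", "should"}
NEG_IMPLICATIVES = {"forget", "fail", "neglect"}

def establishes_referent(clause):
    """clause: dict with 'negated' (bool), 'verb' (str), and 'embedded_under'
    (the higher verb governing this clause, or None)."""
    if clause["negated"]:
        return False                      # 'Bill didn't see a unicorn.'
    governing = clause.get("embedded_under")
    if governing in MODALS:
        return False                      # 'You must write a letter.'
    if governing in NEG_IMPLICATIVES:
        return False                      # 'John forgot to write a term paper.'
    return True                           # affirmative assertion

examples = [
    {"negated": False, "verb": "see", "embedded_under": None},      # True
    {"negated": True,  "verb": "see", "embedded_under": None},      # False
    {"negated": False, "verb": "write", "embedded_under": "must"},  # False
]
for clause in examples:
    print(establishes_referent(clause))
```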
null
null
null
It is time to review the situation. We started by asking the seemingly naive question: "When is there supposed to be an individual associated with an indefinite noun phrase?" Naive as it may be, it must be answered if there is ever going to be a device for interpreting written texts or everyday conversation with anything approaching human sophistication. There is also another reason to be interested in the subject: if relative clauses are derived transformationally from conjoined sentences by 'Relativization', as many linguists believe, the constraints discussed here are also a prerequisite for that transformation. For these reasons, the problems studied in this paper are of some theoretical interest quite independently of whether the results lead to any practical applications.

We found that, in simple sentences that do not contain certain quantifier-like expressions, an indefinite NP establishes a discourse referent just in case the sentence is an affirmative assertion. By 'establishes a discourse referent' we meant that there may be a coreferential pronoun or definite noun phrase later in the discourse. Indefinite NPs in yes-no questions and commands do not establish referents.

In studying more complicated examples, it was found necessary to replace Chomsky's integer-type referential indices by bound variables. In this framework, the traditional problem of specificity is treated as scope ambiguity. We studied several types of verbs that take complements and their semantic properties. We concluded that, in general, an indefinite NP establishes a permanent discourse referent just in case the quantifier associated with it is attached to a sentence that is asserted, implied, or presupposed to be true and there are no higher quantifiers involved.

There are a couple of special problems: 'other worlds' and short-term referents. Although discourse referents ordinarily exist for the speaker, there is a class of 'world-creating' verbs, such as believe, that also establish referents of another kind. These exist for somebody else, not necessarily for the speaker. Therefore, we need to distinguish between the speaker's world and other realms and allow for the possibility that they are not populated by the same individuals. Secondly, there are short-term referents, whose lifespan may be extended by continuing the discourse in the proper mode. What this proper mode is depends on the circumstances. For example, every successive sentence may have to (i) contain a modal as the main verb, (ii) contain a quantifier of a certain type, or (iii) be in the counterfactual mood. That is, it is possible to elaborate for a while on situations that are known not to obtain, or that may or should obtain, and discuss what sometimes or always is the case.

This work was supported by the National Science Foundation Grant GU-1598 to the University of Texas at Austin.

1. These examples are due to C. LeRoy Baker 1966.

2. I am indebted to Robert E. Wall for suggesting the term 'implicative' to me.

3. What remains unexplained here is the fact (pointed out to me by John Olney) that must in (27) has two meanings depending on the specificity of the NP a rich man in the preceding sentence. If the first sentence is about a specific man, then must in the second sentence is interpreted in a rather weak sense: 'It is likely that he is a banker'. But if the NP a rich man is non-specific, the second sentence means: 'It is necessary that he be a banker'.

4. George Lakoff (forthcoming) has suggested that quantifiers and negation be analyzed as verbs (predicates) instead of giving them a special status, as is usually done in symbolic logic. It is yet unclear to me whether there is any substantive issue involved or whether he is only proposing another notation.

5. There are other good arguments against the feature [±specific] in Janet Dean 1968. Unfortunately, they did not persuade the author herself.

6. The complement of intend is what W. V. O. Quine calls an 'opaque context'. I ignore here his view that one should not be permitted to quantify into such a context. It seems to me that the objections he raises have to do with the double role names play in such contexts and only call for more sophisticated linguistic analysis. Notice that Quine approves of (i) while rejecting (ii) as meaningless (Quine 1960, p. 166):

(i) (∃x) (Tom believes x to have denounced Catiline)
(ii) (∃x) (Tom believes that x denounced Catiline)

From a linguistic point of view, however, there is nothing but a superficial difference between (i) and (ii), due to 'Subject raising', which has applied in (i) but not in (ii).

7. By 'higher quantifier' I mean quantifiers such as all, each, many, and few; in fact, everything except the quantifier associated with the singular some and the indefinite article. The reason for making this distinction is the fact that, if there are two indefinite singular NPs in the same sentence, both establish a referent no matter what their order is.

(i) A dog was killed by a car.

The above example, of course, justifies a later reference both to the dog and the car.
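The referent-establishment rule summarized above lends itself to a direct procedural reading. The following is a minimal sketch of my own, not from the paper, assuming a toy representation in which each indefinite NP is annotated with the two properties the rule tests:

```python
# Minimal sketch (not from the paper) of the rule: an indefinite NP
# establishes a permanent discourse referent iff the proposition its
# binding quantifier attaches to is assumed true (asserted, implied,
# or presupposed) and the quantifier is not inside a higher quantifier.
from dataclasses import dataclass

@dataclass
class IndefiniteNP:
    noun: str
    proposition_assumed_true: bool   # asserted/implied/presupposed?
    under_higher_quantifier: bool    # e.g. inside 'every', 'each', ...

def establishes_permanent_referent(np: IndefiniteNP) -> bool:
    return np.proposition_assumed_true and not np.under_higher_quantifier

# (39a), specific reading: 'Bill intends to visit a (certain) museum'
print(establishes_permanent_referent(IndefiniteNP("museum", True, False)))   # True
# (41b): quantifier inside the unasserted complement of 'intend'
print(establishes_permanent_referent(IndefiniteNP("museum", False, False)))  # False
```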
Main paper: factive verbs: There is a group of verbs, called factive verbs (Kiparsky 1968), that presuppose the truth of the proposition represented by the complement. For example, know, realize, and regret are factive. It is not surprising to find out that an indefinite NP does establish a referent in a complement of a factive verb, of course, provided that the complement itself is affirmative.

(15) John knew that Mary had a car, but he had never seen it.

In contrast to the implicative verbs discussed above, negation in the main sentence has no effect at all. ... (b) If Mary has a car, she will take me to work in it. I can drive the car too. (c) When Mary has a car, she can take me to work in it. I can drive the car too. ... is based on the counterfactual or dubious premise that Mary has a car. The difference between the first and the second pair is that in (28c-d) ...

(36) admits both the specific and non-specific interpretation of a girl. The reason for the anomaly of the non-specific interpretation in (35) and its acceptability here is apparently that, in (36), every successive sentence continues to have a similar quantifier-like term: "at every convention", "always", "usually". There is also nothing wrong with the non-specific interpretation of the NP a book in (37).

(37) Every time Bill comes here, he picks up a book and wants to borrow it. I never let him take the book.

We have to say that, although a non-specific indefinite that falls into the scope of a quantifier fails to establish a permanent discourse referent, there may be a short-term referent within the scope of the quantifier, and its life-span may be extended by flagging every successive sentence with a quantifier of the same type. [4]

Let us now return to the problem of specificity that was first introduced in §1.1. As we already pointed out, many of the examples above that were judged anomalous in the intended sense can also be given another interpretation that makes them perfectly acceptable. ... is, of course, something very similar to the existential quantifier in predicate calculus. (Bach calls it 'the some operator'.) Base structures resemble formulas in symbolic logic. This approach to syntax has now become known as 'generative semantics'. It is easy to see that, in the framework of generative semantics, there is no justification nor need for a feature such as [+specific]. The ambiguities in question are naturally accounted for by the fact that the quantifier binding the variable that underlies an indefinite noun phrase may be placed in different positions in the base structure. Specificity thus becomes a matter of the scope of quantifiers.

As far as the problems discussed in this paper are relevant to choosing a theoretical framework, they seem to argue in favor of adopting the Bach-McCawley proposals. It is rather difficult to see how one could achieve an adequate description of the facts in the classical theory. For example, consider the following case. Both (39a) and (39b) are ambiguous with respect to specificity.

(39) (a) Bill intends to visit a museum. (b) Bill visits a museum every day.

In the 'specific' sense, both examples establish a discourse referent. It would make perfect sense to continue with a description of 'the museum Bill intends to visit' or 'the museum Bill visits every day'. In the 'non-specific' sense, there is no such museum at all. So far so good; we can say that the NP a museum can be [±specific].
But what about example (40)?

(40) Bill intends to visit a museum every day.

It is clear that (40) is ambiguous in many ways. For example, the quantified time adverb every day could be assigned either to the complement or to the main clause; let us now consider only the former case. The remaining ambiguities should be attributable to the indefinite NP a museum; in fact, we should have a two-way ambiguity between the specific and non-specific interpretation. But example (40) is still ambiguous in more than two ways. It could be interpreted to mean (41a), (41b), or (41c).

(41) (a) 'There is a certain museum that Bill intends to visit every day.' (b) 'Bill intends that there be some museum that he visits every day.' (c) 'Bill intends to do a museum visit every day.'

It is easy to see why this happens. What the feature [±specific] accomplishes in the case of (39a) is that it clarifies the relation between the indefinite NP a museum and the verb intend in the main sentence: is Bill's intention about some particular museum or not? In (39b), we employed the same device to characterize the relation between the indefinite NP and the quantified time adverb every day.

Another advantage of generative semantics is that there is an explanation ready for the fact that (40) establishes a discourse referent under only one of the three interpretations we have considered, namely (42a). The rule is that an indefinite NP establishes a permanent referent just in case the proposition to which the binding quantifier is attached is assumed (asserted, implied, or presupposed) to be true, provided that the quantifier is not itself in the scope of some higher quantifier. [7] The first part of the rule accounts for the difference between (42a) and (42b-c); the second part is needed to explain why (39b) establishes a permanent referent only under one of the two possible interpretations. Notice that, in (42a), the quantifier underlying the NP a museum is attached to the main proposition. Since the main proposition is asserted to be true and there are no higher quantifiers involved, (42a) establishes a referent corresponding to the NP a museum. Now, consider the other two interpretations of (40). The verb intend is one of the modal verbs discussed in (1.21). We know that the complement of a modal verb taken by itself is not implied or presupposed to be true.

(43) Mary may want to marry a Swede.

Highly schematically, the underlying structure of (43) is something like (44). If the speaker continues the discourse with (46), the preceding sentence (43) can only be understood in the sense of (45a).

(46) She introduced him to her mother yesterday.

However, the following continuation, where the pronoun it stands for S2, permits both (45a) and (45b).

(47) Suppose that it is true, then she will certainly introduce him to her mother.

As a final example, after some thought it should be obvious that a discourse consisting of (43) and (48) ...

(48) Suppose that it is true and that she does it, then she will certainly introduce him to her mother.

Although the argument against the traditional feature [±specific] should leave no doubt about its uselessness in discussing anything but the simplest kind of scope ambiguity, it does not necessarily mean that the familiar terms 'specific' and 'non-specific' should be rejected. They have proved quite useful, and no harm is done provided that they are understood in a relative sense and not as denoting some absolute property inherent in indefinite noun phrases. For example, consider interpretation (45b) of (43), which assigns the quantifier to S2. One might want to say that, with respect to the verb want, the indefinite NP a Swede is specific. On the other hand, if the quantifier is attached to S3, as in (45c), a Swede could be called non-specific with respect to want. In general, let us call an indefinite NP specific with respect to a given verb (or quantifier, or negation) if the latter is in the scope of the quantifier associated with the NP. It is non-specific in case the verb commands the quantifier. This kind of definition seems consistent with the way these terms have been used in recent literature, and there is no reason to stop using them as long as the relative nature of specificity is understood.

summary: It is time to review the situation. We started by asking the seemingly naive question: "When is there supposed to be an individual associated with an indefinite noun phrase?" Naive as it may be, it must be answered if there is ever going to be a device for interpreting written texts or everyday conversation with anything approaching human sophistication. There is also another reason to be interested in the subject: if relative clauses are derived transformationally from conjoined sentences by 'Relativization', as many linguists believe, the constraints discussed here are also a prerequisite for that transformation. For these reasons, the problems studied in this paper are of some theoretical interest quite independently of whether the results lead to any practical applications.

We found that, in simple sentences that do not contain certain quantifier-like expressions, an indefinite NP establishes a discourse referent just in case the sentence is an affirmative assertion. By 'establishes a discourse referent' we meant that there may be a coreferential pronoun or definite noun phrase later in the discourse. Indefinite NPs in yes-no questions and commands do not establish referents.

In studying more complicated examples, it was found necessary to replace Chomsky's integer-type referential indices by bound variables. In this framework, the traditional problem of specificity is treated as scope ambiguity. We studied several types of verbs that take complements and their semantic properties. We concluded that, in general, an indefinite NP establishes a permanent discourse referent just in case the quantifier associated with it is attached to a sentence that is asserted, implied, or presupposed to be true and there are no higher quantifiers involved.

There are a couple of special problems: 'other worlds' and short-term referents. Although discourse referents ordinarily exist for the speaker, there is a class of 'world-creating' verbs, such as believe, that also establish referents of another kind. These exist for somebody else, not necessarily for the speaker. Therefore, we need to distinguish between the speaker's world and other realms and allow for the possibility that they are not populated by the same individuals. Secondly, there are short-term referents, whose lifespan may be extended by continuing the discourse in the proper mode. What this proper mode is depends on the circumstances. For example, every successive sentence may have to (i) contain a modal as the main verb, (ii) contain a quantifier of a certain type, or (iii) be in the counterfactual mood. That is, it is possible to elaborate for a while on situations that are known not to obtain, or that may or should obtain, and discuss what sometimes or always is the case.

This work was supported by the National Science Foundation Grant GU-1598 to the University of Texas at Austin.

1. These examples are due to C. LeRoy Baker 1966.

2. I am indebted to Robert E. Wall for suggesting the term 'implicative' to me.

3. What remains unexplained here is the fact (pointed out to me by John Olney) that must in (27) has two meanings depending on the specificity of the NP a rich man in the preceding sentence. If the first sentence is about a specific man, then must in the second sentence is interpreted in a rather weak sense: 'It is likely that he is a banker'. But if the NP a rich man is non-specific, the second sentence means: 'It is necessary that he be a banker'.

4. George Lakoff (forthcoming) has suggested that quantifiers and negation be analyzed as verbs (predicates) instead of giving them a special status, as is usually done in symbolic logic. It is yet unclear to me whether there is any substantive issue involved or whether he is only proposing another notation.

5. There are other good arguments against the feature [±specific] in Janet Dean 1968. Unfortunately, they did not persuade the author herself.

6. The complement of intend is what W. V. O. Quine calls an 'opaque context'. I ignore here his view that one should not be permitted to quantify into such a context. It seems to me that the objections he raises have to do with the double role names play in such contexts and only call for more sophisticated linguistic analysis. Notice that Quine approves of (i) while rejecting (ii) as meaningless (Quine 1960, p. 166):

(i) (∃x) (Tom believes x to have denounced Catiline)
(ii) (∃x) (Tom believes that x denounced Catiline)

From a linguistic point of view, however, there is nothing but a superficial difference between (i) and (ii), due to 'Subject raising', which has applied in (i) but not in (ii).

7. By 'higher quantifier' I mean quantifiers such as all, each, many, and few; in fact, everything except the quantifier associated with the singular some and the indefinite article. The reason for making this distinction is the fact that, if there are two indefinite singular NPs in the same sentence, both establish a referent no matter what their order is.

(i) A dog was killed by a car.

The above example, of course, justifies a later reference both to the dog and the car.

In this paper I intend to discuss one particular feature a text interpreter must have: it must be able to recognize when a novel individual is mentioned in the input text and to store it along with its characterization for future reference. Of course, in some cases the problem is trivial. Suppose there appears in some sentence a proper name that has not been mentioned previously. This means that a new person is being introduced in the text, and appropriate action must be taken to record the name of the person and what is said about him. Otherwise, the proper name is used to refer to an individual already mentioned, and the machine has to locate his file in the memory with the help of the name. This problem of identification will be more difficult where a definite description (a definite noun phrase such as the man Bill saw yesterday) is used, since there will, in general, not be any simple look-up procedure for associating the description with the right individual.
With definite noun phrases there is also the problem that it is not possible to tell just from the noun phrase itself whether or not it is supposed to refer to an individual at all. For example, it is clear that the phrase the best student is not used referentially in a sentence such as Bill is the best student. There are thus two problems with ordinary definite noun phrases: (i) Is it a definite description at all? and (ii) How is a definite description to be matched with an individual already mentioned in the text? The first question is clearly of the kind linguists can be expected to solve, but it will not be discussed here. The only aspect of definite descriptions that interests us here is the fact that they carry an existential presupposition: to call something "the ..." presupposes that there be some such thing.

While it is in general a straightforward matter to decide whether or not a proper name in a text introduces a new individual, indefinite noun phrases pose a more difficult problem. To put the question in a general way: given an indefinite noun phrase, under what circumstances is there supposed to be an individual described by this noun phrase? This need not be understood as some sort of ontological question subject to philosophical speculation; in this paper I intend to approach it from a purely linguistic point of view. It is in just those cases where the appearance of an indefinite NP implies the existence of some specific entity that our hypothetical text interpreter should record the appearance of a new individual.

What I have in mind can perhaps be made clear with the help of the following examples. It is a well-known fact about language that indefinite noun phrases cannot be interpreted as referring expressions when they appear in the predicate nominal position.

(1) Bill is not a linguist.

(1) is obviously a statement about one individual. It is not a statement about some linguist and Bill. It is also well known that in generic sentences singular indefinite noun phrases play a peculiar role.

(2) A lion is a mighty hunter.

In its generic sense, (2) is a statement about lions in general, not about any lion in particular, unless we want to postulate a hypothetical entity 'the typical lion' of whom all generic statements about lions are predicated. It is clear that indefinite noun phrases have a very special role in (1) and (2), and it is not difficult to decide that they could not introduce any new individuals into a discourse. It is out of the question that a text in which (1) appears would contain a later reference to 'the linguist which Bill is not', or that (2), in its generic sense, would justify a later reference to 'the lion who is a mighty hunter'.

But consider the following example. (3a) may be followed by any of the sentences (3b-d) that give us more information about a specific car first mentioned in (3a). On the other hand, (4a) cannot be followed by any of the alternatives. The above examples show that just in the case of (3a), the text interpreter has to recognize that the appearance of the indefinite NP a car implies the existence of a specific car that can be talked about again by referring to it with a pronoun or a definite noun phrase. But no car is introduced by (4a). The alternative continuations (4b-d) are inappropriate, since they presuppose the existence of something that is not there. To show that this is a linguistic and not an ontological fact, one only has to point out that examples (5) and (6) behave just like (3) and (4).

(5) Bill saw a unicorn. The unicorn had a gold mane.

(6) Bill didn't see a unicorn. *The unicorn had a gold mane.

Let us say that the appearance of an indefinite noun phrase establishes a discourse referent just in case it justifies the occurrence of a coreferential pronoun or a definite noun phrase later in the text. In this paper we will try to find out under what circumstances discourse referents are established. We maintain that the problem of coreference within a discourse is a linguistic problem and can be studied independently of any general theory of extra-linguistic reference.

The present study was inspired by the notion of 'referential indices' in transformational grammar. Since Noam Chomsky (1965), it has generally been assumed that the base component of a transformational grammar associates with each noun phrase a referential index, say, some integer. The purpose of Chomsky's proposal was not so much to account for the meaning of sentences as to augment the notion of noun phrase identity. It seemed that the notion of 'referential identity' was needed, in addition to the two other types of identity, 'structural identity' and 'morphemic identity', for the structural descriptions of certain transformations. According to the standard theory, referential indices are merely formal indicators of coreference with no further semantic significance. They are not meant to imply the existence of discourse referents in our sense. This notion of coreferentiality has played an important role in recent syntactic arguments. It led to the study of pronoun-antecedent relations, largely ignored by traditional grammarians, which has revealed intricate constraints of great theoretical importance. What we are studying in this paper can be looked at as further constraints on coreferentiality that extend beyond the sentence level.

1. Case studies

1.1 A note on specificity

In the following we are going to examine, case by case, certain aspects of sentence structure that play a role in determining whether an indefinite NP establishes a discourse referent. In the examples that are discussed, there is a possible ambiguity that has to be mentioned in advance, although it will not be discussed until later. In general, indefinite noun phrases have both a specific and a non-specific interpretation. Example (7) can be interpreted to mean either (8a) or (8b).

(7) Bill didn't see a misprint.

(8) (a) 'There is a misprint which Bill didn't see' (b) 'Bill saw no misprints'

If (7) is understood in the sense of (8a), we say that the indefinite NP a misprint is interpreted specifically. (8b) represents the non-specific interpretation. Of course, not all indefinite noun phrases are ambiguous in this way. We could disambiguate (7) by adding the word certain ("a certain misprint") or an appositive relative clause ("a misprint, which I had made on purpose"). These changes would allow only the specific interpretation (8a). The addition of the word single ("a single misprint") would allow only the sense (8b). There are also cases where the verbs involved partially disambiguate the sentence by making one interpretation far more plausible to the reader than the other. For example, the NP a piano in (9a) is naturally understood non-specifically, that is, as meaning 'any piano', while the same noun phrase in (9b) suggests the interpretation 'a certain piano'.

(9) (a) John tried to find a piano.

1.21 The following examples are anomalous in the intended sense, although there is no negation involved.

(10) (a) You must write a letter to your parents. *They are expecting the letter. (b) Bill can make a kite. *The kite has a long string.

Traditionally, sentences with a modal auxiliary have been considered simple sentences. However, it has been argued convincingly by Ross (1967a) and others that modals should be analyzed as main verbs of higher sentences. Therefore, let us assume that, even in the above examples, the indefinite NPs originate in a complement clause, just as they do in (11). ... represent a yet untrue proposition at the time specified by the tense and time adverbials in the main clause. The present problem is, in fact, another point in favor of the view that modals originate in a higher sentence, because it enables us to acknowledge the similarity of the anomaly in (10) and (11). The conclusion is that non-specific indefinites do not establish discourse referents when they appear in a complement of a modal verb.

There is a class of verbs that, if they are not negated, imply the truth of the proposition represented by their complement sentence. There are also verbs that inherently have a negative implication. In English, this type includes verbs such as forget, fail, and neglect. Consider the following anomalous discourses.

(14) (a) John forgot to write a term paper. *He cannot show it to the teacher. (b) John failed to find an answer. *It was wrong.

These implicative verbs have the very interesting property that, if there is double negation, the implication is positive, and an indefinite NP does, after all, establish a referent.

(14) (a) John didn't fail to find an answer. The answer was even right. (b) John didn't remember not to bring an umbrella, although we had no room for it.

This property distinguishes clearly verbs with negative implication, such as forget, from the modal verbs discussed above, although both types deny the truth of the proposition represented by the complement sentence.

Appendix:
null
null
null
null
{ "paperhash": [ "damerau|a_technique_for_computer_detection_and_correction_of_spelling_errors" ], "title": [ "A technique for computer detection and correction of spelling errors" ], "abstract": [ "The method described assumes that a word which cannot be found in a dictionary has at most one error, which might be a wrong, missing or extra letter or a single transposition. The unidentified input word is compared to the dictionary again, testing each time to see if the words match—assuming one of these errors occurred. During a test run on garbled text, correct identifications were made for over 95 percent of these error types." ], "authors": [ { "name": [ "F. J. Damerau" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "7713345" ], "intents": [ [] ], "isInfluential": [ false ] }
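The cited Damerau abstract describes a dictionary-matching test under the single-error assumption: one wrong, missing, or extra letter, or one transposition. A minimal sketch of that test, my own reconstruction from the abstract rather than Damerau's program:

```python
# Minimal sketch (mine) of the one-error matching idea from the cited
# Damerau abstract: a garbled word w matches a dictionary word d if one
# substitution, deletion, insertion, or adjacent transposition fixes it.
def one_error_match(w, d):
    if w == d:
        return True
    if len(w) == len(d):
        diffs = [i for i in range(len(w)) if w[i] != d[i]]
        if len(diffs) == 1:                       # one wrong letter
            return True
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and w[diffs[0]] == d[diffs[1]] and w[diffs[1]] == d[diffs[0]]):
            return True                           # adjacent transposition
    if len(w) + 1 == len(d):                      # one missing letter in w
        return any(w == d[:i] + d[i + 1:] for i in range(len(d)))
    if len(w) == len(d) + 1:                      # one extra letter in w
        return any(w[:i] + w[i + 1:] == d for i in range(len(w)))
    return False

print(one_error_match("spel", "spell"), one_error_match("teh", "the"))  # True True
```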
null
665
0.004511
null
null
null
null
null
null
null
null
d8f7cc78f56db48fb55161ffbd6e5b329d707812
9094008
null
A New Approach to Syntax
This paper describes a new method for syntactic analysis of English. Instead of the conventional subject-predicate structure as a basis for analysis, elementary sentence patterns are used. It is observed that there are two basic sentence formats in English. One, using a transitive verb, consists of the sequence noun, verb, noun, noun. The other, using an intransitive verb, consists of the sequence noun, verb, adjective, noun. In each of these basic forms, syntax is specified by the word order. Since there are 64 ways to arrange four words when they are taken one, two, three, and four at a time, there are 128 elementary or canonical sentences to be studied. The central goal of analysis is to determine the particular canonical sentence corresponding to a given statement. In order to show how any sentence can be reduced to its basic format, certain essentially algebraic operations are proposed, together with certain rules for transforming one structure into another. Conversely, the same rules may be used to construct a generative grammar that permits a canonical sentence to be expanded to an equivalent form in accord with prescribed requirements. Inasmuch as the operations are essentially algebraic, the method is very advantageous for computer use.
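The figure of 128 canonical sentences follows from counting permutations of four words taken one, two, three, and four at a time (4 + 12 + 24 + 24 = 64) and doubling for the two basic formats. A quick check, my own sketch rather than anything from the paper:

```python
# Verifying the combinatorial claim: permutations of 4 distinct words
# taken 1, 2, 3, and 4 at a time number 64, so the two basic sentence
# formats yield 128 canonical sentences.
from itertools import permutations

words = ["noun1", "verb", "noun2", "noun3"]  # hypothetical slot labels
counts = [len(list(permutations(words, k))) for k in range(1, 5)]
print(counts)           # [4, 12, 24, 24]
print(sum(counts))      # 64 arrangements per basic format
print(2 * sum(counts))  # 128 canonical sentences
```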
{ "name": [ "Dasher, B. J." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
0
null
In order to show how any sentence can be reduced to its basic format, certain essentially algebraic operations are proposed, together with certain rules for transforming one structure into another. Conversely, the same rules may be used to construct a generative grammar that permits a canonical sentence to be expanded to an equivalent form in accord with prescribed requirements. Inasmuch as the operations are essentially algebraic, the method is very advantageous for computer use.

Through the use of various devices, word order can be changed without changing essential syntax. Also, the same basic structures can be used to express a variety of semantic relationships. For example, the two sentences, "Give the book to John" and "Save the book for John", have the same structure. The difference between the to and for relationship is semantic rather than grammatical. Moreover, the two statements, "Give John the money" and "Give the money to John", express the same relationship between give, John, and money, and it would be the same if an inflected form for John were used instead of the preposition, or if some other syntactic label were used. Thus, it should be possible to begin with a statement in one language, find its corresponding canonical sentence, transform this canonical sentence into a corresponding canonical sentence in a new language, and then reconstruct the statement in the new language. The method is therefore advantageous for machine translation. This paper describes the fundamental concepts of the scheme and illustrates its potential. Many details remain to be supplied in order to obtain a working system.
null
null
null
null
Main paper: In order to show how any sentence can be reduced to its basic format, certain essentially algebraic operations are proposed, together with certain rules for transforming one structure into another. Conversely, the same rules may be used to construct a generative grammar that permits a canonical sentence to be expanded to an equivalent form in accord with prescribed requirements. Inasmuch as the operations are essentially algebraic, the method is very advantageous for computer use.

Through the use of various devices, word order can be changed without changing essential syntax. Also, the same basic structures can be used to express a variety of semantic relationships. For example, the two sentences, "Give the book to John" and "Save the book for John", have the same structure. The difference between the to and for relationship is semantic rather than grammatical. Moreover, the two statements, "Give John the money" and "Give the money to John", express the same relationship between give, John, and money, and it would be the same if an inflected form for John were used instead of the preposition, or if some other syntactic label were used. Thus, it should be possible to begin with a statement in one language, find its corresponding canonical sentence, transform this canonical sentence into a corresponding canonical sentence in a new language, and then reconstruct the statement in the new language. The method is therefore advantageous for machine translation. This paper describes the fundamental concepts of the scheme and illustrates its potential. Many details remain to be supplied in order to obtain a working system.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
ebf65f81d3c200dac2f11dc57216973240bdbca4
9797304
null
The machine realization of the periphrasing system and the results of the experiment
The paper is devoted to the machine realization of the periphrasing system devised by I.A. Mel'chuk and A.K. Zholkovsky. In accordance with the idea of the semantic nature of synthesis, a deep sentence structure is used as input data for the periphrasing system. By the given rules, which make up, as a matter of fact, the periphrasing system, other sentence deep structures, equivalent in meaning, are obtained from the given deep structure. Both the machine realization of the periphrasing system and the experiment carried out were intended to verify and check out both the periphrasing system and the realization programs. Further, the specified version of the periphrasing system is to be included in a general system of Russian semantic synthesis used for machine translation. The notions used in the periphrasing system are briefly described. A detailed description of the semantic and syntactic rules is given. A paragraph is devoted to a detailed description of the machine dictionary (see the paper of I.A. Mel'chuk and J.D. Apresyan). The periphrasing algorithm is given. The work of the algorithm is illustrated by some examples.
{ "name": [ "Arsentyeva, N. G." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
0
null
null
null
null
By the given rules, which make up, as a matter of fact, the periphrasing system, other sentence deep structures, equivalent in meaning, are obtained from the given deep structure. Both the machine realization of the periphrasing system and the experiment carried out were intended to verify and check out both the periphrasing system and the realization programs. Further, the specified version of the periphrasing system is to be included in a general system of Russian semantic synthesis used for machine translation.

The notions used in the periphrasing system are briefly described. A detailed description of the semantic and syntactic rules is given. A paragraph is devoted to a detailed description of the machine dictionary (see the paper of I.A. Mel'chuk and J.D. Apresyan). The periphrasing algorithm is given. The work of the algorithm is illustrated by some examples.
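The rule-driven rewriting of one deep structure into meaning-equivalent alternatives can be illustrated with a small sketch. This is my own illustration, not the paper's system: the tuple encoding, the rule, and the lexical table below are invented for the example, loosely following the support-verb paraphrase pattern associated with Mel'chuk's lexical functions.

```python
# Minimal illustrative sketch (not the paper's): a periphrasing rule
# rewrites one deep structure into a meaning-equivalent one. Deep
# structures are shown as tuples (predicate, *arguments); the rule and
# the lexical table are assumptions for illustration only.
def rule_support_verb(tree):
    """V(x, y) -> SupportVerb(x, Noun(V), y): e.g. help(A, B) -> give(A, help, B)."""
    pred, *args = tree
    table = {"help": ("give", "help"), "order": ("issue", "order")}
    if pred in table and len(args) == 2:
        verb, noun = table[pred]
        return (verb, args[0], noun, args[1])
    return tree  # rule not applicable: deep structure unchanged

print(rule_support_verb(("help", "Ivan", "Pyotr")))
# ('give', 'Ivan', 'help', 'Pyotr')  -- an equivalent deep structure
```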
null
Main paper: By the given rules, which make up, as a matter of fact, the periphrasing system, other sentence deep structures, equivalent in meaning, are obtained from the given deep structure. Both the machine realization of the periphrasing system and the experiment carried out were intended to verify and check out both the periphrasing system and the realization programs. Further, the specified version of the periphrasing system is to be included in a general system of Russian semantic synthesis used for machine translation.

The notions used in the periphrasing system are briefly described. A detailed description of the semantic and syntactic rules is given. A paragraph is devoted to a detailed description of the machine dictionary (see the paper of I.A. Mel'chuk and J.D. Apresyan). The periphrasing algorithm is given. The work of the algorithm is illustrated by some examples.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
2d542af98e877589ad3809fe47cb036fe0644e39
12783059
null
Statistical Methods in Lexicological Research in the {B}altic States
In this study, "lexicology" refers to the dictionary form of a text, a vocabulary, without consideration of the lexical structure of an individual word and the system of its meanings. The first attempts to apply statistical methods in lexicological research are connected with the computation of "frequency dictionaries", particularly various specialized lexica aimed at mechanical exploitation. A "frequency dictionary" is a list of words in which every word carries an indicator of its occurrence frequency in a text of a certain length. Frequency dictionaries permit one to compare words from the standpoint of their usage frequency. The data of frequency dictionaries are of great theoretical interest for studies of certain properties of the text with regard to its relation to the vocabulary.
{ "name": [ "Radzin, Hilda" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Post-Print {I}
1969-09-01
0
0
null
The first attempts to apply statistical methods in lexicological research are connected with the computation of "frequency dictionaries", particularly various specialized lexica aimed at mechanical exploitation. A "frequency dictionary" is a list of words in which every word carries an indicator of its occurrence frequency in a text of a certain length. Frequency dictionaries permit one to compare words from the standpoint of their usage frequency. The data of frequency dictionaries are of great theoretical interest for studies of certain properties of the text with regard to its relation to the vocabulary.

In the Baltic states we find research work pertaining to the compilation of frequency dictionaries. In Tallinn, Estonia, work was started on the compilation of a Russian frequency dictionary for teaching purposes (1960). The project worked on a number of texts, amounting to 400,000 tokens. The completed dictionary reveals frequency distributions according to word-classes, government of case by verbs, prepositional phrases, and also the substantival cases.

In other Baltic states we find linguists engaged in similar forms of study on the word level. On the basis of results gained from the statistical word count of Latvian technical-industrial texts, the first volume (pertaining to the technical-industrial field) of a frequency dictionary of the Latvian language was compiled and published in 1966.

The statistical approach is also used in the study of style. The concept of "style" presupposes the presence of several properties inherent in a given text, or texts, or author, as opposed to others. One can propose that style is a sum of statistical characteristics describing the content properties of a particular text as distinct from others. In the Baltic states we find linguists using statistical methods in the analysis of style. In scanning some prose texts of the Latvian writer Blaumanis, K. Karulis found that the words Blaumanis uses in direct speech are shorter than those in indirect speech.

St. John's University, Jamaica, New York
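Since a frequency dictionary is simply a word list annotated with occurrence counts, the computation is easy to sketch. The following is my own minimal sketch, not drawn from any of the Baltic projects described above; the sample text and tokenizer are invented:

```python
# A minimal sketch of computing a "frequency dictionary": each word is
# paired with its occurrence count in a text of a given length, so that
# words can be compared by usage frequency.
from collections import Counter
import re

text = "the lion is a mighty hunter and the lion hunts at night"
tokens = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
freq = Counter(tokens)

for word, n in freq.most_common(5):
    print(word, n)

# Relative frequencies, useful for comparing texts of different lengths:
total = sum(freq.values())
print({w: n / total for w, n in freq.most_common(3)})
```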
null
null
null
null
Main paper: The first attempts to apply statistical methods in lexicological research are connected with the computation of "frequency dictionaries", particularly various specialized lexica aimed at mechanical exploitation. A "frequency dictionary" is a list of words in which every word carries an indicator of its occurrence frequency in a text of a certain length. Frequency dictionaries permit one to compare words from the standpoint of their usage frequency. The data of frequency dictionaries are of great theoretical interest for studies of certain properties of the text with regard to its relation to the vocabulary.

In the Baltic states we find research work pertaining to the compilation of frequency dictionaries. In Tallinn, Estonia, work was started on the compilation of a Russian frequency dictionary for teaching purposes (1960). The project worked on a number of texts, amounting to 400,000 tokens. The completed dictionary reveals frequency distributions according to word-classes, government of case by verbs, prepositional phrases, and also the substantival cases.

In other Baltic states we find linguists engaged in similar forms of study on the word level. On the basis of results gained from the statistical word count of Latvian technical-industrial texts, the first volume (pertaining to the technical-industrial field) of a frequency dictionary of the Latvian language was compiled and published in 1966.

The statistical approach is also used in the study of style. The concept of "style" presupposes the presence of several properties inherent in a given text, or texts, or author, as opposed to others. One can propose that style is a sum of statistical characteristics describing the content properties of a particular text as distinct from others. In the Baltic states we find linguists using statistical methods in the analysis of style. In scanning some prose texts of the Latvian writer Blaumanis, K. Karulis found that the words Blaumanis uses in direct speech are shorter than those in indirect speech.

St. John's University, Jamaica, New York

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
5cbcc02dda8457ca2e4aeab03a6ed4c22a5eaa31
26299274
null
Project {DOC}
Project DOC (Dictionary On Computer), hereafter DOC, is part of an overall effort to harness an on-line computer for phonological research. For certain problems the linguist finds it necessary to organize large amounts of data or to perform rather involved logical tasks, such as checking out a body of rules with intricate ordering relations. In these situations a computer is invaluable, for it forces the linguist to analyze his problems with greater precision, and it executes certain jobs with a speed and accuracy not otherwise possible.
{ "name": [ "Wang, William S-Y." ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 69: Collection of Abstracts of Papers
1969-09-01
0
1
null
The overt aim of DOC is the reconstruction of the phonological histories of the major Chinese dialects. At a deeper level, our interest is to learn more about how phonological structures change in general and about the relation between these changes and the synchronic systems they lead to. The achievement of this understanding is crucial for the formulation of a general theory of linguistic change.

Of the many language families in the world, Chinese offers an ideal laboratory within which to study phonological change, due both to its unrivaled wealth of materials and to its distinctive phonology and orthography. Thus it affords both the time depth of philological documentation and the spatial wealth of all the dialects. Since modern linguistics was born in the hands of the Indo-Europeanists during the last century, our conception of how language changes and how it patterns has been excessively dominated by Indo-European studies. Analyzing a language family with a very different structure can help us balance this skewed perspective.

The challenge of phonetic explanations, however, can only be met when a sufficient fund of information is available, enabling phonology to make the exciting transition from a descriptive effort into an explanatory science. DOC is designed to facilitate the gathering of this fund of information. In this paper, I will describe the actual organization of the DOC data and the methodology involved in applying it.
null
null
null
null
Main paper: The overt aim of DOC is the reconstruction of the phonological histories of the major Chinese dialects. At a deeper level, our interest is to learn more about how phonological structures change in general and about the relation between these changes and the synchronic systems they lead to. The achievement of this understanding is crucial for the formulation of a general theory of linguistic change.

Of the many language families in the world, Chinese offers an ideal laboratory within which to study phonological change, due both to its unrivaled wealth of materials and to its distinctive phonology and orthography. Thus it affords both the time depth of philological documentation and the spatial wealth of all the dialects. Since modern linguistics was born in the hands of the Indo-Europeanists during the last century, our conception of how language changes and how it patterns has been excessively dominated by Indo-European studies. Analyzing a language family with a very different structure can help us balance this skewed perspective.

The challenge of phonetic explanations, however, can only be met when a sufficient fund of information is available, enabling phonology to make the exciting transition from a descriptive effort into an explanatory science. DOC is designed to facilitate the gathering of this fund of information. In this paper, I will describe the actual organization of the DOC data and the methodology involved in applying it.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.001504
null
null
null
null
null
null
null
null
49e40cc3c03f7f255327394ac953dcd5d35b4282
28804243
null
An Application of Computer Programming to the Reconstruction of a Proto-Language
This paper illustrates the use of a computer program as a tool in linguistic research. The program under consideration produces a concordance of words according to phonological segments and environments. Phonological segments are defined as a predetermined set of consonants and vowels. An environment is defined as the locus of occurrence of any of the phonological segments. The concordance facilitates the recognition of sound correspondences that lead to the reconstruction of a proto-language.

2.0 Program Description. The program for production of the concordance was written in the SNOBOL4 programming language, which was selected because of its pattern matching capabilities.[1] The summary Flow Chart of the program, found in §7, should be adequate for the experienced reader. Nevertheless, a few general comments are in order.

2.1 Initialization. All patterns to be used in the program are created during the Initialization. As originally conceived, the program was composed of one long run where ...

[1] For a full exposition of SNOBOL4, ...
{ "name": [ "Durham, Stanton P. and", "Rogers, David Ellis" ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 5
1969-09-01
0
6
null
null
null
The concordance facilitates the recognition of sound correspondences that lead to the reconstruction of a proto-language. The program for production of the concordance was written in the SNOBOL4 programming language, which was selected because of its pattern matching capabilities.[1] The summary Flow Chart of the program, found in §7, should be adequate for the experienced reader. Nevertheless, a few general comments are in order.

2.1 Initialization. All patterns to be used in the program are created during the Initialization. As originally conceived, the program was composed of one long run where all steps of the analysis were carried out. However, due to problems of internal storage caused by the numerous data, it was decided to run the program in two passes, each of which is explained below.

[1] For a full exposition of SNOBOL4, see Griswold, R.E., Page, J.F., and Polonsky, I.P., The SNOBOL4 Programming Language. Holmdel, New Jersey: Bell, 1968.

As the items are read, a determination is made of the largest size of each element, for later column alignment in the print-out. Each item is then stored as a string named after the sequential number assigned to the item, and the phonological form on which the concordance will be based is selected. The phonological form is then analyzed in order to retain the generic types and specific segment-environments occurring in that phonological form. A generic type is defined as a consonant or vowel in a given environment, as for example word-initial consonant or tonic free vowel. A specific segment-environment is defined as one certain consonant or vowel in a given environment, as for example word-initial P or tonic free A.

For each specific segment-environment found, a list is created composed of the numbers of the items containing that specific segment-environment; similar lists are created for every other segment-environment, and so on through the long tonic checked vowels, the non-long tonic free and checked vowels, the long pre-tonic free and checked vowels, the non-long pre-tonic free and checked vowels, etc., until all possible combinations of parameters have been listed.

After the entire item has been read into computer memory, and a determination has been made as to the size of each entry relative to the individual entries of all other items, a search is made for the so-called "special" environments, at C1 in the Flow Chart. None of these environments are applicable in the case of alteru. Therefore, these searches will fail, and the next search will be for a word-initial consonant or consonants, at C2 in the Flow Chart. In the case of alteru this search, too, will fail, and the next search will be for a vowel, at A8 in the Flow Chart. A tonic vowel in a checked syllable will be found at A8.2 and A8.6, and in subroutine B, tonic checked A will be queued to the string containing all tonic checked vowels, and the item number will be queued to a string containing the numbers of all items having a tonic checked A.

The next search will be for a consonant or consonants in all possible environments, beginning at A10 in the Flow Chart. Searches for a strong sequence or a geminate consonant will fail. At A12 the search for a sequence will be successful, the sequence found being L.TR.

Once more, subroutine B is entered: the sequence L.TR is queued to the string labeled "sequence C.CC" at B1.1, if this is the first occurrence of L.TR, and the item number is queued to the string containing the item numbers of all items having the sequence L.TR, at B1.2. Next, at A13 the syllable-final L, and at A14 the syllable-initial cluster TR, will be queued respectively to the strings containing syllable-final consonants and syllable-initial clusters, and the item number will be queued to the string containing the numbers of all items having a syllable-final L in the one case, and to the string containing the item numbers of all items having syllable-initial TR in the other. The subsequent search will be for a post-tonic vowel.

In Pass Two, the tape will be read, and the listings will be printed with the elements of each item aligned in columns.
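The queuing of item numbers to per-environment strings amounts to building an inverted index from segment-environments to items. Below is a minimal sketch of that idea in Python rather than SNOBOL4; it is my own illustration, not the authors' program, and the crude vowel/consonant model and sample items are assumptions:

```python
# Minimal sketch of the concordance idea: for each item, record which
# specific segment-environments occur in it, keyed so that the item
# numbers can later be printed per environment (Pass Two).
from collections import defaultdict

VOWELS = set("aeiou")

def segment_environments(form):
    """Return (segment, environment) pairs for a crude syllable-free model:
    word-initial consonant, word-final consonant, and vowels by position."""
    envs = []
    if form and form[0] not in VOWELS:
        envs.append((form[0], "word-initial C"))
    if form and form[-1] not in VOWELS:
        envs.append((form[-1], "word-final C"))
    for ch in form:
        if ch in VOWELS:
            envs.append((ch, "vowel"))
    return envs

items = {1: "alteru", 2: "petra", 3: "amare"}   # hypothetical item list
concordance = defaultdict(list)                 # (segment, env) -> item numbers
for number, form in items.items():
    for key in segment_environments(form):
        concordance[key].append(number)

for (seg, env), numbers in sorted(concordance.items()):
    print(f"{env} {seg}: items {numbers}")
```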
null
null
Main paper: The concordance facilitates the recognition of sound correspondences that lead to the reconstruction of a proto-language. The program for production of the concordance was written in the SNOBOL4 programming language, which was selected because of its pattern matching capabilities.[1] The summary Flow Chart of the program, found in §7, should be adequate for the experienced reader. Nevertheless, a few general comments are in order.

2.1 Initialization. All patterns to be used in the program are created during the Initialization. As originally conceived, the program was composed of one long run where all steps of the analysis were carried out. However, due to problems of internal storage caused by the numerous data, it was decided to run the program in two passes, each of which is explained below.

[1] For a full exposition of SNOBOL4, see Griswold, R.E., Page, J.F., and Polonsky, I.P., The SNOBOL4 Programming Language. Holmdel, New Jersey: Bell, 1968.

As the items are read, a determination is made of the largest size of each element, for later column alignment in the print-out. Each item is then stored as a string named after the sequential number assigned to the item, and the phonological form on which the concordance will be based is selected. The phonological form is then analyzed in order to retain the generic types and specific segment-environments occurring in that phonological form. A generic type is defined as a consonant or vowel in a given environment, as for example word-initial consonant or tonic free vowel. A specific segment-environment is defined as one certain consonant or vowel in a given environment, as for example word-initial P or tonic free A.

For each specific segment-environment found, a list is created composed of the numbers of the items containing that specific segment-environment; similar lists are created for every other segment-environment, and so on through the long tonic checked vowels, the non-long tonic free and checked vowels, the long pre-tonic free and checked vowels, the non-long pre-tonic free and checked vowels, etc., until all possible combinations of parameters have been listed.

After the entire item has been read into computer memory, and a determination has been made as to the size of each entry relative to the individual entries of all other items, a search is made for the so-called "special" environments, at C1 in the Flow Chart. None of these environments are applicable in the case of alteru. Therefore, these searches will fail, and the next search will be for a word-initial consonant or consonants, at C2 in the Flow Chart. In the case of alteru this search, too, will fail, and the next search will be for a vowel, at A8 in the Flow Chart. A tonic vowel in a checked syllable will be found at A8.2 and A8.6, and in subroutine B, tonic checked A will be queued to the string containing all tonic checked vowels, and the item number will be queued to a string containing the numbers of all items having a tonic checked A.

The next search will be for a consonant or consonants in all possible environments, beginning at A10 in the Flow Chart. Searches for a strong sequence or a geminate consonant will fail. At A12 the search for a sequence will be successful, the sequence found being L.TR.

Once more, subroutine B is entered: the sequence L.TR is queued to the string labeled "sequence C.CC" at B1.1, if this is the first occurrence of L.TR, and the item number is queued to the string containing the item numbers of all items having the sequence L.TR, at B1.2. Next, at A13 the syllable-final L, and at A14 the syllable-initial cluster TR, will be queued respectively to the strings containing syllable-final consonants and syllable-initial clusters, and the item number will be queued to the string containing the numbers of all items having a syllable-final L in the one case, and to the string containing the item numbers of all items having syllable-initial TR in the other. The subsequent search will be for a post-tonic vowel.

In Pass Two, the tape will be read, and the listings will be printed with the elements of each item aligned in columns.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0.009023
null
null
null
null
null
null
null
null
fbff1afb0deac3705cf4d89a1f08ca9cc3c00638
28508921
null
Machine Transcoding
Machine Transcoding. There are grounds for questioning whether translation done by computer should resemble closely, either in process or in product, translation done manually. Several workers in machine translation have proposed radical departures from the naively adopted goal of simulating manual translation mechanically. Because they saw a similarity between their mechanically feasible output and the pidgin languages that have arisen naturally in some parts of the world, they dubbed their proposals 'pidgin translation'.
{ "name": [ "Hofmann, T. R. and", "Harris, Brian" ], "affiliation": [ null, null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 62: Collection of Abstracts of Papers
1969-09-01
0
0
null
This paper explores a more rigorous approach, which may be described as follows: texts in one language are encoded into words and other morphs taken from a second language, but this is done without disturbing the syntax of the source language. The term 'transcoding' is coined to denote that process. Transcoding results in a product in many ways similar to earlier types of pidgin translation (and incidentally more like natural pidgin languages than the latter were), but is founded on the principle that no information-bearing elements at all may be lost in the process. This principle is embodied in two strict guidelines: 1. Each and every grammatical morph in the original must be reproduced in the transcoded text by a code morph; the linear order of elements is not to be changed. 2. The transcoding dictionary (the codebook) must be so constructed as to preserve all the lexical contrasts inherent in the source vocabulary. These guidelines are investigated and supplemented, and their effects are predicted with regard to (1) the output, (2) the user's ability to interpret transcoded material and to learn to interpret it better, (3) the possibility of employing transcoding to make rough drafts for the human professionals in translation agencies to revise. Sample passages of transcoded Arabic and transcoded Chinese are presented, following which some comparisons are made between transcoding and pidgin translation, machine translation as practised or envisioned, and manual translation. T.R. Hofmann, Collège Militaire Royal, St-Jean, Que. One may doubt whether a machine translation should closely resemble a translation done by hand, either in its form or in the process that produces it. Several of those who have attacked the problems of machine translation have proposed departing radically from the naive goal of mechanically simulating the human translation process. Having seen certain similarities between their machine output, which clearly represented something feasible on a computer, and the natural languages known as creoles that have arisen in various regions of the world, they dubbed their kind of translation 'creolized translation'. The present communication takes stock of a more rigorous approach. Texts in a source language are re-encoded into words and morphemes drawn from a second language, while preserving the syntax and even the semantics of the source language. In this communication, that procedure is called 'transcoding'. Transcoding thus gives us a product similar in many respects to the creolized translation above; indeed, it comes even closer to the natural creoles than the latter did. It is, however, based on a principle that formally forbids the loss, in the course of transcoding, of any element bearing grammatical or semantic information. This principle is realized through two guidelines: 1. All the grammatical elements of the source text shall be represented in the transcoded version; the linear order of the elements must not be modified. 2.
When constructing a dictionary intended for transcoding, which would in effect be a kind of codebook, all the contrasts arising from the structure of the source-language lexicon shall be carefully preserved. We explore the scope of these guidelines, supplement them, and predict their effects on (1) the output, (2) the user's ability to interpret transcoded material and to learn to interpret it better, and (3) the possibility of employing transcoding to prepare rough first drafts which the human specialists of translation agencies could then improve. We present samples of transcoded Arabic and transcoded Chinese, followed by some comparisons between transcoding, creolized translation, machine translation as it is practised and as it is envisaged for the future, and manual translation. Brian Harris, Machine Translation Project, Université de Montréal.
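The two guidelines amount to a deterministic morph-for-morph substitution through a codebook. A minimal sketch follows, assuming a toy codebook and pre-segmented morphs (the paper's actual Arabic and Chinese codebooks are not reproduced here); raising an error on a codebook gap is one way to honour the no-information-loss principle.

```python
# Minimal transcoding sketch per the two guidelines above: every source
# morph must map to exactly one code morph, and linear order is kept.
# The codebook entries and morph segmentation are hypothetical.

codebook = {
    "kitab": "book", "-u": "-NOM", "-un": "-INDEF",
    "jadid": "new",  "al-": "the-",
}

def transcode(morphs):
    """Replace each morph via the codebook without reordering
    (guideline 1). An unmapped morph is an error rather than a silent
    omission, so no information-bearing element is dropped
    (guideline 2)."""
    out = []
    for m in morphs:
        if m not in codebook:
            raise KeyError(f"codebook gap: {m!r} would lose information")
        out.append(codebook[m])
    return " ".join(out)

print(transcode(["kitab", "-un", "jadid", "-un"]))  # book -INDEF new -INDEF
```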
null
null
null
null
Main paper: : This paper explores a more rigorous approach, which may be described as follows: texts in one language are encoded into words and other morphs taken from a second language, but this is done without disturbing the syntax of the source language. The term 'transcoding' is coined to denote that process. Transcoding results in a product in many ways similar to earlier types of pidgin translation (and incidentally more like natural pidgin languages than the latter were), but is founded on the principle that no information-bearing elements at all may be lost in the process. This principle is embodied in two strict guidelines: 1. Each and every grammatical morph in the original must be reproduced in the transcoded text by a code morph; the linear order of elements is not to be changed. 2. The transcoding dictionary (the codebook) must be so constructed as to preserve all the lexical contrasts inherent in the source vocabulary. These guidelines are investigated and supplemented, and their effects are predicted with regard to (1) the output, (2) the user's ability to interpret transcoded material and to learn to interpret it better, (3) the possibility of employing transcoding to make rough drafts for the human professionals in translation agencies to revise. Sample passages of transcoded Arabic and transcoded Chinese are presented, following which some comparisons are made between transcoding and pidgin translation, machine translation as practised or envisioned, and manual translation. T.R. Hofmann, Collège Militaire Royal, St-Jean, Que. One may doubt whether a machine translation should closely resemble a translation done by hand, either in its form or in the process that produces it. Several of those who have attacked the problems of machine translation have proposed departing radically from the naive goal of mechanically simulating the human translation process. Having seen certain similarities between their machine output, which clearly represented something feasible on a computer, and the natural languages known as creoles that have arisen in various regions of the world, they dubbed their kind of translation 'creolized translation'. The present communication takes stock of a more rigorous approach. Texts in a source language are re-encoded into words and morphemes drawn from a second language, while preserving the syntax and even the semantics of the source language. In this communication, that procedure is called 'transcoding'. Transcoding thus gives us a product similar in many respects to the creolized translation above; indeed, it comes even closer to the natural creoles than the latter did. It is, however, based on a principle that formally forbids the loss, in the course of transcoding, of any element bearing grammatical or semantic information. This principle is realized through two guidelines: 1. All the grammatical elements of the source text shall be represented in the transcoded version; the linear order of the elements must not be modified. 2.
When constructing a dictionary intended for transcoding, which would in effect be a kind of codebook, all the contrasts arising from the structure of the source-language lexicon shall be carefully preserved. We explore the scope of these guidelines, supplement them, and predict their effects on (1) the output, (2) the user's ability to interpret transcoded material and to learn to interpret it better, and (3) the possibility of employing transcoding to prepare rough first drafts which the human specialists of translation agencies could then improve. We present samples of transcoded Arabic and transcoded Chinese, followed by some comparisons between transcoding, creolized translation, machine translation as it is practised and as it is envisaged for the future, and manual translation. Brian Harris, Machine Translation Project, Université de Montréal. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
2a7cfab7184affce6bd8c7726ff75c0a9c4d6abd
15417147
null
Computers in the {Y}ugoslav {S}erbo-{C}roat/{E}nglish {C}ontrastive {A}nalysis {P}roject
As far as the present writer is aware, the Yugoslav Serbo-Croat/English Contrastive Analysis Project is the first contrastive analysis effort to use a large corpus of parallel texts. The corpus is made up of the Brown Corpus (reduced by 50%) with its Serbo-Croat translation, and a smaller Control Corpus (Serbo-Croat originals and English translation). A total, thus, of twice 500,000 words plus twice 150,000 words, or a grand total of some 1,300,000 words of running text. 0.2. The Project, let us make it clear, is not exclusively based on this corpus. Compilation and confrontation of grammatical statements by various authors, plus plain old intuition, figure prominently in the methodology. The insistence on a large corpus, however, is due to the conviction, prevailing among the Project workers, that only an extensive investigation of correspondences (original-language elements and their translations) can adequately reveal the less predictable patterns which tend to have a considerable contrastive analysis potential. 0.21. The most productive method of obtaining correspondences from our corpus is to concordance separately its Serbo-Croat and English parts, then to merge the resulting KWIC concordances into a contrastive KWIC concordance (with English keywords and alternating English and Serbo-Croat lines). For the more promising patterns, the merging procedure will be used twice, with both English and Serbo-Croat keywords. 0.22. In view of the size of the corpus, and the extensive concordancing required as a major procedure in the Project, the need for computer processing is obvious. It requires no undue strain on imagination to realize the soul-numbing effect of sheer physical handling of this mass of text if written out on slips. Even in its most efficient and flexible form of a manual concordance (a sentence-slip file with keywords underlined monolingually), without which no manual pairing
{ "name": [ "Bujas, Zeljko" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 26
1969-09-01
4
0
null
null
null
null
The Project, let us make it clear, is not exclusively based on this corpus. Compilation and confrontation of grammatical statements by various authors, plus plain old intuition, figure prominently in the methodology. The insistence on a large corpus, however, is due to the conviction, prevailing among the Project workers, that only an extensive investigation of correspondences (original-language elements and their translations) can adequately reveal the less predictable patterns which tend to have a considerable contrastive analysis potential. The most productive method of obtaining correspondences from our corpus is to concordance separately its Serbo-Croat and English parts, then to merge the resulting KWIC concordances into a contrastive KWIC concordance (with English keywords and alternating English and Serbo-Croat lines). For the more promising patterns, the merging procedure will be used twice, with both English and Serbo-Croat keywords. In view of the size of the corpus, and the extensive concordancing required as a major procedure in the Project, the need for computer processing is obvious. It requires no undue strain on imagination to realize the soul-numbing effect of sheer physical handling of this mass of text if written out on slips. Even in its most efficient and flexible form of a manual concordance (a sentence-slip file with keywords underlined monolingually), without which no manual pairing of correspondences is possible, the manual handling of this 1,300,000-word corpus calls for a staggering amount of time and effort to prepare. According to our careful estimate, a total of 7,100 man-hours is required to make such a concordance (without the 1,900 hours of translation from English to Serbo-Croat, and vice versa). The slip file thus obtained would, however, secure only a one-way approach: either from English or from Serbo-Croat. A slip file allowing a two-way approach would require an additional effort of at least 4,500 man-hours. Finally, even these two manual concordances would still leave unfilled the need for reverse concordancing, so important for morphosyntactic research. To meet this need, two additional (though less ample) slip files would have to be established. In view of all this, the Yugoslav Serbo-Croat/English Contrastive Analysis Project has from the outset linked the planning of its work to the services of a local computer, the City of Zagreb IBM 360/30 machine. 1.1. Stage 1 of computer processing. The tape with the full text of the Brown Corpus (purchased from Brown University, Providence, R.I., U.S.A.), which had been prepared on an IBM 7090 machine, had first to be converted from the density of 800 BPI to the 1,600 BPI required by the Zagreb computer. 1.11. After this, a printout of the entire text was obtained on the Zagreb machine. The printing took about eight hours, with a special program restructuring the original format of the Brown Corpus text. This program left out the location-marker column on the right-hand margin of the printout, and added a sequence of sentence numbers (from 00001 to 52533) on the left. The full text of the Brown Corpus was now reduced by 50%, retaining, however, as closely as possible, the same proportions of the 15 genres (styles) contained in the Corpus. Printouts of the samples retained in this reduced version were then sent out to reliable translators, selected to be representative of the three major regional variants of Serbo-Croat (western, central and eastern).
Their instructions were to translate at normal speed, and as carefully as when they do any other paid translation work. The only limitation imposed upon them was to observe the sentence limit in the original (English or, in the Control Corpus, in Serbo-Croat). They were not to split the English sentence into two or more Serbo-Croat sentences, nor were they allowed to combine two or more English sentences into one Serbo-Croat sentence. The reason for this was the need to secure a mechanical pairing of the English (or Serbo-Croat) keyword, marked by its sentence number, with the same-numbered, parallel, Serbo-Croat (or English) sentence in the two-language concordancing planned for the later Project stages. 1.2. Stage 2. A new magnetic tape will be prepared of the reduced Brown Corpus text, with the sentence sequence numbers interpolated. This version will be used for all subsequent concordancing. 1.3. Stage 3. Using this magnetic tape, the IBM 360/30 will now prepare a full forward KWIC concordance of the reduced Brown Corpus text. 1.4. Stage 4. Now (while the reduced Brown Corpus is still being translated) we shall use the same tape to obtain a reverse KWIC concordance of the same text. Since all "function words", such as of, had, most, those, did, etc., were already isolated in the previous stage (in the forward concordance), this will further reduce the mass of text to be concordanced by one-half. 1.5. Stage 5. The Serbo-Croat translation of the reduced Brown Corpus, by now in an advanced stage, will be copied out on a Flexowriter in batches (as translators send in their typescripts), resulting in a paper tape. The same procedure can, at this stage, be applied to the 300,000 words of the Control Corpus. No time for translation has to be set apart here, since only already published English translations of Serbo-Croat originals are to be used. 1.6. Stage 6. Although the Serbo-Croat paper tapes obtained in the preceding stage are immediately computer-processable, we shall convert them to a magnetic tape, because this medium secures an incomparably speedier processing on the computer. 1.61. We hope that stages 2 to 6 will not take more than twenty weeks (if enough personnel can be hired simultaneously). 1.7. Stage 7. The Serbo-Croat magnetic tape will then be used for the preparation on the IBM 360/30 of a full forward KWIC concordance of the Serbo-Croat Corpus. 1.8. Stage 8. Using the same tape, we now plan to produce a reverse KWIC concordance of the Serbo-Croat text. This concordance will be selective in the same sense that the English reverse concordance was (cf. Stage 4). 1.9. Stage 9. With the normal and reverse KWIC concordances of both the English and Serbo-Croat corpora now obtained, we can move on to the final stage(s) of central importance to the Project, i.e. the merging of these monolingual concordances to get contrastive concordances. We have planned four such concordances, and have attempted to illustrate them here by short simulated samples. As at the time of writing no concordances of the Brown Corpus text (either original or translation) were available, the text used for these samples is the Serbo-Croat original and its translation into English of the novel Povratak Filipa Latinovicza. The reason why these four concordances have been presented under one processing stage (9) is that, first, we are not sure whether we can afford the computer for each of them, and, second, we do not, at this point, know how selective each of them is going to be.
A considerable reduction of the text to be concordanced can be achieved in reverse concordancing if we restrict ourselves only to words ending in a characteristic morpheme with clearly foreseeable contrastive analysis potential (such as -ed, -ly, -est, -ing, -ness, -less, etc. in English, and -ao, -vši, -en, etc. in Serbo-Croat). 1.11. It may be pointed out here that, irrespective of how restrictive the selection of keywords for concordancing may have to be, no concessions should be made in the principle of bilingual approach. Only if, in our investigation of the contrastive potential of individual elements, we strictly observe the approach from both the English and the Serbo-Croat texts, can we be certain that we shall have covered all possible contrastive description patterns based on correspondences in both corpora. Once contrastive concordancing has been completed, we shall still be facing some practical technical problems. 2.1. Project analysts, for instance, will often have to be provided with slips instead of computer printout sheets. Only if the material being analyzed is in the form of slips will they be able to classify and reclassify the key elements swiftly and flexibly (by putting together, breaking up and re-establishing batches of slips). Cutting up the concordance printouts to get the slips is not very practical in view of the varying size of contrasted pairs of elements with their context (cf. n. 9, second half). The way around this, clearly, is to have the pairs printed out at regular intervals with sufficient blank space in between. This, however, would probably triple the amount of printout paper required. Also, this is complicated further by the need for a number of copies of each pair (slip), because of simultaneous demands that may often be made upon the same slip by several Project analysts, approaching the same element from various descriptive levels. These copies could be secured by using special, multiple-carbon printout paper, but this might prove quite expensive. In view of all this, the Yugoslav Serbo-Croat/English Contrastive Analysis Project has envisaged the use of a Flexowriter here as an alternative method. This machine has already provided us with the paper tape of the Serbo-Croat translation of the reduced Brown Corpus, plus the tapes of Serbo-Croat originals and English translations of the Control Corpus (cf. Stage 5). The missing paper tape of the English text of the Brown Corpus can be obtained on a magtape-to-papertape converter. Once both paper tapes are ready, running them through the Flexowriter provides us with up to 13 (some claim 20) carbons of each contrasted pair. An additional advantage of using the Flexowriter for slip duplication is in the less awkward shape of the slips. Paper tapes reproduce the text in the 60-character-wide lines of the original translators' typescript, as opposed to the 110 to 120-character streamers of normal computer printout (unless the concordance printout was programmed for a narrower format, requiring considerably more paper). The resulting slip files of sentence-numbered English and Serbo-Croat texts, coupled with the Project's basic (monolingual, forward and reverse) concordances, can now be used as a replacement for contrastive concordances.
It would work approximately like this: upon receiving an analyst's request for examples of all correspondences in the corpus of an element under analysis, the Project headquarters in Zagreb would look the element up in one of the basic concordances, record the sentence numbers of all the occurrences, extract the slips bearing these numbers from the Flexowriter-produced slip file, and forward them to the analyst for further research. 7. Putting the top 100 words from the Brown Corpus Rank List on the exclusion list (compared to a total of some 150 "function words", in the present author's estimate) would reduce the text by 47.4 per cent, while including only one morphologically marked word (YEARS) and two lexical words (NEW, TIME). Expanding the exclusion list to cover the top 200 words would probably not be economical (though only two additional morphologically marked words would be included: UNITED and STATES), because the computer would be slowed down, whereas the textual mass would be reduced by only 6 more per cent (to 53.6 per cent). 8. Which may take between 40 and 60 computer hours, as opposed to an estimated 2,350 hours of manual processing (for only the English forward concordance at that). 9. In addition to being simulations, all these concordance samples are in an idealized format, with the correspondences spatially parallel to the keyword. In practice, however, it is impossible to achieve this ideal textual parallelism, because there are no other formal signals to govern it, except the sentence sequence number, which can only mark the sentence as a whole. For this reason, the actual computer concordances will, when ready, have the correspondence to the keyword printed out with the whole sentence in which it occurs, under the single line with the keyword. This will, naturally, increase the size of the concordance, but not by more than about 50 per cent in our estimate. This is because only an approximate 40 per cent of all sentences in the original text of the Brown Corpus are in excess of 20 words (which can be accommodated by the average printout line). A mere 6 per cent of these sentences are longer than 40 words, requiring, consequently, more than two printout lines.
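Since every KWIC line carries the sentence sequence number of its source sentence, the contrastive merge reduces to a keyed join. A minimal sketch follows, with hypothetical data structures, assuming the whole parallel sentence is printed under each keyword line, as note 9 prescribes.

```python
# Sketch of the contrastive-concordance merge described above: each
# monolingual KWIC line carries the sentence number of its source
# sentence, and that number links it to the parallel sentence in the
# other language. The structures and sample text are illustrative.

english_kwic = [                    # (keyword, sentence_no, KWIC line)
    ("house", 17, "went into the HOUSE before dark"),
    ("house", 42, "the HOUSE was empty"),
]
serbo_croat_sentences = {           # sentence_no -> parallel sentence
    17: "Usao je u kucu prije mraka.",
    42: "Kuca je bila prazna.",
}

def contrastive_concordance(kwic, parallel):
    """Under each English KWIC line, print the whole same-numbered
    Serbo-Croat sentence (word-level alignment is not attainable, so
    the full sentence is printed instead)."""
    for keyword, n, line in sorted(kwic):
        yield f"{n:05d}  {line}"
        yield f"       {parallel[n]}"

for row in contrastive_concordance(english_kwic, serbo_croat_sentences):
    print(row)
```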
null
Main paper: 0.2.: The Project, let us make it clear, is not exclusively based on this corpus. Compilation and confrontation of grammatical statements by various authors, plus plain old intuition, figure prominently in the methodology. The insistence on a large corpus, however, is due to the conviction, prevailing among the Project workers, that only an extensive investigation of correspondences (original-language elements and their translations) can adequately reveal the less predictable patterns which tend to have a considerable contrastive analysis potential. The most productive method of obtaining correspondences from our corpus is to concordance separately its Serbo-Croat and English parts, then to merge the resulting KWIC concordances into a contrastive KWIC concordance (with English keywords and alternating English and Serbo-Croat lines). For the more promising patterns, the merging procedure will be used twice, with both English and Serbo-Croat keywords. In view of the size of the corpus, and the extensive concordancing required as a major procedure in the Project, the need for computer processing is obvious. It requires no undue strain on imagination to realize the soul-numbing effect of sheer physical handling of this mass of text if written out on slips. Even in its most efficient and flexible form of a manual concordance (a sentence-slip file with keywords underlined monolingually), without which no manual pairing of correspondences is possible, the manual handling of this 1,300,000-word corpus calls for a staggering amount of time and effort to prepare. According to our careful estimate, a total of 7,100 man-hours is required to make such a concordance (without the 1,900 hours of translation from English to Serbo-Croat, and vice versa). The slip file thus obtained would, however, secure only a one-way approach: either from English or from Serbo-Croat. A slip file allowing a two-way approach would require an additional effort of at least 4,500 man-hours. Finally, even these two manual concordances would still leave unfilled the need for reverse concordancing, so important for morphosyntactic research. To meet this need, two additional (though less ample) slip files would have to be established. In view of all this, the Yugoslav Serbo-Croat/English Contrastive Analysis Project has from the outset linked the planning of its work to the services of a local computer, the City of Zagreb IBM 360/30 machine. 1.1. Stage 1 of computer processing. The tape with the full text of the Brown Corpus (purchased from Brown University, Providence, R.I., U.S.A.), which had been prepared on an IBM 7090 machine, had first to be converted from the density of 800 BPI to the 1,600 BPI required by the Zagreb computer. 1.11. After this, a printout of the entire text was obtained on the Zagreb machine. The printing took about eight hours, with a special program restructuring the original format of the Brown Corpus text. This program left out the location-marker column on the right-hand margin of the printout, and added a sequence of sentence numbers (from 00001 to 52533) on the left. The full text of the Brown Corpus was now reduced by 50%, retaining, however, as closely as possible, the same proportions of the 15 genres (styles) contained in the Corpus. Printouts of the samples retained in this reduced version were then sent out to reliable translators, selected to be representative of the three major regional variants of Serbo-Croat (western, central and eastern).
Their instructions were to translate at normal speed, and as carefully as when they do any other paid translation work. The only limitation imposed upon them was to observe the sentence limit in the original (English or, in the Control Corpus, in Serbo-Croat). They were not to split the English sentence into two or more Serbo-Croat sentences, nor were they allowed to combine two or more English sentences into one Serbo-Croat sentence. The reason for this was the need to secure a mechanical pairing of the English (or Serbo-Croat) keyword, marked by its sentence number, with the same-numbered, parallel, Serbo-Croat (or English) sentence in the two-language concordancing planned for the later Project stages. 1.2. Stage 2. A new magnetic tape will be prepared of the reduced Brown Corpus text, with the sentence sequence numbers interpolated. This version will be used for all subsequent concordancing. 1.3. Stage 3. Using this magnetic tape, the IBM 360/30 will now prepare a full forward KWIC concordance of the reduced Brown Corpus text. 1.4. Stage 4. Now (while the reduced Brown Corpus is still being translated) we shall use the same tape to obtain a reverse KWIC concordance of the same text. Since all "function words", such as of, had, most, those, did, etc., were already isolated in the previous stage (in the forward concordance), this will further reduce the mass of text to be concordanced by one-half. 1.5. Stage 5. The Serbo-Croat translation of the reduced Brown Corpus, by now in an advanced stage, will be copied out on a Flexowriter in batches (as translators send in their typescripts), resulting in a paper tape. The same procedure can, at this stage, be applied to the 300,000 words of the Control Corpus. No time for translation has to be set apart here, since only already published English translations of Serbo-Croat originals are to be used. 1.6. Stage 6. Although the Serbo-Croat paper tapes obtained in the preceding stage are immediately computer-processable, we shall convert them to a magnetic tape, because this medium secures an incomparably speedier processing on the computer. 1.61. We hope that stages 2 to 6 will not take more than twenty weeks (if enough personnel can be hired simultaneously). 1.7. Stage 7. The Serbo-Croat magnetic tape will then be used for the preparation on the IBM 360/30 of a full forward KWIC concordance of the Serbo-Croat Corpus. 1.8. Stage 8. Using the same tape, we now plan to produce a reverse KWIC concordance of the Serbo-Croat text. This concordance will be selective in the same sense that the English reverse concordance was (cf. Stage 4). 1.9. Stage 9. With the normal and reverse KWIC concordances of both the English and Serbo-Croat corpora now obtained, we can move on to the final stage(s) of central importance to the Project, i.e. the merging of these monolingual concordances to get contrastive concordances. We have planned four such concordances, and have attempted to illustrate them here by short simulated samples. As at the time of writing no concordances of the Brown Corpus text (either original or translation) were available, the text used for these samples is the Serbo-Croat original and its translation into English of the novel Povratak Filipa Latinovicza. The reason why these four concordances have been presented under one processing stage (9) is that, first, we are not sure whether we can afford the computer for each of them, and, second, we do not, at this point, know how selective each of them is going to be.
A considerable reduction of the text to be concordanced can be achieved in reverse concordancing if we restrict ourselves only to words ending in a characteristic morpheme with clearly foreseeable contrastive analysis potential (such as -ed, -ly, -est, -ing, -ness, -less, etc. in English, and -ao, -vši, -en, etc. in Serbo-Croat). 1.11. It may be pointed out here that, irrespective of how restrictive the selection of keywords for concordancing may have to be, no concessions should be made in the principle of bilingual approach. Only if, in our investigation of the contrastive potential of individual elements, we strictly observe the approach from both the English and the Serbo-Croat texts, can we be certain that we shall have covered all possible contrastive description patterns based on correspondences in both corpora. Once contrastive concordancing has been completed, we shall still be facing some practical technical problems. 2.1. Project analysts, for instance, will often have to be provided with slips instead of computer printout sheets. Only if the material being analyzed is in the form of slips will they be able to classify and reclassify the key elements swiftly and flexibly (by putting together, breaking up and re-establishing batches of slips). Cutting up the concordance printouts to get the slips is not very practical in view of the varying size of contrasted pairs of elements with their context (cf. n. 9, second half). The way around this, clearly, is to have the pairs printed out at regular intervals with sufficient blank space in between. This, however, would probably triple the amount of printout paper required. Also, this is complicated further by the need for a number of copies of each pair (slip), because of simultaneous demands that may often be made upon the same slip by several Project analysts, approaching the same element from various descriptive levels. These copies could be secured by using special, multiple-carbon printout paper, but this might prove quite expensive. In view of all this, the Yugoslav Serbo-Croat/English Contrastive Analysis Project has envisaged the use of a Flexowriter here as an alternative method. This machine has already provided us with the paper tape of the Serbo-Croat translation of the reduced Brown Corpus, plus the tapes of Serbo-Croat originals and English translations of the Control Corpus (cf. Stage 5). The missing paper tape of the English text of the Brown Corpus can be obtained on a magtape-to-papertape converter. Once both paper tapes are ready, running them through the Flexowriter provides us with up to 13 (some claim 20) carbons of each contrasted pair. An additional advantage of using the Flexowriter for slip duplication is in the less awkward shape of the slips. Paper tapes reproduce the text in the 60-character-wide lines of the original translators' typescript, as opposed to the 110 to 120-character streamers of normal computer printout (unless the concordance printout was programmed for a narrower format, requiring considerably more paper). The resulting slip files of sentence-numbered English and Serbo-Croat texts, coupled with the Project's basic (monolingual, forward and reverse) concordances, can now be used as a replacement for contrastive concordances.
It would work approximately like this: upon receiving an analyst's request for examples of all correspondences in the corpus of an element under analysis, the Project headquarters in Zagreb would look the element up in one of the basic concordances, record the sentence numbers of all the occurrences, extract the slips bearing these numbers from the Flexowriter-produced slip file, and forward them to the analyst for further research. 7. Putting the top 100 words from the Brown Corpus Rank List on the exclusion list (compared to a total of some 150 "function words", in the present author's estimate) would reduce the text by 47.4 per cent, while including only one morphologically marked word (YEARS) and two lexical words (NEW, TIME). Expanding the exclusion list to cover the top 200 words would probably not be economical (though only two additional morphologically marked words would be included: UNITED and STATES), because the computer would be slowed down, whereas the textual mass would be reduced by only 6 more per cent (to 53.6 per cent). 8. Which may take between 40 and 60 computer hours, as opposed to an estimated 2,350 hours of manual processing (for only the English forward concordance at that). 9. In addition to being simulations, all these concordance samples are in an idealized format, with the correspondences spatially parallel to the keyword. In practice, however, it is impossible to achieve this ideal textual parallelism, because there are no other formal signals to govern it, except the sentence sequence number, which can only mark the sentence as a whole. For this reason, the actual computer concordances will, when ready, have the correspondence to the keyword printed out with the whole sentence in which it occurs, under the single line with the keyword. This will, naturally, increase the size of the concordance, but not by more than about 50 per cent in our estimate. This is because only an approximate 40 per cent of all sentences in the original text of the Brown Corpus are in excess of 20 words (which can be accommodated by the average printout line). A mere 6 per cent of these sentences are longer than 40 words, requiring, consequently, more than two printout lines. Appendix:
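The exclusion-list idea from footnote 7 (skip the top-ranked function words before forward concordancing) can be illustrated with a toy stop list; the word list and sentences below are tiny stand-ins for the Brown Corpus Rank List, not the Project's actual data.

```python
# Toy illustration of exclusion-list filtering before forward
# concordancing: words on a high-frequency "function word" list are
# skipped, shrinking the mass of text to be concordanced.

exclusion = {"the", "of", "and", "to", "a", "in", "is", "was",
             "he", "into"}                      # illustrative stop list

def keywords_for_concordance(sentence_no, sentence):
    """Return (keyword, sentence_no) pairs for non-excluded words only."""
    return [(w.lower(), sentence_no)
            for w in sentence.split()
            if w.lower() not in exclusion]

text = {1: "The house was empty", 2: "He went into the house"}
pairs = [p for n, s in text.items() for p in keywords_for_concordance(n, s)]
print(pairs)   # [('house', 1), ('empty', 1), ('went', 2), ('house', 2)]
```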
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
665
0
null
null
null
null
null
null
null
null
164d5336f1128c60a2db16f7385a36e807bfda18
29849726
null
A Search Algorithm and Data Structure for an Efficient Information System
This paper describes a system for information storage, retrieval, and updating, with special attention to the search algorithm and data structure demanded for maximum program efficiency. The program efficiency is especially warranted when a natural language or a symbolic language is involved in the searching process. The system is a basic framework for an efficient information system. It can be implemented for text processing and document retrieval; numerical data retrieval; and for handling of large files such as dictionaries, catalogs, and personnel records, as well as graphic information. Currently, eight commands are implemented and operational in batch mode on a CDC 3600: STORE, RETRIEVE, ADD, DELETE, REPLACE, PRINT, COMPRESS and LIST. Further development will be on the use of teletype console, CRT terminal, and plotter under a time-sharing environment for producing immediate responses. The maximum program efficiency is obtained through a unique search algorithm and data structure. Instead of examining the recall ratio and the precision ratio at a higher level, this efficiency is measured in the most basic term of "average number of searches" required for looking up an item. In order to identify an item, at least one search is necessary even if it is found the first time. However, through the use of the hash-address of a key or keyword, in conjunction with an indirect-chaining list-structured table, and a large available space list, the average number of searches required for retrieving a certain item is 1.25 regardless of the size of the file in question. This is to be compared with 15.6 searches for the binary search technique in a 50,000-item file, and 5.8 searches for the letter-table method with no regard to file size.
{ "name": [ "Yang, Shou-chuan" ], "affiliation": [ null ] }
null
null
{I}nternational {C}onference on {C}omputational {L}inguistics {COLING} 1969: Preprint No. 51
1969-09-01
26
5
null
Best of all, since the program can use the same technique for storing and updating information, the maximum efficiency is also applicable to them with the same ease. Thus, it eliminates all the problems of inefficiency caused in establishing a file, and in updating a file. In our daily life, there are too many instances of looking for some type of information, such as checking a new vocabulary item in a dictionary, finding a telephone number and/or an address in a directory, searching for a book by a certain author, title, or subject in a library catalog card file, etc. Before the desired information is found, one has to go through a number of items or entries for close examination. The quantitative measurement is usually termed the "number of searches", "number of lookups", or "number of file accesses" in mechanized information systems. In the same volume (p. 139), Shoffner commented on the evaluation of systems that "it is important to be able to determine the extent to which file structures and search techniques influence the recall, precision, and other measures of system performance". Until very recently, file structure and search techniques were apparently unpopular topics among information scientists, with the exception of Salton and a few others. Nevertheless, these topics have been attacked constantly by system scientists for a much smaller size of file, where the maximum efficiency is a vital factor for the total system. They are frequently discussed under the title of "symbol table techniques", or "scatter storage techniques" as used by Morris as the title of his article. In addition to the "number of searches" and the "number of lookups", other terminologies used by the system scientists for referencing the most basic measure are the "number of probes", the "number of attempts", and the "search length". Ever since 1964, when the author stepped into the computer profession, he has noticed that the efficiency of a file handling system is always crippled by its file searching technique, no matter how sophisticated the system. This was especially the case during 1965 and 1966, when the author was employed at the Itek Corporation on an Air Force project of a Chinese to English machine translation experiment. The best search technique used for dictionary lookups was the binary search, which is still considered one of the best techniques available today. For a large file with a huge number of records, entries or items, the binary search technique will still yield a substantial number of searches, which is a function of the file size. The typical files are: dictionaries of any sort, telephone directories, library catalog cards, personnel records, merchandise catalogs, document collections, etc. For example, in a 50,000-entry file system the average number of searches for finding an entry is 15.6, calculated as log2(N). This figure will not be very satisfactory if frequent search inquiries to a file are the case.
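The figures quoted here are easy to verify. A quick check follows, assuming a loading factor L = 0.5 behind the 1.25 figure (the paper's indirect-chaining formula S = 1 + L/2 gives exactly that value at L = 0.5).

```python
# Quick check of the search-count figures quoted above: a linear scan
# averages about N/2 probes, binary search about log2(N), and indirect
# chaining about 1 + L/2 probes, independent of N.

import math

N = 50_000
L = 0.5                      # loading factor assumed for the 1.25 figure

print(N / 2)                 # 25000.0   linear search average
print(math.log2(N))          # ~15.6     binary search average
print(1 + L / 2)             # 1.25      indirect chaining average
```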
As a result of looking for better search techniques, at least three kinds of search techniques or algorithms were found to be more satisfactory than the binary search. Namely, they are: Lamb and Jacobson's "Letter Table Method", Peterson's "Open-Addressing Technique", and Johnson's "Indirect Chaining Method". They have the rather interesting common feature that the file size is no longer a factor in the search efficiency. In order to have a gross understanding of various search algorithms, six of them are examined and compared in respect to their search efficiencies. The linear search is also called sequential search or sequential scan. The linear search of an unordered list or file is the simplest one, but is inefficient, because the average number of searches for a given entry in an N-entry file will be N/2. For example, if N = 50,000, the average number of searches for a given entry is an enormous 25,000. It is assumed that the probability of finding a given entry in the file is one. The average number of searches in a linear search is calculated as S = (N + 1)/2, or S = N/2 if N is a large number. The linear search has to be performed in a consecutive storage area, and this sometimes causes certain inconvenience if the required storage area is very large. The inconvenience can be avoided by using the last computer word (or some bits of it) to index the location of the next section of storage area used, and thus form a single chain for searching. This variation of the linear search method is called the single chain method. It differs from the linear search in storage flexibility but is otherwise the same in efficiency. As early as 1957, Peterson introduced the open addressing method for random access storage addressing. This method is also called linear probing. It assumes the existence of a certain hash function to transform the key or keyword of an entry into a numerical value within the range of the table size, which is predetermined as 2^M for any integer value of M. The table size should be large enough to accommodate all the entries of the file. As in other methods, this method also assumes that the probability of finding an entry in the file is equal to one. Under these two assumptions, and if a good hash function is selected for a balanced distribution of hash values, the open addressing method will resolve the situation where more than one key is mapped into a particular slot in the table, and yields a very attractive average number of searches in most of the cases. The algorithm is best described in Morris' phrases: "The first method of generating successive calculated addresses to be suggested in the literature was simply to place colliding entries as near as possible to their nominally allocated position, in the following sense. Upon collision, search forward from the nominal position (the initial calculated address), until either the desired entry is found or an empty space is encountered, searching circularly past the end of the table to the beginning, if necessary. If an empty space is encountered, that space becomes the home for the new entry."
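Morris' description translates almost line for line into code. A minimal linear-probing sketch follows, assuming a power-of-two table and using Python's built-in hash as a stand-in for the paper's hash-address computation; the table-full case is ignored for brevity.

```python
# Minimal open-addressing (linear probing) sketch per Morris'
# description quoted above: on collision, scan forward circularly
# until the key or an empty slot is found.

M = 4
TABLE = [None] * (2 ** M)            # each slot holds (key, value) or None

def probe(key):
    i = hash(key) % len(TABLE)
    while True:
        yield i
        i = (i + 1) % len(TABLE)     # circular forward scan

def store(key, value):
    # Assumes the table never fills completely (no overflow handling).
    for i in probe(key):
        if TABLE[i] is None or TABLE[i][0] == key:
            TABLE[i] = (key, value)
            return

def retrieve(key):
    for i in probe(key):
        if TABLE[i] is None:
            return None              # empty slot reached: key is absent
        if TABLE[i][0] == key:
            return TABLE[i][1]

store("alpha", 1); store("beta", 2)
print(retrieve("beta"))              # 2
```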
For the indirect chaining method, the average number of searches is S = 1 + L/2, where L is the loading factor. More interesting yet, this formula is still valid when the loading factor L is greater than one, which means the number of entries exceeds the size of the chaining table. (Of the comparison table at this point, only fragments survive; its recoverable figures are S = (1 + 1/(1-L))/2 for open addressing at L ≤ 0.9, against S = 1 + L/2, about 1.25 and rated excellent, for indirect chaining at L < 1.0.) Under the consideration of the programming and computing efficiency and of the storage efficiency, usually a keyword of one computer word in size is more desirable, e.g., an eight-character keyword in a 48-bit word machine. In machine files such as dictionaries, thesauruses, keyword indices, and merchandise catalogs, the keyword is almost readily available for hashing. If the keyword is longer than the allowable number of characters, a simple word truncation at the right end or some word compression scheme can be used to reduce the word size to the desired number of characters. For example, the standard word abbreviation, and a simple procedure to eliminate all the vowels and one of any two identical consecutive consonants in a word, will all be acceptable for this purpose. In some cases where a unique number is assigned to an entry, there is no need to hash this number, provided that number is inside the range of the allotted table size. This is mostly seen when a record or document is arranged by its accession number or location index. Otherwise the number can be treated as letters and hashed by one of the methods described above. The REPLACE command replaces an entry itself in the Available Space List, with the same keyword and linkages in the Chaining Table unchanged. Replacement entries longer than the original entries can be treated in a few different ways. The current algorithm will truncate the excessive end and give a message to indicate the situation. A remedy, if desired, can then be made through the deletion of the incomplete entry and the addition of the complete entry as a new entry. This algorithm will make use of Algorithms RETRIEVE and STORE to find the desired entry and then replace the old contents with the new contents in the Available Space List. RP1. Compute I = HASH(KEY). RP2. If POINTER(I) = 0, exit on failure. RP3. If POINTER(I) ≠ 0, then I = POINTER(I). RP4. If KEYWORD(I) = KEY, then J = INDEX(I); clear the old entry in ASL starting from ASL(J), up to and including an EOE; the new entry is then stored in ASL starting from ASL(J). If the new entry plus an EOE can be accommodated in the old space, exit on success. If the new entry plus an EOE cannot be accommodated in the old space, then store the new entry up to the same length as the old entry and put an EOE at the end; exit on partial success. RP5. If KEYWORD(I) ≠ KEY and LINK(I) = 0, then exit on failure. RP6. If KEYWORD(I) ≠ KEY and LINK(I) ≠ 0, then I = LINK(I); go to Step RP4. RP7. Repeat for additional entries starting at Step RP1. The ADD efficiency is a function of the STORE efficiency. In this sample's statistics, the ADD efficiency, obtained through the addition of four entries to make a full chaining table, is in fact the same as if these four entries had been placed at the end of the STORE command. Thus the ADD efficiency of 0.75 for four entries can be combined with the STORE efficiency for twenty-eight entries, and the result is a STORE efficiency of 0.344 for a full 32-entry chaining table.
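The POINTER/LINK/INDEX bookkeeping used by the STORE, RETRIEVE and REPLACE algorithms can be sketched as follows; the table sizes, hash function and entry format are illustrative assumptions, not the paper's CDC 3600 layout, and overflow of the slot pool is ignored.

```python
# Sketch of the indirect-chaining tables: POINTER maps a hash address
# to the head of its chain, LINK threads colliding entries together,
# and INDEX points into the Available Space List (here simplified to
# a Python list of entry strings).

SIZE = 16
POINTER = [0] * SIZE
KEYWORD = [None] * (SIZE + 1)   # slot 0 unused, so 0 can mean "none"
LINK    = [0] * (SIZE + 1)
INDEX   = [0] * (SIZE + 1)
ASL     = []                    # available space list of stored entries
next_slot = 1

def store(key, entry):
    global next_slot
    i = hash(key) % SIZE
    slot, next_slot = next_slot, next_slot + 1
    KEYWORD[slot], INDEX[slot] = key, len(ASL)
    ASL.append(entry)
    if POINTER[i] == 0:
        POINTER[i] = slot            # start a new chain
    else:
        j = POINTER[i]
        while LINK[j] != 0:          # walk to the end of the chain
            j = LINK[j]
        LINK[j] = slot

def retrieve(key):
    i = hash(key) % SIZE
    j = POINTER[i]
    while j != 0:                    # chains are mostly 1-2 slots long
        if KEYWORD[j] == key:
            return ASL[INDEX[j]]
        j = LINK[j]
    return None

store("alpha", "first entry"); store("beta", "second entry")
print(retrieve("beta"))              # second entry
```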
It is noted that the ADD efficiency is always greater than (or equal to) the STORE efficiency, due to the non-emptiness of the chaining table. The RETRIEVE efficiency is always identical with the search efficiency as indicated in Table 3, which is an average of 1.25 for the indirect chaining method. The accumulative average number of searches does fall into the range between the minimum of 1.0 and the maximum of 1.5, at 1.263 for 59.4% table fullness. Specifically, the dictionary lookup operation, as the principal operation of an information system, is no longer a lengthy and painful procedure, and thus a barrier in natural language processing. Linguistic analysis may be provided with complete freedom in referring back and forth to any entry in the dictionary and the grammar, and the information gained at any stage of analysis can be stored and retrieved in the same way. Document retrieval may go deeper in content analysis, providing a synonym dictionary for better query descriptor transformations and matching functions. As Shoffner noted, "it is important to be able to determine the extent to which file structure and search techniques influence recall, precision, and other measures of system performance." This paper tends to support Shoffner's statement by presenting an analysis of current search techniques and a detailed description of the HAICS method, which is a possible framework for most information systems.
This attractive method, as suggested by Lamb and Jacobsen in 1961 for the dictionary lookup in a machine translation system, did not receive good attention for its possible applications in general information systems. The reasons could be the immediate response to the numerous letter tables after the second level, which indicated its inefficiency in storage, and that no clear search efficiency and update efficiency were expressed. Suppose only the twenty-six English letters are involved; in theory there are twenty-six tables at the first level, 26^2 at the second level, and so on. The average number of searches or the expected search length of this method cannot be calculated as a function of the file or dictionary size. It is simply the average number of letters or characters of a certain language plus one space character or any other delimiter. For the English language, it is a favorable 5.8 searches (S = 4.8 + 1), with no concern for the file size. Its update efficiency is compatible with its search efficiency and may be estimated at less than twice the average number of searches. In order to achieve the above efficiency, the letter tables at each level should be structured in alphabetic order, and every letter should be converted into a numeric value, such as A = 1, B = 2, C = 3, ..., Z = 26, and the space delimiter = 0 or 27, through a simple table-lookup procedure. Those converted values would then be used as the direct-access address within each subset of alphabetic letters at each letter-table level. This discards the need for binary search within each subset of "brothers", as in the cases of Hibbard's and Sussenguth's searches.
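A letter-table lookup is in essence a trie walk with direct addressing at each level: a word of k letters costs about k+1 probes regardless of file size. A minimal sketch, using plain dicts as stand-ins for the fixed 27-slot tables of the method:

```python
# Letter-table (trie) lookup sketch: one table per level, indexed
# directly by letter value (A=1 ... Z=26, end delimiter=0).

def letter_values(word):
    return [ord(c) - ord('A') + 1 for c in word.upper()] + [0]  # 0 = end

def insert(root, word, payload):
    node = root
    for v in letter_values(word):
        node = node.setdefault(v, {})
    node["entry"] = payload

def lookup(root, word):
    node = root
    for v in letter_values(word):     # one direct access per letter
        if v not in node:
            return None
        node = node[v]
    return node.get("entry")

root = {}
insert(root, "cat", "a small animal")
print(lookup(root, "cat"))            # a small animal
print(lookup(root, "car"))            # None
```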
null
The directory search is also called block search. With the aid of a directory which contains the addresses of every Bth entry of the ordered file, a better result can be achieved, because the average number of searches is greatly reduced. For the best result, choosing the blocking factor B = 220 in the example above, the answer is 223.6 searches, calculated as S = B/2 + N/2B. For the binary tree search methods, the respective calculations of the example are: Hibbard, S = 1.4 log2(N) = 21.9; and Sussenguth, S = 1.24 log2(N) = 19.4. The different functions used for random number generation can also serve as the hash function, if a likely one-to-one relation can be established between the keyword and the resulting random number. This is also subject to the restriction that only numbers inside the range of the table size can be used. Three methods of computing hash addresses with proven satisfactory results were described very neatly by Morris: "If the keys are names or other objects that fit into a single machine word, a popular method of generating a hash address from the key is to choose some bits from the middle of the square of the key, enough bits to be used as an index to address any item in the table. Since the value of the middle bits of the square depends on all of the bits of the key, we can expect that different keys will give rise to different hash addresses with high probability, more or less independently of whether the keys share some common feature, say all beginning with the same bit pattern. If the keys are multiword items, then some bits from the product of the words making up the key may be satisfactory, as long as care is taken that the calculated address does not turn out to be zero most of the time. The most dangerous situation in this respect is when blanks are coded internally as zeros, or when partial word items are padded to full word length with zeros. A third method of computing a hash address is to cut the key up into N-bit sections, where N is the number of bits needed for the hash address, and then to form the sum of all of these sections. The low order N bits of the sum is used as the hash address. This method can be used for single-word keys as well as for multiword keys ...." All three methods assume one slight restriction: the size of the table has to be a power of two, because of the binary bit selection. Personally, the author prefers the first of these three methods, due to the extremely simple programming. S11. Repeat for additional entries starting at Step S2. Examples in Tables 5 and 6 show the result of several stored entries under this algorithm. If POINTER(I) ≠ 0, then I = POINTER(I). R4. If KEYWORD(I) = KEY, then J = INDEX(I); move the entry in ASL starting from ASL(J) to a working area until an EOE is encountered, and exit on success. If KEYWORD(I) ≠ KEY, and LINK(I) = 0, then exit on failure. If KEYWORD(I) ≠ KEY, and LINK(I) ≠ 0, then I = LINK(I); go to Step R4. Repeat for additional entries starting at Step R1. Tables 5 and 6 will also illustrate this algorithm in actual applications. The execution of the RETRIEVE command will not change the contents of the Chaining Table and the Available Space List in any event.
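Morris' first (mid-square) method is short enough to sketch; the word size and bit window below are illustrative assumptions, not figures from the paper.

```python
# Mid-square hashing sketch per Morris' first method quoted above:
# square the key and take bits from the middle of the square as the
# table index (table size = 2**m).

def mid_square_hash(key_bits, m, word=24):
    """Return an m-bit hash from the middle of key_bits squared,
    treating the key as a word-bit integer."""
    sq = (key_bits * key_bits) & ((1 << (2 * word)) - 1)
    shift = (2 * word - m) // 2          # centre the m-bit window
    return (sq >> shift) & ((1 << m) - 1)

key = int.from_bytes(b"CAT", "big")      # pack characters into one word
print(mid_square_hash(key, m=4))         # index into a 16-slot table
```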
This algorithm is used when an additional or new entry is put into the already established HAICS file. It is an operation of "adding" an entry to the end of the chain at its hashed address, rather than breaking up the chain and "inserting" the entry according to some order or hierarchy. This is so because each chain in the HAICS file is mostly very short, with only one or two entries, and the "inserting" would gain very little in search and update efficiencies. The DELETE algorithm clears an entry from the Available Space List and all the pertinent information in the Chaining Table. D1. Go to Step R1 in Algorithm RETRIEVE; return to Step D2 upon exit on failure from Algorithm RETRIEVE, or to Step D3 upon exit on success from Algorithm RETRIEVE. D2. Exit on failure. D3. Clear up the occupied section of the entry in ASL, including the special symbol EOE at the end of the entry. D4. Set INDEX(I) = 0 and KEYWORD(I) = 0. D5. If POINTER(I) = I and LINK(I) = 0, then POINTER(I) = 0; exit on success. If POINTER(I) = I and LINK(I) ≠ 0, then POINTER(I) = LINK(I), LINK(I) = 0; exit on success. If POINTER(I) ≠ I and LINK(I) = 0, then trace back the previous link which contains I and set it to zero when it is found, otherwise trace back the original pointer and set it to zero; exit on success. If POINTER(I) ≠ I and LINK(I) ≠ 0, then trace back the previous link which contains I and replace it with LINK(I); exit on success. D6. Repeat for additional entries starting at Step D1. The results of applying a DELETE command to Tables 7 and 8 are shown in Tables 9 and 10. Table 10: the Available Space List after a DELETE command.
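The four cases of Step D5 reduce to one chain-unlink operation. A sketch over the same illustrative POINTER/LINK arrays as in the earlier indirect-chaining sketch:

```python
# Chain-unlink sketch corresponding to Step D5 above: remove slot t
# from the chain headed at POINTER[i], patching either the chain head
# or the previous LINK.

def delete_slot(POINTER, LINK, i, t):
    """Unlink slot t from the chain at hash address i."""
    if POINTER[i] == t:                  # t heads the chain
        POINTER[i] = LINK[t]
    else:                                # trace back the previous link
        j = POINTER[i]
        while LINK[j] != t:
            j = LINK[j]
        LINK[j] = LINK[t]
    LINK[t] = 0

POINTER = [0, 2, 0, 0]; LINK = [0, 0, 5, 0, 0, 3, 0]
delete_slot(POINTER, LINK, 1, 5)         # chain 2 -> 5 -> 3 becomes 2 -> 3
print(POINTER[1], LINK[2], LINK[5])      # 2 3 0
```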
Main paper: directory search: This is also called block search. With the aid of a directory which contains the addresses of every Bth entry of the ordered file, a better result can be achieved, because the average number of searches is greatly reduced. For the best result, choosing the blocking factor B = 220 in the example above, the answer is 223.6 searches, calculated as S = B/2 + N/2B. For the binary tree search methods, the respective calculations of the example are: Hibbard, S = 1.4 log2(N) = 21.9; and Sussenguth, S = 1.24 log2(N) = 19.4. The different functions used for random number generation can also serve as the hash function, if a likely one-to-one relation can be established between the keyword and the resulting random number. This is also subject to the restriction that only numbers inside the range of the table size can be used. Three methods of computing hash addresses with proven satisfactory results were described very neatly by Morris: "If the keys are names or other objects that fit into a single machine word, a popular method of generating a hash address from the key is to choose some bits from the middle of the square of the key, enough bits to be used as an index to address any item in the table. Since the value of the middle bits of the square depends on all of the bits of the key, we can expect that different keys will give rise to different hash addresses with high probability, more or less independently of whether the keys share some common feature, say all beginning with the same bit pattern. If the keys are multiword items, then some bits from the product of the words making up the key may be satisfactory, as long as care is taken that the calculated address does not turn out to be zero most of the time. The most dangerous situation in this respect is when blanks are coded internally as zeros, or when partial word items are padded to full word length with zeros. A third method of computing a hash address is to cut the key up into N-bit sections, where N is the number of bits needed for the hash address, and then to form the sum of all of these sections. The low order N bits of the sum is used as the hash address. This method can be used for single-word keys as well as for multiword keys ...." All three methods assume one slight restriction: the size of the table has to be a power of two, because of the binary bit selection. Personally, the author prefers the first of these three methods, due to the extremely simple programming. S11. Repeat for additional entries starting at Step S2. Examples in Tables 5 and 6 show the result of several stored entries under this algorithm. If POINTER(I) ≠ 0, then I = POINTER(I). R4. If KEYWORD(I) = KEY, then J = INDEX(I); move the entry in ASL starting from ASL(J) to a working area until an EOE is encountered, and exit on success. If KEYWORD(I) ≠ KEY, and LINK(I) = 0, then exit on failure. If KEYWORD(I) ≠ KEY, and LINK(I) ≠ 0, then I = LINK(I); go to Step R4. Repeat for additional entries starting at Step R1. Tables 5 and 6 will also illustrate this algorithm in actual applications. The execution of the RETRIEVE command will not change the contents of the Chaining Table and the Available Space List in any event. algorithm add (A): This algorithm is used when an additional or new entry is put into the already established HAICS file. It is an operation of "adding" an entry to the end of the chain at its hashed address, rather than breaking up the chain and "inserting" the entry according to some order or hierarchy.
algorithm add (a): This algorithm is used when an additional or new entry is put into the already established HAICS file. It is an operation of "adding" an entry to the end of the chain at its hashed address, rather than breaking up the chain and "inserting" the entry according to some order or hierarchy. This is so because each chain in the HAICS file is mostly very short, with only one or two entries, and "inserting" would gain very little in search and update efficiency. The entry itself is stored in the Available Space List and all the pertinent information in the Chaining Table.

algorithm delete (d):
D1 Go to Step R1 in Algorithm RETRIEVE; return to Step D2 upon exit on failure from Algorithm RETRIEVE, or return to Step D3 upon exit on success from Algorithm RETRIEVE.
D2 Exit on failure.
D3 Clear up the occupied section of the entry in ASL, including the special symbol EOE at the end of the entry.
D4 INDEX(I) = 0 and KEYWORD(I) = 0.
D5 If POINTER(I) = I and LINK(I) = 0, then POINTER(I) = 0; exit on success.
If POINTER(I) = I and LINK(I) ≠ 0, then POINTER(I) = LINK(I), LINK(I) = 0; exit on success.
If POINTER(I) ≠ I and LINK(I) = 0, then trace back the previous link which contains I and set it to zero when it is found; otherwise trace back the original pointer and set it to zero; exit on success.
If POINTER(I) ≠ I and LINK(I) ≠ 0, then trace back the previous link which contains I and replace it with LINK(I); exit on success.
D6 Repeat for additional entries starting at Step D1.
The effects of a DELETE command on the file of Tables 7 and 8 are shown in Tables 9 and 10. Table 10. The Available Space List after a DELETE command.

letter table method: This attractive method, suggested by Lamb and Jacobsen in 1961 for the dictionary lookup in a machine translation system, has not received much attention for its possible applications in general information systems. The reasons could be the immediate reaction to the numerous letter tables required beyond the second level, which suggests inefficiency in storage, and the fact that no clear search and update efficiencies were stated. Suppose only the twenty-six English letters are involved: in theory there are twenty-six tables at the first level, 26^2 at the second level, and so on. The average number of searches, or the expected search length, of this method cannot be calculated as a function of the file or dictionary size. It is simply the average number of letters or characters of a certain language plus one space character or any other delimiter. For the English language, it is a favorable 5.8 searches (S = 4.8 + 1), with no concern for the file size. Its update efficiency is compatible with its search efficiency and may be estimated at less than twice the average number of searches. In order to achieve the above efficiency, the letter tables at each level should be structured in alphabetic order, and every letter should be converted into a numeric value, such as A = 1, B = 2, C = 3, ..., Z = 26 and the space delimiter = 0 or 27, through a simple table-lookup procedure. Those converted values would then be used as the direct-access address within each subset of alphabetic letters at each letter-table level. This discards the need for binary search within each subset of "brothers", as in the cases of Hibbard's and Sussenguth's searches.
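The letter-table scheme can be made concrete with a small sketch of ours (Lamb and Jacobsen's actual system differed in detail), using 27-slot tables directly indexed by the converted letter values; uppercase A-Z input is assumed.

def char_value(c: str) -> int:
    # A = 1 ... Z = 26, with slot 0 reserved for the end-of-word delimiter.
    return ord(c) - ord("A") + 1

def make_table():
    return [None] * 27                 # one direct-access slot per letter

root = make_table()

def store(word: str, entry) -> None:
    table = root
    for c in word:
        v = char_value(c)
        if table[v] is None:
            table[v] = make_table()    # open a new letter table one level down
        table = table[v]
    table[0] = entry                   # the delimiter slot holds the entry

def lookup(word: str):
    table = root
    for c in word:                     # one direct access per letter ...
        table = table[char_value(c)]
        if table is None:
            return None
    return table[0]                    # ... plus one for the delimiter: S = letters + 1

store("CAT", "feline")
print(lookup("CAT"))                   # found after 4 accesses, regardless of file size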
open addressing method: As early as 1957, Peterson introduced this method for random-access storage addressing. This method is also called linear probing. It assumes the existence of a certain hash function to transform the key or keyword of an entry into a numerical value within the range of the table size, which is predetermined as 2^M for any integer value of M. The table size should be large enough to accommodate all the entries of the file. As in other methods, this method also assumes that the probability of finding an entry in the file is equal to one. Under these two assumptions, and if a good hash function is selected for a balanced distribution of hash values, the open addressing method will resolve the situation where more than one key is mapped into a particular slot in the table, and yields a very attractive average number of searches in most cases. The algorithm is best described in Morris' phrases:

"The first method of generating successive calculated addresses to be suggested in the literature was simply to place colliding entries as near as possible to their nominally allocated position, in the following sense. Upon collision, search forward from the nominal position (the initial calculated address), until either the desired entry is found or an empty space is encountered--searching circularly past the end of the table to the beginning, if necessary. If an empty space is encountered, that space becomes the home for the new entry."

For the open addressing method the expected search length is S = (1/2)(1/(1 - L) + 1) for a loading factor L ≤ 0.9. For the indirect chaining method it is

S = 1 + L/2

for L < 1.0, which works out to 1.25 at L = 0.5. More interesting yet, this formula is still valid when the loading factor L is greater than one, which means the number of entries exceeds the size of the chaining table.

Under the consideration of programming and computing efficiency and of storage efficiency, a keyword of one computer word in size is usually more desirable, e.g., an eight-character keyword in a 48-bit-word machine. In machine files such as dictionaries, thesauruses, keyword indices, and merchandise catalogs, the keyword is almost readily available for hashing. If the keyword is longer than the allowable number of characters, a simple word truncation at the right end or some word compression scheme can be used to reduce the word to the desired number of characters. For example, standard word abbreviation, or a simple procedure to eliminate all the vowels and one of any two identical consecutive consonants in a word, will be acceptable for this purpose. In some cases where a unique number is assigned to an entry, there is no need to hash this number, provided that number is inside the range of the allotted table size. This is mostly seen when a record or document is arranged by its accession number or location index. Otherwise the number can be treated as letters and be handled by one of the methods described above.
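A compact rendering of ours, following Morris's description quoted above, of open addressing with linear probing; the table size and hash function are placeholders.

SIZE = 1 << 4                      # table size 2**M with M = 4
table = [None] * SIZE              # each slot is empty or holds (key, entry)

def probe_store(key, entry, hash_fn):
    home = hash_fn(key) % SIZE
    for step in range(SIZE):
        slot = (home + step) % SIZE          # search forward, wrapping circularly
        if table[slot] is None or table[slot][0] == key:
            table[slot] = (key, entry)       # an empty space becomes the home
            return slot
    raise RuntimeError("table full: open addressing needs L < 1.0")

def probe_find(key, hash_fn):
    home = hash_fn(key) % SIZE
    searches = 0
    for step in range(SIZE):
        slot = (home + step) % SIZE
        searches += 1
        if table[slot] is None:              # an empty slot ends the search
            return None, searches
        if table[slot][0] == key:
            return table[slot][1], searches
    return None, searches

# At L = 0.5 the formula (1/2)(1/(1 - L) + 1) predicts 1.5 searches on average.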
algorithm replace (rp): This is for the replacement of an entry itself in the Available Space List, with the same keyword and linkages in the Chaining Table unchanged. Replacement entries longer than the original entries can be treated in a few different ways. The current algorithm will truncate the excessive end and give a message to indicate the situation. A remedy, if desired, can then be made through the deletion of the incomplete entry and the addition of the complete entry as a new entry. This algorithm will make use of the Algorithms RETRIEVE and STORE to find the desired entry and then replace the old contents with the new contents in the Available Space List.
RP1 Compute I = HASH(KEY).
RP2 If POINTER(I) = 0, exit on failure.
RP3 If POINTER(I) ≠ 0, then I = POINTER(I).
RP4 If KEYWORD(I) = KEY, then J = INDEX(I); clear the old entry in ASL starting from ASL(J) and including an EOE. The new entry is stored in ASL starting from ASL(J). If the new entry plus an EOE can be accommodated in the old space, exit on success. If the new entry plus an EOE cannot be accommodated in the old space, then store the new entry up to the same length as the old entry and put an EOE at the end; exit on partial success.
If KEYWORD(I) ≠ KEY and LINK(I) = 0, then exit on failure.
If KEYWORD(I) ≠ KEY and LINK(I) ≠ 0, then I = LINK(I); go to Step RP4.
Repeat for additional entries starting at Step RP1.

The ADD efficiency is a function of the STORE efficiency. In this sample's statistics the ADD efficiency, obtained through the addition of four entries to make a full chaining table, is in fact the same as if these four entries were placed at the end of the STORE command. Thus the ADD efficiency of 0.75 for four entries can be combined with the STORE efficiency for twenty-eight entries, and the result is a STORE efficiency of 0.344 for a full 32-entry chaining table. It is noted that the ADD efficiency is always greater than (or equal to) the STORE efficiency, due to the nonemptiness of the chaining table. The RETRIEVE efficiency is always identical with the search efficiency as indicated in Table 3, which is an average of 1.25 for the indirect chaining method. The accumulative average number of searches does fall into the range between the minimum of 1.0 and the maximum of 1.5, at 1.263 for 59.4% table fullness.

Specifically, the dictionary lookup operation, as the principal operation of an information system, is no longer a lengthy and painful procedure and thus a barrier in natural language processing. Linguistic analysis may be provided with complete freedom in referring back and forth to any entry in the dictionary and the grammar, and the information gained at any stage of analysis can be stored and retrieved in the same way. Document retrieval may go deeper into content analysis, providing a synonym dictionary for better query descriptor transformations and matching functions. As Shoffner noted, "it is important to be able to determine the extent to which file structure and search techniques influence recall, precision, and other measures of system performance." This paper tends to support Shoffner's statement by presenting an analysis of current search techniques and a detailed description of the HAICS method, which is a possible framework for most information systems. Best of all, since the program can use the same technique for storing and updating information, the maximum efficiency is also applicable to them with the same ease. Thus, it eliminates all the problems of inefficiency in establishing a file and in updating a file.
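For completeness, an editorial sketch (not the original program) of the indirect chaining STORE operation, with a search counter so that small-sample figures like the STORE efficiencies quoted above can be reproduced; the free-slot bookkeeping shown is one possible arrangement, not necessarily the paper's.

M = 32
POINTER = [0] * M                  # hash address -> chain head (0 = empty)
KEYWORD = [None] * M
LINK = [0] * M
free_slots = iter(range(1, M))     # slot 0 is reserved as the null link

def store(key, hash_fn):
    # Returns the number of occupied entries examined before storing,
    # the quantity averaged in the STORE-efficiency figures above.
    h = hash_fn(key) % M
    slot = next(free_slots)        # raises StopIteration when the table is full
    KEYWORD[slot] = key
    if POINTER[h] == 0:
        POINTER[h] = slot          # start a new chain: zero extra searches
        return 0
    searches = 1
    i = POINTER[h]
    while LINK[i]:                 # walk to the end of the chain
        i = LINK[i]
        searches += 1
    LINK[i] = slot                 # "add" at the end, never "insert"
    return searches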
In our daily life, there are many instances of looking for some type of information, such as checking a new vocabulary item in a dictionary, finding a telephone number and/or an address in a directory, searching for a book by a certain author, title, or subject in a library catalog card file, etc. Before the desired information is found, one has to go through a number of items or entries for close examination. The quantitative measurement is usually termed the "number of searches", "number of lookups", or "number of file accesses" in mechanized information systems. In the same volume (p. 139), Shoffner commented on the evaluation of systems that "it is important to be able to determine the extent to which file structures and search techniques influence the recall, precision, and other measures of system performance". Until very recently, file structure and search techniques were apparently unpopular topics among information scientists, except for Salton and a few others. Nevertheless, these topics have been attacked constantly by system scientists for much smaller files, where maximum efficiency is a vital factor for the total system. They are frequently discussed under the title of "symbol table techniques" or "scatter storage techniques", as used by Morris as the title of his article. In addition to the "number of searches" and the "number of lookups", other terms used by system scientists for the same basic measure are the "number of probes", the "number of attempts", and the "search length". Ever since the author stepped into the computer profession in 1964, he has noticed that the efficiency of a file handling system is always crippled by its file searching technique, no matter how sophisticated the system. This was especially the case during 1965 and 1966, when the author was employed at the Itek Corporation on an Air Force project for a Chinese-to-English machine translation experiment. The best search technique used for dictionary lookups was the binary search, which is still considered one of the best techniques available today. For a large file with a huge number of records, entries, or items, the binary search technique will still yield a substantial number of searches, which is a function of the file size. Typical files are: dictionaries of any sort, telephone directories, library catalog cards, personnel records, merchandise catalogs, document collections, etc. For example, in a 50,000-entry file system the average number of searches for finding an entry is 15.6, calculated as log2 N. This figure will not be very satisfactory if frequent search inquiries to the file are the case.
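The log2 N figure can be checked with a textbook binary search; this is a generic illustration, not code from the paper.

import math

def binary_search(sorted_keys, target):
    lo, hi, searches = 0, len(sorted_keys) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        searches += 1                      # one comparison = one "search"
        if sorted_keys[mid] == target:
            return mid, searches
        if sorted_keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, searches

keys = list(range(50_000))                 # a 50,000-entry ordered file
_, s = binary_search(keys, 37_123)
print(s, round(math.log2(50_000), 1))      # a probe count near log2(N) = 15.6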
In the search for better techniques, at least three kinds of search techniques or algorithms are found to be more satisfactory than the binary search. Namely, they are: Lamb and Jacobsen's "letter table method", Peterson's "open addressing technique", and Johnson's "indirect chaining method". They have a rather interesting common feature: the file size is no longer a factor in the search efficiency. In order to have a gross understanding of various search algorithms, six of them are examined and compared with respect to their search efficiencies.

linear search: This is also called sequential search or sequential scan. The linear search of an unordered list or file is the simplest one, but it is inefficient because the average number of searches for a given entry in an N-entry file will be N/2. For example, if N = 50,000, the average number of searches for a given entry is an enormous 25,000. It is assumed that the probability of finding a given entry in the file is one. The average number of searches in a linear search is calculated as:

S = (N + 1)/2, or S ≈ N/2 if N is a large number.

The linear search has to be performed in a consecutive storage area, and this sometimes causes a certain inconvenience if the required storage area is very large. The inconvenience can be avoided by using the last computer word (or some bits of it) to index the location of the next section of storage area used, thus forming a single chain for searching. This variation of the linear search method is called the single chain method. It differs from the linear search in storage flexibility but is otherwise the same in efficiency. Appendix:
null
null
null
null
{ "paperhash": [ "harvey|stylistic_analysis", "lee|linguistic_studies_for_chinese_to_english_machine_translation.", "i.b.|an_indirect_chaining_method_for_addressing_on_secondary_keys", "peterson|addressing_for_random-access_storage", "salton|automatic_information_organization_and_retrieval", "venezky|storage,_retrieval,_and_editing_of_information_for_a_dictionary", "knuth|the_art_of_computer_programming", "simmons|answering_english_questions_by_computer:_a_survey", "lamb|a_high-speed_large-capacity_dictionary_system" ], "title": [ "Stylistic Analysis", "LINGUISTIC STUDIES FOR CHINESE TO ENGLISH MACHINE TRANSLATION.", "An indirect chaining method for addressing on secondary keys", "Addressing for Random-Access Storage", "Automatic Information Organization And Retrieval", "Storage, Retrieval, and Editing of Information for a Dictionary", "The Art of Computer Programming", "Answering English questions by computer: a survey", "A high-speed large-capacity dictionary system" ], "abstract": [ "A recent and fascinating application of statistics has been that of analysing the distribution of words in a written document in an attempt to determine the authorship. This article reviews some of the methods used and proposes a simplified test that could be performed using a desk calculator, although access to a digital computer would reduce the work required. An example of the test applied to a manuscript in the British Museum (Stowe 269) is given in detail to illustrate the method.", "Abstract : The linguistic study for a developmental Chinese-English machine-aided translation resulted in the design of a basic linguistic processing system based on the Contextual Associative Method (CAM). This technique allows machine aided translation through the use of programmed contextual operations. The results of this research effort are presented and include: (1) explanation of the linguistic processing system; (2) morphological and syntactic analyses, and (3) English inflection analysis for Chinese to English machine aided translation. Illustrations showing step by step linguistic processing are included. Recommendations are presented for refinement and further development of the basic linguistic analysis. An explanation of symbols for linguistic rules listings of computer experimentation and listings of verb components in English output are appended.", "Methods for entering random-access files on the basis of one key are briefly surveyed. The widely used chaining method, based on a pseudo-random key transformation, is reviewed in more detail. An efficient generalization of the chaining method which permits recovery on additional keys is then presented.", "Estimates are made of the amount of searching required for the exact location of a record in several types of storage systems, including the index-table method of addressing and the sorted-file method. Detailed data and formulas for access time are given for an \"open\" system which offers high flexibility and speed of access. Experimental results are given for actual record files.", "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd automatic information organization and retrieval that can be a new way to explore the knowledge. 
When reading this book, you can get one thing to always remember in every reading time, even step by step.", "A computer system has been designed for storing, retrieving, and editing data for the Dictionary of American Regional English (D.A.R.E.). This dictionary, in contrast to most commercial dictionaries, will consist of words which have regional rather than national currency and will derive its entries from data collected by its own fieldworkers, readers, and researchers. Entries, consisting of a headword or phrase, plus descriptors for such items as the user, meaning, pronunciation, and collection technique for this word or phrase are stored in a central file. Interrogations on this file can be made on the value of any headword or description, or any logical combination of such values. Any portion of an entry which satisfies an interrogation may be designated for retrieval. An experimental editing system employing an on‐line CRT terminal has been developed for the editing process, although a more flexible system will be needed for the actual editing which is scheduled to begin in approximately three years.", "A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid metal cooled fast breeder reactors.", "Fifteen experimental English language question-answering I systems which are programmed and operating are described ) arid reviewed. The systems range from a conversation machine ~] to programs which make sentences about pictures and systems s~ which translate from English into logical calculi. Systems are ~ classified as list-structured data-based, graphic data-based, ~! text-based and inferential. Principles and methods of opera~4 tions are detailed and discussed. It is concluded that the data-base question-answerer has > passed from initial research into the early developmental ~.4 phase. The most difficult and important research questions for ~i~ the advancement of general-purpose language processors are seen to be concerned with measuring meaning, dealing with ambiguities, translating into formal languages and searching large tree structures.", "This paper describes a method of adapting dictionaries for use by a computer in such a way that comprehensiveness of vocabulary coverage can be maximized while look-up time is minimized. Although the programming of the system has not yet been completed, it is estimated at the time of writing that it will allow for a dictionary of 20,000 entries or more, with a total look-up time of about 8 milliseconds (.008 seconds) per word, when used on an IBM 704 computer with 32,000 words of core storage. With a proper system of segmentation, a dictionary of 20,000 entries can handle several hundred thousand different words, thus providing ample coverage for a single fairly broad field of science. 
Although the system has been designed specifically for purposes of machine translation of Russian, it is applicable to other areas of linguistic data processing in which dictionaries are needed." ], "authors": [ { "name": [ "P. Harvey" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Theresa Lee", "Hongting Wang", "S. Yang", "E. Farmer" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. I.B.", "Corp", "Yorktown Heights" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. W. Peterson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "G. Salton" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. Venezky" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Donald E. Knuth" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "R. F. Simmons" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Lamb", "W. Jacobsen" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "134768132", "61082592", "7990296", "207729567", "56515970", "62616607", "267817888", "17660655", "26800633" ], "intents": [ [], [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false, false ] }
null
665
0.007519
null
null
null
null
null
null
null
null
8e2de0876fa7d354ec6bdf04e60db7facb7ff9e3
245118093
null
Syntactic analysis requirements of machine translation
In this note I will confine my attention to machine translation (MT) systems which are based upon an underlying formal generative grammar. This is not to deny the potential importance of various computational aids to human translation, nor to deny the possibility of machine translation not based on a formal grammar. It is clear, however, that for fully automated MT any attempt to make use of presently existing linguistic theory or of that which is likely to exist in the foreseeable future requires a grammar-based approach. A second assumption I wish to make is the existence of two distinct components of a grammar --a syntactic component and a semantic component. The former assigns structure to sentences and the latter interprets those structures by translating them to a natural language (in the case of MT) or to an artificial language which has its own computer interpreter. It will not be assumed that the syntactic and semantic components necessarily interact in a simplistic fashion, i.e., that every syntactic output is to have a distinct well-formed semantic interpretation, and that the final output of the syntactic component is the input to the semantic component. Instead, we will, for example, allow the syntactic component to generate structures which are rejected by the semantic component, and we will allow semantic analysis (and rejection) of fragments of a syntactic structure prior to the complete determination of that structure.
{ "name": [ "Petrick, S. R." ], "affiliation": [ null ] }
null
null
Feasibility Study on Fully Automatic High Quality Translation
1971-12-01
10
0
null
The importance of the syntactic component has been recognized for some time. For the purposes of MT it has two distinct ends to achieve: on the one hand it must specify a large enough subset of the source language to meet the operational requirements of the MT application in question. (The related function of ruling out syntactically ill-formed sentences is of limited importance in MT.) On the other hand the structures it assigns must provide a reasonable basis for semantic interpretation. These two requirements are closely related, i.e., it is relatively easy to satisfy one at the expense of the other, but much harder to meet them both adequately. A not uncommon attitude which has been expressed both in the computational linguistic literature and orally at symposia and conferences is that syntax in general and syntactic analysis in particular has been well worked over, is thoroughly understood, and presents no serious problems, in contrast to the situation in semantics, where little has been done and not much is understood. I submit that such remarks reflect the experience of one who has chosen a class of grammars, in most cases context-free. All that is required is that the natural subsets provided must be learnable by human speakers and must be rich enough to permit expressing that which must be expressed in a convenient fashion. The attainability of even these requirements remains to be established, but at least offers some hope of success. On the other hand, the usual situation with MT is that the input is not produced with the limitations of a particular formal grammar in mind. This, more than any other single factor, convinces me that grammar-based MT offers little hope for practical usage for at least the next ten years. This is not to say that MT is not an interesting and productive vehicle for keeping linguistic research in both syntax and semantics tied to reality. Others might disagree with this assessment, of course. There may be a few MT applications where time and economic considerations permit the phrasing or rephrasing of source sentences by speakers cognizant of a system's grammatical constraints. Such an example is the preparation of technical manuals in one language for translation into another language. This is, however, not the usual situation in MT. When we leave the (at least for me) familiar grounds of transformational theory and consider the coverage problem for such analysis-based linguistic theories as those of Woods,9 Winograd,10 Bobrow and Fraser,11 Thorne,12 Moyne,13 Kellogg,14 Kay,15 and Simmons,16 we are faced with a difficult task for a number of reasons. Many of these models have been used only sparingly for the specification of any natural language. Hence, there is little to go on in assessing the coverage of these models. In addition, those models for which one or more large grammars have been developed are difficult to compare directly. Elsewhere in this study we give an account of the current status of syntactic analysis for transformational grammars. In summary, it can be stated that although the class of grammars for which syntactic analysis is possible has been significantly extended, the introduction of new variants of transformational theory has more than kept pace with theoretical and programming efforts to cope with them. Consequently, any given linguist would undoubtedly find that his rules and assumptions do not correspond perfectly with the formulation of the allowable class of grammars.
Nevertheless, it is hoped that this class is now extensive enough to permit recasting of current transformational grammars into an acceptable form without seriously compromising their linguistic integrity.
null
null
null
null
Main paper: The importance of the syntactic component has been recognized for some time. For the purposes of MT it has two distinct ends to achieve: on the one hand it must specify a large enough subset of the source language to meet the operational requirements of the MT application in question. (The related function of ruling out syntactically ill-formed sentences is of limited importance in MT.) On the other hand the structures it assigns must provide a reasonable basis for semantic interpretation. These two requirements are closely related, i.e., it is relatively easy to satisfy one at the expense of the other, but much harder to meet them both adequately. A not uncommon attitude which has been expressed both in the computational linguistic literature and orally at symposia and conferences is that syntax in general and syntactic analysis in particular has been well worked over, is thoroughly understood, and presents no serious problems, in contrast to the situation in semantics, where little has been done and not much is understood. I submit that such remarks reflect the experience of one who has chosen a class of grammars, in most cases context-free. All that is required is that the natural subsets provided must be learnable by human speakers and must be rich enough to permit expressing that which must be expressed in a convenient fashion. The attainability of even these requirements remains to be established, but at least offers some hope of success. On the other hand, the usual situation with MT is that the input is not produced with the limitations of a particular formal grammar in mind. This, more than any other single factor, convinces me that grammar-based MT offers little hope for practical usage for at least the next ten years. This is not to say that MT is not an interesting and productive vehicle for keeping linguistic research in both syntax and semantics tied to reality. Others might disagree with this assessment, of course. There may be a few MT applications where time and economic considerations permit the phrasing or rephrasing of source sentences by speakers cognizant of a system's grammatical constraints. Such an example is the preparation of technical manuals in one language for translation into another language. This is, however, not the usual situation in MT. When we leave the (at least for me) familiar grounds of transformational theory and consider the coverage problem for such analysis-based linguistic theories as those of Woods,9 Winograd,10 Bobrow and Fraser,11 Thorne,12 Moyne,13 Kellogg,14 Kay,15 and Simmons,16 we are faced with a difficult task for a number of reasons. Many of these models have been used only sparingly for the specification of any natural language. Hence, there is little to go on in assessing the coverage of these models. In addition, those models for which one or more large grammars have been developed are difficult to compare directly. Elsewhere in this study we give an account of the current status of syntactic analysis for transformational grammars. In summary, it can be stated that although the class of grammars for which syntactic analysis is possible has been significantly extended, the introduction of new variants of transformational theory has more than kept pace with theoretical and programming efforts to cope with them. Consequently, any given linguist would undoubtedly find that his rules and assumptions do not correspond perfectly with the formulation of the allowable class of grammars.
Nevertheless, it is hoped that this class is now extensive enough to permit recasting of current transformational grammars into an acceptable form without seriously compromising their linguistic integrity. Appendix:
null
null
null
null
{ "paperhash": [ "kellogg|the_converse_natural_language_data_management_system:_current_status_and_plans", "petrick|on_the_use_of_syntax-based_translators_for_symbolic_and_algebraic_manipulation", "winograd|procedures_as_a_representation_for_data_in_a_computer_program_for_understanding_natural_language", "bobrow|an_augmented_state_transition_network_analysis_procedure", "kay|experiments_with_a_powerful_parser", "rosenbaum|specification_and_utilization_of_a_transformational_grammar.", "zwicky|the_mitre_syntactic_analysis_procedure_for_transformational_grammars", "irons|a_syntax_directed_compiler_for_algol_60" ], "title": [ "The converse natural language data management system: current status and plans", "On the use of syntax-based translators for symbolic and algebraic manipulation", "Procedures As A Representation For Data In A Computer Program For Understanding Natural Language", "An Augmented State Transition Network Analysis Procedure", "Experiments With a Powerful Parser", "SPECIFICATION AND UTILIZATION OF A TRANSFORMATIONAL GRAMMAR.", "The mitre syntactic analysis procedure for transformational grammars", "A syntax directed compiler for ALGOL 60" ], "abstract": [ "This paper presents an overview of research in progress in which the principal aim is the achievement of more natural and expressive modes of on-line communication with complexly structured data bases. A natural-language compiler has been constructed that accepts sentences in a user-extendable English subset, produces surface and deep-structure syntactic analyses, and uses a network of concepts to construct semantic interpretations formalized as computable procedures. The procedures are evaluated by a data management system that updates, modifies, and searches data bases that can be formalized as finite models of states of affairs. The system has been designed and programmed to handle large vocabularies and large collections of facts efficiently. Plans for extending the research vehicle to interface with a deductive inference component and a voice input-output effort are briefly described.", "In this paper two w ell known formalisms, due to Irons and Knuth, for mapping syntactic tree structures into appropriate target strings or structures are considered. Their utility as general purpose tools for symbolic and algebraic manipulation is illustrated by applying them to a symbolic differentiation exercise. A few of the existing syntax-based translation systems and the uses to which they have been put are discussed. Finally, attempts to mathematically model syntax-based translators are reviewed.", "Abstract : The paper describes a system for the computer understanding of English. The system answers questions, executes commands, and accepts information in normal English dialog. It uses semantic information and context to understand discourse and to disambiguate sentences. It combines a complete syntactic analysis of each sentence with a 'heuristic understander' which uses different kinds of information about a sentence, other parts of the discourse, and general information about the world in deciding what the sentence means.", "A syntactic analysis procedure is described which obtains directly the deep structure information associated with an input sentence. 
The implementation utilizes a state transition network characterizing those linguistic facts representable in a context free form, and a number of techniques to code and derive additional logic information and to permit the compression of the network size, thereby allowing more efficient operation of the system. By recognizing identical constituent predictions stemming from two different analysis paths, the system determines the structure of this constituent only once. When two alternative paths through the state transition network converge to a single state at some point In the analysis, subsequent analyses are carried out only once despite the earlier ambiguity. Use of flags to carry feature concordance and previous context information allows merging of a number of almost identical paths through the network.", "Abstract : A description is given of a sophisticated computer program for the syntactic analysis of natural languages. The study discusses the notation used to write rules and the extent to which these rules can be made to state the same linguistic facts as a transformational grammar. Whereas most existing programs apply context-free phrase-structure grammars, this new program can analyze sentences with context-sensitive grammars and with grammars of a class very similar to transformational grammars. The program, which is written for the IBM 7040/44 computer, is nondeterministic: The various interpretations of an ambiguous sentence are all worked on simultaneously; at no stage does the program develop one interpretation rather than another. If two interpretations differ only in some small part of a partial syntactic structure, then only one complete structure is stored with two versions of the ambiguous part. The unambiguous portion is worked on only once for both interpretations. Although the current version of the program is written in ALGOL, with very little regard for efficiency, the basic algorithm is inherently much more efficient than any of its competitors. (Author)", "Abstract : The report contains four parts: Part I - The IBM Core Grammar of English. Our current grammar of English is presented in full, and numerous derivations are carried out in detail to illustrate the current generative power of the grammar. Part II - Design of a Grammar Tester. The design considerations on which the present version of the tester was based are discussed, and a set of tentative input, output, and control formats are presented. Part III - Programming for the Grammar Tester. A LISP implementation of the grammar tester is presented. The overall flow of control and the various special functions are described. Part IV - Computer Support for Lexicon Development. A program package (programmed in SNOBOL) to facilitate the compilation, modification, scanning, etc. of the lexicon is described. (Author)", "A solution to the analysis problem for a class of grammars appropriate to the description of natural languages is essential to any system which involves the automatic processing of natural language inputs for purposes of man-machine communication, translation, information retrieval, or data processing. The analysis procedure for transformational grammars described in this paper was developed to explore the feasibility of using ordinary English as a computer control language.", "i Th(~ disposition of lhe parentheses is computed by numberbig the m ull;iplication signs consecutively. 
If n is divisible /)y 2 k but, not; by 2 kw, then (;he nt,h multiplication sign is 1-),'ecedcd by k right pat'entheses, and followed by k left parentheses. If the lasi, multiplication sign is numbered m, then the entire expression is surrounded by k parentheses, whore 2 k ~\" Ill. The extension go negative integral expohen Ls is obvious. The rewritLen expressions are compiled in the normal manner, the equivalent subexpressions being a.tttotnat ically recognized. At~ operational translator would require additional tests at sew;ml points to detect s.ymbol strings not allowed by lhc language. Such tests are omitted here for the sake of clariLy in {,he flow charts. A C K N O W L E D G M E N T The author is indebted to Arthur Anger, presently at Harvard University, for many helpful criticisms and suggestions, and for coding the algorithm on the UNiwxc 1105. REFERENCES 1. ERs[~ov: I)roqrammi~q Programme for the BESM Computer. Pergamon, 1959. 2. WI,ZSG'r~N, J. It. From formulas to computer oriented bmguage. Comm. ACM 2 (Mar. 1959), 6-8. 3. Am)i~:N, B., and (]m~m~M, R. On GAT and the construction of trtmslators. Comm. ACM 2 (July 1959), 24-26. 4. KANNER, H. An algebraic translator. Comm. ACM 2 (Oct. 1959), 19-22, 5. SAMELSO~', K., and BA*mR, F. L. Sequential formula translation. Comm. ACM 3 (Feb. 1960), 76-83." ], "authors": [ { "name": [ "Charles Kellogg", "J. Burger", "T. Diller", "Kenneth Fogt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. R. Petrick" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "T. Winograd" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "D. Bobrow", "Bruce Eraser" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "M. Kay" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "Peter S. Rosenbaum", "Fred Blair", "D. Lieberman", "D. Lochak", "P. Postal" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "A. Zwicky", "J. Friedman", "B. Hall", "D. Walker" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "E. Irons" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null, null, null, null, null ], "s2_corpus_id": [ "10172580", "14784820", "54114373", "952810", "26325371", "60851190", "31584423", "15645135" ], "intents": [ [], [], [], [], [], [], [], [] ], "isInfluential": [ false, false, false, false, false, false, false, false ] }
- Problem: The paper discusses the limitations of grammar-based machine translation systems in fully automated translation due to the challenge of relating syntactic structures to underlying meaning, especially in the context of semantic interpretation. - Solution: The paper proposes that for successful machine translation, a grammar-based approach with distinct syntactic and semantic components is necessary, allowing for the generation of syntactic structures that align with semantic interpretation, despite the complexities involved in relating form to meaning.
638
0
null
null
null
null
null
null
null
null
6f7b062db306c4aad4adaffe65279d06013b7acd
245118118
null
Operational problems of machine translation: a position paper
referred to machine translation as "linguistics' most conspicuous and expensive failure." 1 Two years later the Automatic Language Processing Advisory Committee of the National Academy of Sciences, National Research Council in what has since become known as the ALPAC Report (1966: 24) stated that "No one can guarantee, of course, that we will not suddenly or at least quickly attain machine translation, but we feel that this is very unlikely." In the light of these two highly authoritative statements of position, and in view of the abrupt reduction of funding for machine translation research, is it at all reasonable to discuss operational problems of machine translation these days? The answer is of course that if one is to talk about machine translation at all, it must be in terms of some reasonable operational objective, since research without such objectives will at best be related to machine translation only indirectly. The question as to whether or not such objectives are reasonable depends in this author's opinion upon the researcher's basic orientation: with a predominantly theoretical orientation, machine translation research will clearly be close to pointless; with an operational orientation, on the other hand, machine translation research will not only be interesting and valuable in its own right, but will also constitute one of the few available conclusive means of verification of the findings of linguistics (cf. Garvin 1962: 387). This paper will attempt a survey of the major controversial issues in the field of machine translation, all of which, in the light of the above discussion, are considered operational. These issues are considered to fall into three basic categories: linguistic problems, design problems, and bread-and-butter problems.
{ "name": [ "Garvin, Paul L." ], "affiliation": [ null ] }
null
null
Feasibility Study on Fully Automatic High Quality Translation
1971-12-01
10
0
null
It is a commonly held view among linguists, both the few who are interested in machine translation and the many who are not, that any application of linguistics-and in the linguist's view this certainly includes machine translation research-must be based primarily on a strong linguistic theory. (For a recent statement of this view see Bar-Hillel 1970.) This is essentially a capsule view of the theoretical orientation. While nobody will deny that any applied work must have a sound theoretical basis, from an operational standpoint there are a number of things seriously wrong with an over-emphasis on theory. (1) Machine translation is considered primarily an operational rather than a theoretical problem. Consequently, an application of sound linguistic research methods is more important than a further elaboration of linguistic theory. (2) Most strong linguistic theories are essentially generative in nature. However, the basic problem in machine translation is not a generative but a recognition problem. Recent research in psycholinguistics has confirmed an opinion long held by this investigator, namely that a recognition problem cannot be resolved by simply reversing a generative system. (4) It is an old operationalist adage that one can best learn by doing. This is particularly true in the case of machine translation, where the machine manipulation of linguistic data forces the investigator to recognize a great many inaccuracies and intuitive shortcuts that are usually glossed over in theoretical linguistic research. Thus, rather than relying excessively on the contributions of linguistic theory to machine translation research, one should expect significant contributions to linguistics from research on machine translation. Linguistic models can be categorized as strong or weak, depending on whether or not they have strong or weak formal pretensions. Current trends in linguistics favor strong models; this is of course based on an epistemological attitude that is oriented towards the elaboration of theory rather than of method. In line with the discussion in the preceding section, it is here considered that, particularly for purposes of an application such as machine translation, weak models are to be preferred to strong ones. The reason is that strong models are considered to prejudge the direction of research in a situation in which there are too many unforeseen and as yet insufficiently known factors. Clearly, there have to be grammars of both the source and target languages at the base of any machine translation system. Equally clearly, however, these grammars need not be formal grammars; as a matter of fact, in this author's opinion descriptive grammars are strongly preferable to formal grammars for purposes of machine translation, because they are much better able to account for the indeterminacies of natural language structures which, as was so well stated by Charles F. Hockett recently, are essentially ill-defined systems (Hockett 1968: 44-45). Descriptive grammars can best be developed in a primarily method-oriented, rather than a primarily theory-oriented, frame of reference. As a matter of fact, in such a frame of reference conventional grammars may be used as a reasonable point of departure, with the necessary modifications introduced as the requirements of machine translation become apparent in the process of the development of experimental systems. Operationally oriented machine translation research both in the United States (cf. Garvin forthcoming) and in the Soviet Union (cf.
Bel'skaja 1969) has done just that. This author has made strong claims on behalf of his proposed version of an operational machine translation system (Garvin 1967); it is not known how far along comparable Soviet versions have progressed. All linguists seem to agree that the system of language is hierarchically structured. That is, they all look upon the system of a language as having different levels, or strata, or components. From a machine translation standpoint, of course, it is most important to know which distinctions between different aspects of language are relevant for the development of machine translation systems. The least significant seems to be the distinction between phonology and grammar, since no machine translation system to this author's knowledge is concerned with phonology at all. The most important is the distinction between grammar and lexicon, since all machine translation systems known to this author make some distinction somewhere between a dictionary lookup based on the lexicon and an algorithmic portion based in part on the grammar. Linguistic approaches differ in regard to whether the lexicon is considered a part of the grammar or a dimension separate from it. In either case, the lexicon and the grammar (or the remainder of the grammar) are kept clearly separate by most linguists. The difficulty in machine translation is that the lexicon and the grammar cannot be hermetically sealed off from each other. The dictionary and the algorithmic portion correspond only roughly to lexicon and grammar respectively; the dictionary, after all, contains a grammar code which is based on the grammar, and the algorithmic portion serves to resolve not only grammatical but also lexical ambiguities. Nevertheless, an understanding of the differences between lexicon and grammar is essential for a proper operational assessment of all the variables that enter into the design of a machine translation system. Related to the conception of levels or strata of language is the methodological problem of conducting the analysis "from the bottom up" or "from the top down". In the first case, the minimal units of language are considered the input into the analysis, and the output yields the maximum units which are, for all practical purposes, the sentences of the text. In the second case, the input consists of the sentences and the output is a decomposition of the sentences into their constituents. Clearly, since in machine translation the grammatical information is transmitted to the algorithmic portion from the dictionary by a lookup of individual textwords, and since therefore the initial input elements into the algorithmic portion are the "bottom units", a "bottom to top" approach is the most operationally efficient one for machine translation.

Sensing Units and Translation Units

This is one of the oldest and also most important problems faced in the design of machine translation systems. Sensing units are linguistic units which the computing equipment can read, that is, for all practical purposes, strings of letters separated by spaces and/or punctuation marks. Clearly, these correspond only partially to the translation units, that is to say, the grammatical and lexical units that must be manipulated in order to effect translation. The problem consists in providing the machine translation system with a capability for transforming the sensing units into appropriate translation units.
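As a purely editorial illustration of this difference (the dictionary contents and function names below are invented, not Garvin's), a program might read letter strings as sensing units but regroup them into multiword translation units:

import re

IDIOMS = {("in", "spite", "of"): "IN-SPITE-OF"}   # a hypothetical multiword unit

def sensing_units(text: str):
    # What the equipment can read: letter strings between spaces/punctuation.
    return re.findall(r"[A-Za-z]+", text)

def translation_units(words):
    units, i = [], 0
    while i < len(words):
        tri = tuple(w.lower() for w in words[i:i + 3])
        if tri in IDIOMS:              # three sensing units, one translation unit
            units.append(IDIOMS[tri])
            i += 3
        else:
            units.append(words[i].lower())
            i += 1
    return units

print(translation_units(sensing_units("He came in spite of the rain.")))
# -> ['he', 'came', 'IN-SPITE-OF', 'the', 'rain']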
In a sense, the entire recognition problem in machine translation is a consequence of this difference between sensing units and translation units. Were it not for that, the brute-force conception that machine translation can be effected by a large enough dictionary, with some adaptations to make room for syntactic and semantic differences between the two languages, would indeed be adequate. And, needless to say, everyone who has had any experience with the field knows that this is not so. There has been a good deal of discussion in the machine translation literature of the concept of an intermediate language; the operational conception of it is much more modest, and also much more trivial from a theoretical point of view. In that case, the intermediate language is nothing more than a series of symbolic notations to record the output of the recognition routine and to serve as input into the command routine by which the text in the target language is to be generated. This, as was said, is operationally effective-it is also operationally necessary, because there must be some way in which the information gathered by the recognition routine is stored and transmitted out into the command routine. The use of the term intermediate language then becomes trivial, because this information store will certainly not have the language-like qualities which the term implies. It is further conceivable from an operational point of view, although certainly premature at the present state of machine translation research, that the same information store can be filled by a number of different recognition routines for different languages, and in turn feed into a number of different command routines for different target languages. The information store will then be combined with a kind of switchboard that will direct the appropriate recognition routine into the store and make sure that the output of the store is fed into the appropriate command routine. Thus, the theoretical efficiency talked about in the preceding paragraph is conceivable, but in a sense which for the current state of affairs is operationally trivial. Many linguistically oriented researchers in machine translation have claimed that in order for machine translation to be possible, it is necessary to account for all of the linguistic conditions that exist in a language. Some, such as Bar-Hillel (1970), have gone even further and claimed that not only linguistic conditions but also pragmatic conditions have to be accounted for in order to make machine translation of the desired quality possible. From an operational standpoint, this is an inappropriate identification of the aims of exhaustiveness in linguistic research with the aims of machine translation. Clearly, only those linguistic conditions which have a bearing on the translation process need be accommodated in a machine translation scheme. Thus, most of derivational morphology, although of great interest to the linguistic researcher, is essentially irrelevant to the translation process, since derived forms can be entered into the machine translation dictionary with their appropriate translations without going through the trouble of underlying analysis. Similarly, it is certainly not to be expected of a machine translation system, any more than of a human translator, to translate unambiguously passages which are inherently ambiguous in the source language.
Likewise, no machine translation system should be expected to account in its entirety for those pragmatic factors which under ordinary circumstances would remain obscure to the human peruser of the source language text. Quite a few linguistically oriented machine translation researchers have given a great deal of attention to automatic morphological analysis as part of the machine translation process. This analysis has been primarily concerned with attempting to determine morpheme boundaries within printed words; some researchers have limited themselves to separating inflectional endings from the base portions of the words, while other researchers have gone further than that and also included the segmentation of derivational morphological material. One of the reasons given for this has been the requirement of total accountability which was discussed in the preceding section. Another, operationally more valid, reason has been that separating inflectional endings from base portions, while it may encumber dictionary lookup, saves a great deal of storage space in the dictionary portion of the program. The reason given for segmenting derivational material has been that it facilitates the recognition of neologisms. Clearly, the latter two reasons apply primarily to "highly inflected" languages such as Russian or German. As far as the segmentation of inflectional morphemes is concerned, which some machine translation groups have called "stem-affixing", this is a perfectly reasonable space-saving procedure when it comes to high-frequency regular inflectional patterns. In the case of the so-called exceptions, particularly when the irregularities involve changes in the base portions of the words, no operational gain is derived from the segmentation of inflectional morphemes from base portions. As far as the segmentation of derivational elements is concerned, the advantages derived from the facilitation of the recognition of neologisms have to be weighed against the disadvantages of introducing an additional elaborate systems task into the design. In this author's opinion, the segmentation of compounds into their components may well be extremely useful in the recognition of neologisms. On the other hand, the segmentation of derivational morphemes from the remainder of the base portions of the words is both operationally more cumbersome than the segmentation of compounds, and less likely to yield results in the correct recognition of neologisms. It is, after all, well known that the lexical meanings of derived words, particularly in the Slavic languages, are often not predictable from the sum of the meanings of the derivational morpheme or morphemes and the remainder of the base portion.
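A toy "stem-affixing" routine of the kind discussed above; this is our own sketch with an invented suffix list, not Garvin's procedure. Regular high-frequency endings are split off so that only stems need dictionary storage, while irregular forms are stored whole, as the text argues.

ENDINGS = ["ing", "ed", "es", "s"]      # hypothetical high-frequency endings
IRREGULAR = {"went": ("go", "PAST")}    # stored unsegmented, as argued above

def segment(word: str):
    if word in IRREGULAR:               # irregular base change: no segmentation
        return IRREGULAR[word]
    for e in ENDINGS:                   # longer endings tried first
        if word.endswith(e) and len(word) - len(e) >= 3:
            return word[: -len(e)], e   # (stem for dictionary lookup, ending)
    return word, ""

print(segment("translating"))           # -> ('translat', 'ing')
print(segment("went"))                  # -> ('go', 'PAST')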
The current trend in much of linguistic theorizing has been to emphasize the significance of rules; this means, from a machine translation standpoint, that a great deal of the recognition burden is placed on the algorithmic portion, with only as much contained in the grammar code as is considered theoretically desirable. Since, however, a information. For those words which may be governed by other words, information in regard to the particular kinds of words which may govern them:for instance, in the case of adjectives, the kind of nouns to which they may be modifiers; in the case of adverbs, the kind of adjectives to which they may be modifiers. (6) Government information. Those words which govern dependent structures, the kind of dependent structures which they may govern.For instance, the kind of case a verb or noun may govern, whether or not more than one dependent structure may be governed and in which case each of the possible dependents will stand, whether or not there is prepositional government (which preposition and demanding which case), etc. (7) Subject class informationFor verbs, the class of subjects which a given verb may take, such as animate, inanimate, human, etc. (8) Object class information. The same type of information as for subject class, except of course, concerning the object which a given verb or a given governing attributive may take.The above includes only a part of the kind of information required for a complete grammar code. Much of this information is commonly considered semantic rather than grammatical; much of it has to do with not only the syntactic recognition of the sentence but also with the recognition of semantic compatibilities. A great deal more information is needed if in addition to this type of recognition correct choices are to be made in the case of multiple meaning.The issue here is whether or not the rules of the grammar of the source language should be contained in a table to be called by a parsing algorithm, or whether these rules should be written into a more elaborate algorithm of which they become an organic portion. In the first case, the machine translation program would essentially consist of three portions: a dictionary, a parsing algorithm, and a table of rules-hence, the term tripartite. In the second case, the machine translation program will consist of only two portions: a dictionary, and a translating algorithm-hence, the term bipartite.The main arguments in favor of a tripartite design are: (1) that it allows the processing by one and the same parsing algorithm of more than one table of rules; thus, if any corrections in the grammar are to be made, this involves only a relatively simple updating of a given rule table, and does not require any revision of the algorithm itself;(2) the labor of the programmer who is responsible for the parsing algorithm can be kept separate from the labor of the linguist who is responsible for the table of rules. In theory, these two advantages appear to be overwhelming. In practice, it turns out that the fundamental problem in the automatic recognition of grammatical structure of text is the correct sequencing of the application of the rules of the grammar which are supposed to effect the recognition. 
In this author's opinion, such a sequencing of the application of different grammatical rules can be effected only by making the rules of the grammar an organic part of the algorithm; this is the only way to insure that a given rule will be called only after all the conditions that are necessary for its operation have been previously recognized by other rules of the program, and that such a recognition has been effected in the correct order.This requirement of sequencing of rule application is based not only on the recognition that the grammar of a language is hierarchically structured, that is, that there are levels to be gone through. It is also based on the recognition that in addition to the levels of the language, there is also an operational order in which grammatical and other information becomes available to the program. Thus, once again, it is apparent that the operational requirement does not parallel the theoretical desiderata.As was stated above, a bipartite machine translation design is considered operationally preferable to a tripartite one. This means that the algorithmic portion of a machine translation program operates on the basis of something like a pattern recognition strategy, rather than a parsing strategy. This means that the algorithmic portion will in essence carry out a number of context searches to recover the conditions necessary to effect recognition and subsequent translation. University (Oettinger and Sherry 1961) ; the second is the author's fulcrum approach (Garvin 1968) . The basic difficulty of this approach is that the more complex a sentence, the greater the burden placed upon the hindsight; from an operational standpoint, the greatest weakness of this approach has been that the hindsight has never properly been worked out. In the fulcrum approach, on the other hand, searches are designed to use words in order of their grammatical significance, rather than in the linear order of their appearance in text. Thus the searches are directed first at those words which contain the most grammatical information from the standpoint of the recognition of a particular structure (the so called fulcra), then they branch out from these pivot words in order to encompass the remainder of the structures in question. Since not all grammatical information is retrievable in a single pass, the fulcrum approach uses a succession of passes for the retrieval of the grammatical information contained in each sentence.The reasons for which the fulcrum approach is considered operationally preferable to predictive analysis are the same for which a bipartite system is considered operationally preferable to a tripartite one: the need for the appropriate sequencing of the application of grammatical rules to the elements of the text. given sentence, but to arrive at some reasonable form of translation with the minimum of waste motion. Thus, in an operational approach to machine translation priority must be given in each case to the most likely interpretation of any given sentence in the hope that this will indeed turn out to be the interpretation applicable in the particular case. 
As the machine translation system is refined, provisions can be included for superseding this most likely interpretation in favor of a less likely one, if the latter turns out to be the one applicable to the particular case.This question is closely related to the one treated in the preceding section.A program component called filter has been used in some of the Soviet approaches to operational machine translation (cf. Mel'čuk 1964 , Iordanskaja 1967 it is known, however, that the Mel'čuk group has since turned its attention to other problems of a more theoretical nature (cf. Mel'čuk and Žolkovskij 1970).A machine translation design which gives a preferred single interpretation to each sentence obviously does not need a filter for the selection of one alternative from among many. What it does need is a capability for the revision of the one selected single alternative, in case overriding conditions in the grammatical makeup of the sentence require that it be superseded by another interpretation. The mechanism for overriding previously made determinations as to the interpretation of sentences is given by the inclusion of a heuristic capability in the machine translation design. The initial preferred interpretation of a sentence is given on the basis of information derived early in the syntactic processing. This information may have to be overridden on the basis of more powerful information obtained at later stages in processing.Consequently, the heuristic component must both recognize which interpretation; may be subject to later revisions, as well as identify the conditions on the basis of which any prior interpretation is subject to such a revision. Usually, the original interpretation is arrived at on the basis of the immediate context, and whatever revisions may be necessary arise from the inclusion of a broader, usually clause-wide, context. The advantage of combining a single preferred interpretation with a capability for revision based on heuristics is essentially that in most cases the original preferred decision, precisely because it is based on greater likelihood, may be allowed to stand. Thus a great deal of the processing involved in the use of filters can be avoided. (For a detailed discussion of the use of heuristics in the fulcrum approach, see Garvin 1968: 172-81).A great deal of discussion in the machine translation literature has been devoted to the feasibility or non-feasibility of high-quality machine translation. Much of this discussion has been quite unrelated to reality, because it has been based on an A Priori abstract conception of what constitutes high quality translation. Clearly, the question of the quality of translation has to be related to user need: the greater the need, the more it is possible to compromise with quality. This has recently been recognized even by Bar-Hillel (1970) . For many purposes, machine translation output will be only casually scanned rather than carefully read; from a great mass of documents so perused a few may then be selected for later, more careful, human translation. Another factor to be considered is the speed with which machine translation can be effected, as compared to the time required to produce good quality translation by human labor. This has, of course, been used as an excuse for the perpetuation of operating, though operationally unviable, machine translation systems. 
Nevertheless, it is one of the practical problems deserving more careful consideration than has been afforded them in the past.In the view of most observers, the greatest practical handicap in the use of machine translation has been the high cost of key-punching the original document for input into the computing system. Clearly, the only way of overcoming this handicap is by the use of automatic character recognition.Recent claims to the effect that character recognition is now feasible for a sufficient number of fonts to be practical seem to have some validity.Undoubtedly, this will have a great effect on the evaluation of the economics of machine translation in the future, provided the question can be approached with sufficient detachment from the mistakes of the past.
null
null
null
null
Main paper: Linguistic Problems. The Role of Linguistic Theory in Machine Translation: It is a commonly held view among linguists, both the few who are interested in machine translation and the many who are not, that any application of linguistics (and in the linguist's view this certainly includes machine translation research) must be based primarily on a strong linguistic theory. (For a recent statement of this view see Bar-Hillel 1970.) This is essentially a capsule view of the theoretical orientation. While nobody will deny that any applied work must have a sound theoretical basis, from an operational standpoint there are a number of things seriously wrong with an over-emphasis on theory.

(1) Machine translation is considered primarily an operational rather than a theoretical problem. Consequently, an application of sound linguistic research methods is more important than a further elaboration of linguistic theory.

(2) Most strong linguistic theories are essentially generative in nature. However, the basic problem in machine translation is not a generative but a recognition problem. Recent research in psycholinguistics has confirmed an opinion long held by this investigator, namely that a recognition problem cannot be resolved by simply reversing a generative system.

(4) It is an old operationalist adage that one can best learn by doing. This is particularly true in the case of machine translation, where the machine manipulation of linguistic data forces the investigator to recognize a great many inaccuracies and intuitive shortcuts that are usually glossed over in theoretical linguistic research. Thus, rather than relying excessively on the contributions of linguistic theory to machine translation research, one should expect significant contributions to linguistics from research on machine translation.

Linguistic models can be categorized as strong or weak, depending on whether they have strong or weak formal pretensions. Current trends in linguistics favor strong models; this is of course based on an epistemological attitude that is oriented towards the elaboration of theory rather than of method. In line with the discussion in the preceding section, it is here considered that, particularly for purposes of an application such as machine translation, weak models are to be preferred to strong ones. The reason is that strong models are considered to prejudge the direction of research in a situation in which there are too many unforeseen and as yet insufficiently known factors. Clearly, there have to be grammars of both the source and target languages at the base of any machine translation system. Equally clearly, however, these grammars need not be formal grammars; as a matter of fact, in this author's opinion descriptive grammars are strongly preferable to formal grammars for purposes of machine translation, because they are much better able to account for the indeterminacies of natural language structures which, as was so well stated by Charles F. Hockett recently, are essentially ill-defined systems (Hockett 1968: 44-45). Descriptive grammars can best be developed in a primarily method-oriented, rather than a primarily theory-oriented, frame of reference. As a matter of fact, in such a frame of reference conventional grammars may be used as a reasonable point of departure, with the necessary modifications introduced as the requirements of machine translation become apparent in the process of the development of experimental systems.
Operationally oriented machine translation research both in the United States (cf. Garvin forthcoming) and in the Soviet Union (cf. Bel'skaja 1969) has done just that. This author has made strong claims on behalf of his proposed version of an operational machine translation system (Garvin 1967); it is not known how far along comparable Soviet versions have progressed.

All linguists seem to agree that the system of language is hierarchically structured. That is, they all look upon the system of a language as having different levels, or strata, or components. From a machine translation standpoint, of course, it is most important to know which distinctions between different aspects of language are relevant for the development of machine translation systems. The least significant seems to be that between phonology and grammar, since no machine translation system to this author's knowledge is concerned with phonology at all. The most important is the distinction between grammar and lexicon, since all machine translation systems known to this author make some distinction somewhere between a dictionary lookup based on the lexicon and an algorithmic portion based in part on the grammar. Linguistic approaches differ in regard to whether the lexicon is considered a part of the grammar or a dimension separate from it. In either case, the lexicon and the grammar (or the remainder of the grammar) are kept clearly separate by most linguists. The difficulty in machine translation is that the lexicon and the grammar cannot be hermetically sealed off from each other. The dictionary and the algorithmic portion correspond only roughly to lexicon and grammar respectively; the dictionary, after all, contains a grammar code which is based on the grammar, and the algorithmic portion serves to resolve not only grammatical but also lexical ambiguities. Nevertheless, an understanding of the differences between lexicon and grammar is essential for a proper operational assessment of all the variables that enter into the design of a machine translation system.

Related to the conception of levels or strata of language is the methodological problem of conducting the analysis "from the bottom up" or "from the top down". In the first case, the minimal units of language are considered as the input into the analysis, and the output yields the maximum units, which are, for all practical purposes, the sentences of the text. In the second case, the input is the sentences and the output is a decomposition of the sentences into their constituents. Clearly, since in machine translation the grammatical information is transmitted to the algorithmic portion from the dictionary by a lookup of individual textwords, and since therefore the initial input elements into the algorithmic portion are the "bottom units", a "bottom to top" approach is the operationally most efficient one for machine translation.

Sensing Units and Translation Units: This is one of the oldest and also most important problems faced in the design of machine translation systems. Sensing units are linguistic units which the computing equipment can read, that is, for all practical purposes, strings of letters separated by spaces and/or punctuation marks. Clearly, these correspond only partially to the translation units, that is to say, the grammatical and lexical units that must be manipulated in order to effect translation.
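In present-day terms, the gap between the two kinds of units might be sketched as follows (the multiword entry and the sample sentence are hypothetical illustrations, not drawn from any system discussed here):

```python
# Sensing units: strings of letters between spaces/punctuation marks.
# Translation units: the lexical units the system must actually look up.

import re

MULTIWORD = {("in", "spite", "of"): "in_spite_of"}  # one translation unit, three sensing units

def sensing_units(text):
    return re.findall(r"[A-Za-z]+", text)

def translation_units(tokens):
    units, i = [], 0
    while i < len(tokens):
        if tuple(tokens[i:i + 3]) in MULTIWORD:      # fuse multiword expressions
            units.append(MULTIWORD[tuple(tokens[i:i + 3])])
            i += 3
        else:
            units.append(tokens[i])
            i += 1
    return units

print(translation_units(sensing_units("He persisted, in spite of the rules.")))
# ['He', 'persisted', 'in_spite_of', 'the', 'rules']
```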
The problem consists in providing the machine translation system with a capability for transforming the sensing units into appropriate translation units. In a sense, the entire recognition problem in machine translation is a consequence of this difference between sensing units and translation units. Were it not for that, the brute-force conception that machine translation can be effected by a large enough dictionary, with some adaptations to make room for syntactic and semantic differences between the two languages, would indeed be adequate. And, needless to say, everyone who has had any experience with the field knows that this is not so.

There has been a good deal of discussion in the machine translation literature of the concept of an intermediate language, into which all source languages could be translated and from which all target languages could then be generated, an arrangement of obvious theoretical efficiency. The operationally effective version of an intermediate language is, however, also much more trivial from a theoretical point of view. In that case, the intermediate language is nothing more than a series of symbolic notations to record the output of the recognition routine and to serve as input into the command routine by which the text in the target language is to be generated. This, as was said, is operationally effective; it is also operationally necessary, because there must be some way in which the information gathered by the recognition routine is stored and transmitted to the command routine. The use of the term intermediate language then becomes trivial, because this information store will certainly not have the language-like qualities which the term implies. It is further conceivable from an operational point of view, although certainly premature at the present state of machine translation research, that the same information store can be filled by a number of different recognition routines for different languages, and in turn feed into a number of different command routines for different target languages. The information store then will be combined with a kind of switchboard that will direct the output of the appropriate recognition routine into the store and make sure that the output of the store is fed into the appropriate command routine. Thus, the theoretical efficiency mentioned above is conceivable, but in a sense which for the current state of affairs is operationally trivial.
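The store-and-switchboard conception can be sketched schematically (all names and the two toy routines are assumed for illustration; no actual system is being reproduced):

```python
# Sketch of the "information store" plus "switchboard" conception:
# recognition routines fill a language-neutral record; command routines
# read it to generate target-language text. Toy routines only.

def recognize_ru(sentence):
    """Hypothetical recognition routine: record words plus minimal codes."""
    tokens = sentence.split()
    return {"tokens": tokens, "codes": ["UNK"] * len(tokens)}

def command_en(store):
    """Hypothetical command routine: generate target text from the store."""
    return " ".join(store["tokens"])  # placeholder generation

RECOGNIZERS = {"ru": recognize_ru}   # one entry per source language
COMMANDS = {"en": command_en}        # one entry per target language

def translate(sentence, source, target):
    # The switchboard: route the right recognizer's output into the store,
    # then feed the store into the right command routine.
    store = RECOGNIZERS[source](sentence)
    return COMMANDS[target](store)

print(translate("primer tekst", "ru", "en"))
```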
Many linguistically oriented researchers in machine translation have claimed that in order for machine translation to be possible, it is necessary to account for all of the linguistic conditions that exist in a language. Some, such as Bar-Hillel (1970), have gone even further and claimed that not only linguistic conditions but also pragmatic conditions have to be accounted for in order to make machine translation of the desired quality possible. From an operational standpoint, this is an inappropriate identification of the aims of exhaustiveness in linguistic research with the aims of machine translation. Clearly, only those linguistic conditions which have a bearing on the translation process need be accommodated in a machine translation scheme. Thus, most of derivational morphology, although of great interest to the linguistic researcher, is essentially irrelevant to the translation process, since derived forms can be entered into the machine translation dictionary with their appropriate translations without going through the trouble of underlying analysis. Similarly, it is certainly not to be expected of a machine translation system, any more than of a human translator, to translate unambiguously passages which are inherently ambiguous in the source language. Likewise, no machine translation system should be expected to account in its entirety for those pragmatic factors which under ordinary circumstances would remain obscure to the human peruser of the source language text.

Quite a few linguistically oriented machine translation researchers have given a great deal of attention to automatic morphological analysis as part of the machine translation process. This analysis has been primarily concerned with attempting to determine morpheme boundaries within printed words; some researchers have limited themselves to separating inflectional endings from the base portions of the words, while others have gone further and also included the segmentation of derivational morphological material. One of the reasons given for this has been the requirement of total accountability which was discussed in the preceding section. Another, operationally more valid, reason has been that separating inflectional endings from base portions, while it may encumber dictionary lookup, saves a great deal of storage space in the dictionary portion of the program. The reason given for segmenting derivational material has been that it facilitates the recognition of neologisms. Clearly, the latter two reasons apply primarily to "highly inflected" languages such as Russian or German.

As far as the segmentation of inflectional morphemes is concerned, which some machine translation groups have called "stem-affixing", this is a perfectly reasonable space-saving procedure when it comes to high-frequency regular inflectional patterns. In the case of the so-called exceptions, particularly when the irregularities involve changes in the base portions of the words, no operational gain is derived from the segmentation of inflectional morphemes from base portions.

As far as the segmentation of derivational elements is concerned, the advantages derived from the facilitation of the recognition of neologisms have to be weighed against the disadvantages of introducing an additional elaborate systems task into the design. In this author's opinion, the segmentation of compounds into their components may well be extremely useful in the recognition of neologisms. On the other hand, the segmentation of derivational morphemes from the remainder of the base portions of the words is both operationally more cumbersome than the segmentation of compounds and less likely to yield results in the correct recognition of neologisms. It is, after all, well known that the lexical meanings of derived words, particularly in the Slavic languages, are often not predictable from the sum of the meanings of the derivational morpheme or morphemes and the remainder of the base portion.
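A minimal sketch of stem-affixing follows (the stems, endings, and glosses are hypothetical; a real system would use full paradigm tables): regular forms are stored once as a stem and stripped of their endings at lookup time, while irregular forms are listed whole.

```python
# Stem-affix lookup: regular inflectional endings are stripped so that one
# stem entry covers a whole paradigm; irregular forms are stored unsegmented.

ENDINGS = ["ами", "ах", "ам", "ой", "у", "е", "ы", "а", ""]  # toy Russian noun endings, longest first
STEMS = {"книг": "book"}            # stem entries (hypothetical)
WHOLE_FORMS = {"люди": "people"}    # irregular forms stored whole

def lookup(form):
    if form in WHOLE_FORMS:                  # exceptions first: no gain from splitting
        return WHOLE_FORMS[form]
    for ending in ENDINGS:
        if form.endswith(ending):
            stem = form[: len(form) - len(ending)]
            if stem in STEMS:
                return STEMS[stem]
    return None                              # no entry found: candidate neologism

print(lookup("книгами"), lookup("люди"))     # -> book people
```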
Most workers in the field of machine translation agree that grammatical information is stored in the form of grammar codes in the dictionary of the system; the term grammatical information is here used loosely to include whatever lexical and other semantic information is available to the program. This information is then called by the algorithmic portion of the system for further processing to effect the required recognition of the source language input and subsequent generation of the target language output. This raises the question as to how much information is to be stored in the grammar code, and how much of the recognition and subsequent generation task therefore is to be left to the algorithmic portion.

The current trend in much of linguistic theorizing has been to emphasize the significance of rules; this means, from a machine translation standpoint, that a great deal of the recognition burden is placed on the algorithmic portion, with only as much contained in the grammar code as is considered theoretically desirable. Since, however, a great deal of word-by-word information must in practice be available to the algorithmic portion, the grammar code has to include, among other kinds of information, the following:

(5) For those words which may be governed by other words, information in regard to the particular kinds of words which may govern them: for instance, in the case of adjectives, the kind of nouns to which they may be modifiers; in the case of adverbs, the kind of adjectives to which they may be modifiers.

(6) Government information. For those words which govern dependent structures, the kind of dependent structures which they may govern: for instance, the kind of case a verb or noun may govern; whether or not more than one dependent structure may be governed, and in which case each of the possible dependents will stand; whether or not there is prepositional government (which preposition, and demanding which case); etc.

(7) Subject class information. For verbs, the class of subjects which a given verb may take, such as animate, inanimate, human, etc.

(8) Object class information. The same type of information as for subject class, except, of course, concerning the object which a given verb or a given governing attributive may take.

The above includes only a part of the kind of information required for a complete grammar code. Much of this information is commonly considered semantic rather than grammatical; much of it has to do not only with the syntactic recognition of the sentence but also with the recognition of semantic compatibilities. A great deal more information is needed if, in addition to this type of recognition, correct choices are to be made in the case of multiple meaning.
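By way of illustration, a single dictionary entry carrying a grammar code might look as follows (the field names and values are hypothetical, and far sparser than the enumeration above would require):

```python
# A toy dictionary entry carrying a grammar code. A real code would include
# agreement, government, subject/object class, and much semantic information.

DICTIONARY = {
    "читать": {
        "gloss": "read",
        "pos": "verb",
        "governs": {"case": "accusative", "max_dependents": 1},
        "subject_class": {"animate", "human"},
        "object_class": {"inanimate"},
    },
}

entry = DICTIONARY["читать"]
# The algorithmic portion consults the code, e.g. to check that a candidate
# subject is compatible with the verb's subject class:
candidate_subject_class = "human"
print(candidate_subject_class in entry["subject_class"])  # True
```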
The issue here is whether the rules of the grammar of the source language should be contained in a table to be called by a parsing algorithm, or whether these rules should be written into a more elaborate algorithm of which they become an organic portion. In the first case, the machine translation program would essentially consist of three portions: a dictionary, a parsing algorithm, and a table of rules; hence the term tripartite. In the second case, the machine translation program will consist of only two portions: a dictionary and a translating algorithm; hence the term bipartite.

The main arguments in favor of a tripartite design are: (1) that it allows the processing by one and the same parsing algorithm of more than one table of rules; thus, if any corrections in the grammar are to be made, this involves only a relatively simple updating of a given rule table, and does not require any revision of the algorithm itself; (2) that the labor of the programmer who is responsible for the parsing algorithm can be kept separate from the labor of the linguist who is responsible for the table of rules. In theory, these two advantages appear to be overwhelming. In practice, it turns out that the fundamental problem in the automatic recognition of the grammatical structure of text is the correct sequencing of the application of the rules of the grammar which are supposed to effect the recognition. In this author's opinion, such a sequencing of the application of different grammatical rules can be effected only by making the rules of the grammar an organic part of the algorithm; this is the only way to insure that a given rule will be called only after all the conditions that are necessary for its operation have been previously recognized by other rules of the program, and that such a recognition has been effected in the correct order.

This requirement of sequencing of rule application is based not only on the recognition that the grammar of a language is hierarchically structured, that is, that there are levels to be gone through. It is also based on the recognition that in addition to the levels of the language, there is also an operational order in which grammatical and other information becomes available to the program. Thus, once again, it is apparent that the operational requirement does not parallel the theoretical desiderata.

As was stated above, a bipartite machine translation design is considered operationally preferable to a tripartite one. This means that the algorithmic portion of a machine translation program operates on the basis of something like a pattern recognition strategy, rather than a parsing strategy: it will in essence carry out a number of context searches to recover the conditions necessary to effect recognition and subsequent translation. Two such search strategies have been developed: the first is the predictive analysis developed at Harvard University (Oettinger and Sherry 1961); the second is the author's fulcrum approach (Garvin 1968). The basic difficulty of the former approach is that the more complex a sentence, the greater the burden placed upon the hindsight; from an operational standpoint, its greatest weakness has been that the hindsight has never properly been worked out. In the fulcrum approach, on the other hand, searches are designed to use words in order of their grammatical significance, rather than in the linear order of their appearance in text. Thus the searches are directed first at those words which contain the most grammatical information from the standpoint of the recognition of a particular structure (the so-called fulcra); then they branch out from these pivot words in order to encompass the remainder of the structures in question. Since not all grammatical information is retrievable in a single pass, the fulcrum approach uses a succession of passes for the retrieval of the grammatical information contained in each sentence. The reasons for which the fulcrum approach is considered operationally preferable to predictive analysis are the same as those for which a bipartite system is considered operationally preferable to a tripartite one: the need for the appropriate sequencing of the application of grammatical rules to the elements of the text.
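A much-simplified sketch of a single fulcrum-style pass follows (the part-of-speech codes, the rule, and the example are hypothetical; the actual system used many passes and far richer codes). Note that the grammatical rule is written directly into the algorithm, in keeping with the bipartite conception.

```python
# Toy fulcrum-style pass: find the pass's pivot words (fulcra) first,
# then branch out from each pivot to claim its dependents. The grammar
# rule lives in the code itself (bipartite design), not in a rule table.

sentence = [
    {"word": "old", "pos": "ADJ"},
    {"word": "reactor", "pos": "NOUN"},
    {"word": "operates", "pos": "VERB"},
]

def noun_phrase_pass(words):
    """Pass 1: nouns are the fulcra; adjectives to their left attach to them."""
    phrases = []
    for i, w in enumerate(words):
        if w["pos"] == "NOUN":                      # search pivots first,
            phrase = [w["word"]]                    # not left-to-right text order
            j = i - 1
            while j >= 0 and words[j]["pos"] == "ADJ":
                phrase.insert(0, words[j]["word"])  # branch out from the pivot
                j -= 1
            phrases.append(phrase)
    return phrases

print(noun_phrase_pass(sentence))  # [['old', 'reactor']]
```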
The aim of an operational system is not to recover every theoretically possible interpretation of a given sentence, but to arrive at some reasonable form of translation with the minimum of waste motion. Thus, in an operational approach to machine translation, priority must be given in each case to the most likely interpretation of any given sentence, in the hope that this will indeed turn out to be the interpretation applicable in the particular case. As the machine translation system is refined, provisions can be included for superseding this most likely interpretation in favor of a less likely one, if the latter turns out to be the one applicable to the particular case.

This question is closely related to the one treated in the preceding section. A program component called a filter, serving to select the correct alternative from among the several analyses produced for a sentence, has been used in some of the Soviet approaches to operational machine translation (cf. Mel'čuk 1964, Iordanskaja 1967); it is known, however, that the Mel'čuk group has since turned its attention to other problems of a more theoretical nature (cf. Mel'čuk and Žolkovskij 1970).

A machine translation design which gives a preferred single interpretation to each sentence obviously does not need a filter for the selection of one alternative from among many. What it does need is a capability for the revision of the one selected alternative, in case overriding conditions in the grammatical makeup of the sentence require that it be superseded by another interpretation. The mechanism for overriding previously made determinations as to the interpretation of sentences is given by the inclusion of a heuristic capability in the machine translation design. The initial preferred interpretation of a sentence is given on the basis of information derived early in the syntactic processing. This information may have to be overridden on the basis of more powerful information obtained at later stages in processing. Consequently, the heuristic component must both recognize which interpretations may be subject to later revision and identify the conditions on the basis of which any prior interpretation is subject to such a revision. Usually, the original interpretation is arrived at on the basis of the immediate context, and whatever revisions may be necessary arise from the inclusion of a broader, usually clause-wide, context. The advantage of combining a single preferred interpretation with a capability for revision based on heuristics is essentially that in most cases the original preferred decision, precisely because it is based on greater likelihood, may be allowed to stand. Thus a great deal of the processing involved in the use of filters can be avoided. (For a detailed discussion of the use of heuristics in the fulcrum approach, see Garvin 1968: 172-81.)
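Schematically, the preferred-interpretation-plus-revision mechanism might look as follows (the codes, the trigger condition, and the example are hypothetical):

```python
# Heuristic revision: commit early to the most likely reading, but record
# that it is revisable; override it only if stronger clause-wide evidence
# turns up at a later stage of processing.

def interpret(clause_codes):
    # Early, local decision: read an initial case-ambiguous noun as the
    # subject (assumed here to be the statistically likelier reading).
    reading = {"role_of_first_noun": "subject", "revisable": True}

    # Later, clause-wide check: if another unambiguous nominative appears,
    # the earlier decision is superseded.
    if reading["revisable"] and "NOM_UNAMBIGUOUS" in clause_codes[1:]:
        reading["role_of_first_noun"] = "object"
        reading["revisable"] = False
    return reading

print(interpret(["NOM_OR_ACC", "VERB", "NOM_UNAMBIGUOUS"]))
# {'role_of_first_noun': 'object', 'revisable': False}
```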
A great deal of discussion in the machine translation literature has been devoted to the feasibility or non-feasibility of high-quality machine translation. Much of this discussion has been quite unrelated to reality, because it has been based on an a priori abstract conception of what constitutes high-quality translation. Clearly, the question of the quality of translation has to be related to user need: the greater the need, the more it is possible to compromise with quality. This has recently been recognized even by Bar-Hillel (1970). For many purposes, machine translation output will be only casually scanned rather than carefully read; from a great mass of documents so perused, a few may then be selected for later, more careful, human translation. Another factor to be considered is the speed with which machine translation can be effected, as compared to the time required to produce good-quality translation by human labor. This has, of course, been used as an excuse for the perpetuation of operating, though operationally unviable, machine translation systems. Nevertheless, it is one of the practical problems deserving more careful consideration than has been afforded it in the past.

In the view of most observers, the greatest practical handicap in the use of machine translation has been the high cost of key-punching the original document for input into the computing system. Clearly, the only way of overcoming this handicap is by the use of automatic character recognition. Recent claims to the effect that character recognition is now feasible for a sufficient number of fonts to be practical seem to have some validity. Undoubtedly, this will have a great effect on the evaluation of the economics of machine translation in the future, provided the question can be approached with sufficient detachment from the mistakes of the past.

Appendix:
null
null
null
null
{ "paperhash": [ "|some_comments_on_algorithm_and_grammar_in_the_automatic_parsing_of_natural_languages" ], "title": [ "Some Comments on Algorithm and Grammar in the Automatic Parsing of Natural Languages" ], "abstract": [ "The purpose of this paper is to examine the oft-repeated assertion regarding the efficiency of a \"simple parsing algorithm\" combinable with a variety of different grammars written in the form of appropriate tables of rules. The paper raises the question of the increasing complexity of the tables when more than the most elementary natural-language conditions are included, as well as the question of the ordering of the rules within such nonelementary tables. Some concrete examples from the field of machine translation will be given in the final version of the paper. Some conclusions are presented." ], "authors": [ { "name": [], "affiliation": [] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "31558712" ], "intents": [ [] ], "isInfluential": [ false ] }
Problem: The paper discusses the operational problems of machine translation research in light of authoritative statements questioning the feasibility of achieving machine translation. Solution: The hypothesis of the paper is that operational problems in machine translation research can be addressed effectively with an operational orientation, focusing on reasonable operational objectives rather than a predominantly theoretical approach.
638
0
null
null
null
null
null
null
null
null
0bc5b1ac05e0487de33ceda528054d7d62a81489
46532082
null
Toward a theory of computational linguistics
To begin with, I would like to assert that computational linguistics (henceforth: CL), despite its qualifying adjective, has to do with human behavior, and, in particular, with that subset of human behavioral patterns that we study in linguistics. In other words, the aim of CL as a science is to explain human behavior insofar as it avails itself of the possibilities inherent in man's faculty of speech. In this sense, CL and linguistics proper both pursue the same aim. However, there are differences, as we will see shortly; for the moment, let us just establish that CL can be considered as a subfield of linguistics, and leave the delineation of the boundaries for later. An important notion in behavioral sciences is that of a model as a set of hypotheses and empirical assumptions leading to certain testable conclusions, called predictions (on this, cf., e.g. Braithwaite 1968; Šaumjan 1966). I would like to call this kind of model the descriptive one. "Descriptive" here is not taken in the sense that Chomsky distinguishes descriptive adequacy from explanatory adequacy: indeed, the function of the descriptive model is to explain, as will become clear below. However, there is another respect in which the descriptive model reminds one of some of the characteristics attributed to Chomskyan models: it need not be (and should not be) considered a "faithful" reproduction of reality, in the sense that to each part of the model there corresponds, by some kind of isomorphic mapping, a particular chunk of "real" life. In other words, this descriptive kind of model does not attempt to imitate the behavior of its descriptum. The other kind of model I propose to call the simulative one. As the name indicates, we are dealing with a conscious effort to picture, point by point, the
{ "name": [ "Mey, Jacob" ], "affiliation": [ null ] }
null
null
Feasibility Study on Fully Automatic High Quality Translation
1971-12-01
8
1
null
activities that we want to describe. Of course, the simulative model, in order to be scientifically interesting, must attempt to explain; a machina loquax, to use Ceccato's expression (1967), is no good if there is a deus in machina. Although the idea of building homunculi, robots, and whatever else they are called is not exactly a new one, the advent of the computer made it possible to conduct these experiments on a hitherto unknown scale, both with regard to dimensions and to exactitude. In fact, one of the popular views of the computer is exactly that: a man-like machine. Interestingly, the fears connected with this kind of image (such as an impending take-over by some super-computer like HAL in the movie "2001") have their counterpart in certain objections that are sometimes voiced against the other kind of model, the descriptive one: namely, that it de-humanizes human activities (such as speech), and establishes a new kind of man, made in the machine's image: machine-like man. Below, in section 3, I will discuss some of the implications of these views for computational linguistics; but first I want to raise the question: what importance do the two kinds of models have for linguistics itself?

Competence and Performance: The distinction between competence and performance in linguistics has been belabored often enough to let me squeak by here with a short restatement of Chomsky's remarks in Aspects (1965: 4 et pass.): competence is the speaker's knowledge of a language, performance is what he actually does with his knowledge in a given situation that involves linguistic activity. A theory of competence, Chomsky says, is not a model of the speaker-hearer; according to the distinction made in section 1 above, I would rather say that it is not a simulative model, but a descriptive one. In other words, the model that is a grammar does not attempt to explain linguistic activity on the part of the speaker or hearer by appealing to direct similarities between that activity and the rules of the grammar. Rather, the activity of the speaker (his performance) is explained by pointing to the fact that the rules give exactly the same result (if they are correct, that is) as does the performance of the speaker-hearer: the set of all possible utterances of a given language.
Notice that this ideal performance does not coincide with that of Chomsky's "ideal speaker-hearer" of the language: as I understand this person, he is some kind of linguistic Superman (with unlimited memory, boundless embedding facilities, etc.). In other words, Chomsky's "ideal speaker" reflects competence rather than performance (in Chomsky's sense). To take a very simple example: the set of sentences generated by a grammar is potentially infinite; this is a fact of competence. However, any actual speaker or set of speakers will always generate some finite subset of the set of all possible sentences: a fact of performance. On a more sophisticated level, consider such questions as: why is it the case that regressive embedding beyond a certain bound is unacceptable? Chomsky calls sentences such as The rat the cat the dog chased killed ate the malt "perfectly grammatical" (1963:286); true enough, if one understands by this term: generatable by a competence model. But a performance model would have to incorporate some restrictions by which these "improbable and confusing" sentences (Chomsky, ibid.) would be ruled out. Actually, much of the research in the fields of psycho-, socio-, neuro-, etc., linguistics deals with performance; it is my thesis that computational linguistics, too, is a province of the same realm.

Competence and Performance in CL: The next question to be answered is: how do these theoretical considerations reflect on past and current work in CL? Until recently, very little attention has been paid to the performance aspect of CL. The only really large-scale computer-aided research in performance has been concentrating on machine translation and related areas. The lack of success that characterized these efforts has been material in turning off research funds as well as researchers. The result has been that CL workers now mainly direct their attention to such questions as: how to implement grammars on the machine; and: how to let the machine take over some of the work that linguists traditionally have done by hand? An example of the first kind is the transformational grammar developed by Friedman c.s. at Ann Arbor, formerly Stanford (1968 et seqq.); work in the second category ranges all the way from fairly unsophisticated and theoretically uninteresting "book-keeping" and "fact-finding" aids to theoretically motivated work in the development of syntactic and phonological rule testers (e.g., Londe & Schoene 1968; Fraser 1969). Common to this type of research is its ancillary character: these (descriptive) models purport to be an aid in the establishment of a theory of competence. As to performance (and, by inclusion, simulation), it is interesting to note that some of the more worthwhile remarks on the subject go back to Syntactic Structures (1957:48). It should be kept in mind, though, that the grammars discussed here are concerned with competence, and that performance, in early generative grammar, was thought of as something less than ideal. I have the feeling, however, that the Manichaean streak which accompanied the distinction competence-performance at its birth is about to lose its power, and that competence now is seen as relevant only inasmuch as it can explain performance. But why talk about a theory of performance at all, then? Would it not be possible, with people such as Bar-Hillel (1970), to abolish the distinction altogether, and say: "competence is the theory of performance" or something similar? In the following, I will attempt to show that a theory of performance serves a purpose of its own, dependent on, but distinct from, a theory of competence.
In the following, I will attempt to show that a theory of performance serves a purpose of its own, dependent on, but distinct from a theory of competence.In this section, I will conduct a Gedankenexperiment. *) Let us imagine two computers (or two computer programs), one (A) with the characteristics of a competence model (e.g., a system analogous to the transformational grammar described confronted with a sentence that does not conform to their specifications. To take a concrete example, take the sentence: Colorless green ideas sleep furiously. Suppose A has built-in restrictions that, among other things, state that the subject of sleep has to be [+Animate], that the adjective green selects a [+Concrete ] noun, and so on.Since the sentence presented to A violates almost all of the given selectional restrictions, the result would predictably be that A prints out a "reject" message, possibly with the reasons for rejection attached.What would our "Zwittermaschine" (Klee 1926 ) B do? Since B is a model of a human, and expressly purports to imitate human behavior, we can look towards a human hearer to obtain an answer. (Klee wouldn't lie). I think it was Arch Hill who first remarked that such deviant sentences sometimes are very well received by humans; in some of his experiments, students thought sentences like the above to be not only "modern poetry", but "good modern poetry" (Hill 1961) . There is also a persistent rumor around that Dell Hymes, having read Syntactic Structures, promptly sat down and conceived a poem whose first line read: "Colorless green ideas sleep furiously, . . . ". Not to mention, of course, that all-time status symbol, the bumper sticker carrying the same text and serving to fatten the pockets of some enterprising graduate student, while providing the more well-heeled members of the trade with a convenient shibboleth. To come back to our machine B: under the given presuppositions, it would have to find some way of imitating this human behavior, so disturbing to the creators of the selectional restrictions designed to produce the ultimate impossible sentence. For let us face it: there is no sentence so impossible that some human, in some devious way, cannot assign a possible interpretation to it. A quick glance at modern poetry will convince even the most incredulous (see also an article by Joseph Featherstone in The New Republic, 11 July 1970, "On Teaching Writing", where some interesting experiments in teaching children how to write poetry are described). This is not to say that selectional restrictions are for the birds (not even the one sitting perched on the leftmost handle of Klee's machine); only that it seems to be an innate human trait always to try to make the best of seemingly impossible linguistic input. If a machine loquax (or audiens, for that matter) wants to be true to its name, it will have to imitate this kind of behavior, and by doing so, explain some or all of it. 1 ) And at this point I wish to discontinue the Gedankenexperiment, since I do not know how to make my machine do all this. But I hope to have made the issue clear: a simulative model, such as the one described, is different from a descriptive model. The difference becomes even clearer when one tries to implement both models on a computer. The simulative model requires a theoretical base of its own, since the theory of competence, by its own assumptions, rules out some phenomena that were described as typical for the human-like device. 
Conclusion: if CL wants to address itself to problems such as the ones involved in our little experiment, it will have to provide a wider theoretical base than the one accepted by most CL workers thus far. What we need is a theory of performance with special reference to CL.

Some Further Perspectives: In this final section, I will try to briefly indicate some of the areas in which I think a performance theory will be of use to CL. I will not propose any concrete solutions to any problems raised. The only aim I have set myself here is to provide some central perspective that I think may be fruitful to those working with the actual problems.

As a general preamble, I would like to discuss the question: what do we want to use CL and CL methods for? If the answer is: as an ancillary to theoretical linguistics, i.e., as a practical aid in solving some of the problems that theoretical linguistics poses, then the theory of CL is simply the theory of linguistics. Applications of this theory include, on the one hand, such uses as grammar testers; on the other, such purely mechanical aids as automated dictionaries, programs for finding certain morphemes in a corpus, etc. If, on the other hand, the answer is: to implement and perfect actually working models of human behavior in the area of speech production and recognition, then CL needs a theory of its own. Some of the aspects of such a theory are covered, or should be, in what one might call "general robotology" (for some ideas on this, cf. Simon (1968)): questions pertaining to the interaction between robot and man, or even the "computer use of human beings", to paraphrase Wiener. Another general question is that of the degree of fidelity in simulation of human behavior, and the best way to implement this simulation. For example, what exactly does it mean "to achieve a point by point imitation of human behavior"? Surely we do not want to reproduce certain states of the human that we consider irrelevant to the simulated process? In actual speech production, to take one example, we may very frequently be confronted with poor performance on account of extraneous conditions (colds, objects in the mouth, drowsiness of the subject, etc.). For a linguist, there is little point in examining and wishing to simulate these conditions. True, in marginal instances abnormal conditions may throw light on certain otherwise obscured processes; but this is not usually so. But even abstracting from these cases, there are areas where the difference between a competence approach and a performance approach manifests itself in the simulative set-up. Take again the example of embedded sentences. Despite the fact that the recursive embedding rule permits unlimited embedding, actual sentences will always be finite, hence contain a finite number of embedded clauses. Hence the question arises: can we set an upper bound for embeddings such that, for a particular sentence, the depth of embedding will not exceed that bound? And, more importantly, how can we linguistically motivate such a decision?
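One could imagine a performance-oriented recognizer enforcing such a bound explicitly; in a toy sketch (the bound of 2 and the crude depth count are assumptions, precisely the kind of choice that would need linguistic motivation):

```python
# A competence grammar permits unlimited center-embedding; a performance
# model might reject sentences whose embedding depth exceeds a fixed bound.

MAX_DEPTH = 2  # hypothetical performance bound, not linguistically derived

VERBS = {"chased", "killed", "ate"}

def np_stack_depth(tokens):
    """Max number of stacked 'the N' subjects pending before their verbs."""
    depth = max_depth = 0
    for tok in tokens:
        if tok == "the":
            depth += 1
            max_depth = max(max_depth, depth)
        elif tok in VERBS:
            depth -= 1
    return max_depth

sentence = "the rat the cat the dog chased killed ate the malt".split()
depth = np_stack_depth(sentence)
print(depth, "acceptable" if depth <= MAX_DEPTH else "unacceptable")  # 3 unacceptable
```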
Certain problems in the field of information retrieval have an affinity to certain linguistic performance problems. For example, given a certain input to a question-answering system, how can one minimize the number of spurious answers, especially in the case of an imperfectly formulated question? Parallel to this is the problem of perfect understanding of imperfect questions by humans: how much do we really need to identify a given question and produce the correct answer? Traditionally, computational linguists have proceeded from the assumption that one first had to decompose the structure of the sentence (the question), then assign it a semantic interpretation, which subsequently is matched with the data file and produces the correct output. However, it seems clear that humans, in their analysis of linguistic input, often bypass the syntactic part and go straight for the semantics. A very simple and inadequate illustration is found in newspaper titles; a better one is provided by the ease with which small children handle conceptual structures without having the syntax correct. My own under-fours often produce rather complicated "sentences" that are perfectly intelligible, although syntactically completely ill-formed (or non-formed). As an example, consider the following: far gå huse ikke (Norwegian), where the negation ikke is placed at the end of the sentence ('daddy go house not', i.e., 'daddy don't go to your study'). The most interesting thing about my 3-year-old daughter's negative sentences is that the negation particle invariably is placed at the end, no matter how long the sentence. Think of the savings in syntactic analysis time we would obtain if we had this kind of input to English question-answering systems! Furthermore, in a construction such as the one above, certain transformations (NEG-placement, e.g.) are clearly being omitted; but this does not affect the recognizability of the sentence by a human, or even by a computer that would be programmed to recognize deep, rather than surface, structures. Consider also the ease with which a computer could simulate such negative sentences, rather than spend costly time on rearranging the not's, nicht's, and so on that are the horror of freshman classes in ESL or German.

I am convinced that simulation experiments will prove to be extremely useful by pointing up phenomena about human speech use that at present are being obscured by the overly abstract approach to grammar of the last decade or so. Current research in applied linguistics as well as in the so-called "hyphenated" areas seems to confirm the trend that is apparent in theoretical linguistics proper: a greater concern for naturalness and directness in explaining the phenomena of language, with an emphasis on semantics rather than syntax, also in CL.

1) Of course, it would take a machine both loquax and audiens. So why not audax?
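As a toy illustration of the point about clause-final negation made above (the 'deep' roles and the treatment of huse as a goal are assumptions): a recognizer that goes straight for the underlying structure can read the child's sentence without first normalizing the position of the negation particle.

```python
# Recognize a child's clause-final negation without rearranging the surface
# string: strip a trailing "ikke", mark the proposition as negated, and
# match the remaining content words directly (deep rather than surface).

def parse_child_utterance(utterance):
    tokens = utterance.split()
    negated = tokens[-1] == "ikke"          # clause-final negation particle
    content = tokens[:-1] if negated else tokens
    return {
        "agent": content[0],
        "predicate": content[1],
        "goal": content[2] if len(content) > 2 else None,
        "negated": negated,
    }

print(parse_child_utterance("far gå huse ikke"))
# {'agent': 'far', 'predicate': 'gå', 'goal': 'huse', 'negated': True}
```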
null
null
null
null
are clearly being omitted;but this does not affect the recognizability of the sentence by a human, or even by a computer that would be programmed to recognize deep, rather than surface, structures. Consider also the ease with which a computer could simulate such negative sentences, rather than spend costly time on rearranging the not's, nicht's, and so on that are the horror of freshman classes in ESL or German.I am convinced that simulation experiments will prove to be extremely useful by pointing up phenomena about human speech use that at present are being obscured by the overly abstract approach to grammar of the last decade or so. Current research in applied linguistics as well as in the so-called "hyphenated" areas seems to confirm the trend that is apparent in theoretical linguistics proper: a greater concern for naturalness and directness in explaining the phenomena of language, with an emphasis on semantics rather than syntax, also in CL.Of course it would take a machine both loquax and audiens. So why not audax? : activities that we want to describe. Of course, the simulative model, in order to be scientifically interesting, must attempt to explain; a machina loquax, to use Ceccato's expression (1967) is no good if there is a deus in machina. Although the idea of building homunculi, robots and what else they are called is not exactly a new one, the advent of the computer made it possible to conduct these experiments on a hitherto unknown scale, both with regard to dimensions and to exactitude. In fact, one of the popular views of the computer is exactly that: a man-like machine.Interestingly, the fears connected with this kind of image (such as an impending take-over by some super-computer like HAL in the movie "2001") have their counterpart in certain objections that are sometimes voiced against the other kind of model, the descriptive one: namely, that it de-humanizes human activities (such as speech), and establishes a new kind of man, made in the machine's image: machine-like man.Below, in section 3, I will discuss some of the implications of these views for computational linguistics; but first I want to raise the question: what importance do the two kinds of models have for linguistics itself?Competence and PerformanceThe distinction between competence and performance in linguistics has been belabored often enough to let me squeak by here with a short restatement of Chomsky's remarks in Aspects (1965: 4 et pass.) : competence is the speaker's knowledge of a language, performance is what he actually does with his knowledge in a given situation that involves linguistic activity. A theory of competence, Chomsky says, is not a model of the speaker-hearer; according to the distinction made in section 1, above, Iwould rather say that it is not a simulative model, but a descriptive one. In other words, the model that is a grammar does not attempt to explain linguistic activity on the part of the speaker or hearer by appealing to direct similarities between that activity and the rules of the grammar. Rather, the activity of the speaker (his performance) is explained by pointing to the fact that the rules give exactly the same result (if they are correct, that is) as does the performance of the speaker-hearer:the set of all possible utterances of a given language.Although a theory of performance thus is closer to the idea of simulating an actual linguistic situation, it is by no means identical with the simulative model. 
Rather, simulating actual linguistic activity depends on such a theory for its success; without it, a simulative model will be of little interest to linguists. To take an example: in any concrete linguistic situation there will be a lot of "unexplained" phenomena, such as hemming and hawing, false starts, anacolouths, etc. I feel that Chomsky is wrong in ascribing all of this to what he calls performance: linguistic theory should not account for these aspects of speech (they belong more properly in what one might call "corrective linguistics"). A simulative model wanting to represent this kind of "performance" would be a waste of energy and time.What, then, is the proper object of a theory of linguistic performance? To understand this question is to answer it: if performance by definition is actual human activity, then linguistic performance is activity exercised by humans in the form of speech acts. In terms of the restriction made in the preceding paragraph, our "ideal" performance is that activity minus irrelevant "noise". Notice that this ideal performance does not coincide with that of Chomsky's "ideal speaker-hearer" of the language: as I understand this person, he is some kind of linguistic Superman (with unlimited memory, boundless embedding facilities, etc.). In other words, Chomsky's "ideal speaker" reflects competence rather than performance (in Chomsky's sense). To take a very simple example: the set of sentences generated by a grammar is potentially infinite;this is a fact of competence. However, any actual speaker or set of speakers will always generate some finite subset of the set of all possible sentences: a fact of performance. On a more sophisticated level, consider such questions as: why is it the case that regressive embedding beyond a certain bound is unacceptable? Chomsky calls sentences such as The rat the cat the dog chased killed ate the malt "perfectly grammatical" (1963:286) ; true enough, if one understands by this term: generatable by a competence model. But a performance model would have to incorporate some restrictions by which these "improbable and confusing" sentences (Chomsky, ibid.) would be ruled out. Actually, much of the research in the fields of psycho-, socio-, neuro-, etc., linguistics deals with performance; it is my thesis that computational linguistics, too, is a province of the same realm. Appendix:
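The point about regressive embedding invites a small illustration of what such a performance restriction might look like. The sketch below is my own construction under stated assumptions: the bracket encoding of relative clauses and the particular bound are arbitrary choices, not claims from the text. A competence grammar would generate all of these strings; the filter merely rejects the ones past the bound.

```python
# A toy performance filter: the competence grammar generates arbitrarily
# deep center-embeddings, but acceptability is cut off at a fixed bound.
# Relative clauses are encoded here as nested brackets, e.g.
# "the rat [the cat [the dog chased] killed] ate the malt".

EMBEDDING_BOUND = 2  # an assumed performance limit, not a rule of grammar

def max_depth(sentence: str) -> int:
    depth = deepest = 0
    for ch in sentence:
        if ch == "[":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "]":
            depth -= 1
    return deepest

def acceptable(sentence: str) -> bool:
    """Grammatical by competence; acceptable only within the bound."""
    return max_depth(sentence) < EMBEDDING_BOUND

print(acceptable("the rat [the cat chased] ate the malt"))                   # True
print(acceptable("the rat [the cat [the dog chased] killed] ate the malt"))  # False
```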
null
null
null
null
{ "paperhash": [ "bobrow|a_phonological_rule_tester" ], "title": [ "A phonological rule tester" ], "abstract": [ "Theoretical and practical values of error coefficients useful in bounding the error in integrating periodic analytic functions with the trapezoidal rule are tabulated for various ranges of the parameters." ], "authors": [ { "name": [ "D. G. Bobrow", "J. Bruce Fraser" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "14433444" ], "intents": [ [] ], "isInfluential": [ false ] }
Problem: The paper discusses the relationship between computational linguistics (CL) and human behavior, specifically focusing on the distinction between descriptive and simulative models in behavioral sciences. Solution: The hypothesis of the paper is that in computational linguistics, the development of simulative models that aim to imitate human behavior in speech production and recognition will provide a more comprehensive understanding of linguistic performance compared to descriptive models that focus on competence alone.
638
0.001567
null
null
null
null
null
null
null
null
254cb5591e4ca173195d17217f1a525e41e8303a
245118156
null
The logic of English predicate complement constructions
The Logic of English Predicate Complement Constructions 1 Lauri Karttunen 0. INTRODUCTION. The title of my paper is an intentional variation on the name of Peter Rosenbaum's (1965) well-known MIT dissertation 'The Grammar of English Predicate Complement Constructions'. It is intended to be suggestive of a difference in emphasis between the early work on complement constructions by Rosenbaum and others, and the more recent studies by Paul & Carol Kiparsky, George Lakoff, Jerry Morgan, and myself -just to mention a few. 2 It is these newer developments that I will discuss in my report.
{ "name": [ "Karttunen, Lauri" ], "affiliation": [ null ] }
null
null
Feasibility Study on Fully Automatic High Quality Translation
1971-12-01
10
24
null
In the appendix to this thesis, Rosenbaum provided a classification of English verbs in terms of the complement structures in which the verbs may participate. His analysis of complementation has since been challenged, and the basic criteria for his classification have now generally been rejected. 3 But of course, the general principle of classifying verbs in terms of their syntactic properties continues to be valid. For example, it must be stated somewhere in the lexicon that verbs like order and force take sentential complements only in the presence of a real noun phrase object, but believe and realize can have complements as their objects. Or, if you prefer another terminology, realize is a two-place and force a three-place predicate. On the basis of such simple criteria, one might arrive at the conclusion that the verbs listed in (1) divide naturally into the four groups which are indicated there. (1) (a) order (x, y, S) For instance, on syntactic grounds there are good reasons for regarding the verbs happen and seem as similar, since they both take sentential subjects and undergo many of the same syntactic transformations. In selecting these examples in (1), I have not been quite as arbitrary as it first appears. It does not take long to notice that just those verbs which here fall into the same class on the basis of some superficial syntactic criteria turn out to be different when the same verbs are grouped on the basis of their semantic properties. At this point, you might take a look at the classification in (2), which gives a preview of what is to come, and compare it with (1). (2) FACTIVES: realize, odd; IMPLICATIVES: manage, happen; IF-VERBS: force, certain; ONLY-IF VERBS: able, possible. Sometimes it is possible to show that there is a definite connection between the semantic properties of a verb and certain syntactic characteristics. For instance, it has been observed (Kiparsky 1968) that all of the factive verbs of the type (1d) are exceptions to the transformation that relates (3a) and (3b). Therefore, (3d) is ungrammatical. (3) (a) It was certain that Bill was alone. (b) Bill was certain to be alone. (c) It was odd that Bill was alone. (d) *Bill was odd to be alone. However, I do not believe that the validity of the proposed classification crucially depends on us being able to find syntactic parallels for every distinction; and here I will not try to present any. For the purpose at hand, it is sufficient to demonstrate their semantic reality, to show that they actually play a part in our everyday reasoning. 1. FACTIVE VERBS. The term 'factive verb' is due to a pioneering study by Paul and Carol Kiparsky (1968).
4 An illustrative sample of these verbs is provided in (4). (4) FACTIVE VERBS: significant, resent, tragic, know, relevant, realize, odd, bear in mind, take into account, regret, make clear, ignore, find out. What is common to them is that any simple assertion with a factive predicate, such as (5a), commits the speaker to the belief that the complement sentence, just by itself, is also true. (5) (a) It is odd that Bill is alone. (b) Bill is alone. (c) It is possible that Bill is alone. It would be insincere for anyone to assert (5a) if he did not believe that (5b) is true. Intuitively, in uttering (5a) the speaker must take it for granted that Bill is alone; he is making a comment about that fact. The same relation holds between (6a) and (6b). (6) (a) Mary realized that it was raining. (b) It was raining. (c) Mary believed that it was raining. Notice that these relations break down if we replace odd by possible and realized by believed. (5c) and (6c) do not carry a commitment to the truth of the complement sentence. With factive verbs, it does not make a difference whether the main sentence is affirmative or negative. The negations of (5a) and (6a), which you find in (7), also obligate the speaker to accept the complement as true. (7) (a) It isn't odd that Bill is alone. (b) Mary didn't realize that it was raining. Even the illocutionary force of the main sentence is irrelevant. The question in (8) carries along the same commitment as (5a) and (7a). This relation is usually described by saying that the complement of a factive predicate is a 'presupposition' for the sentence as a whole. The term 'presupposition' comes from logic but it is currently used in linguistics in a more general way than the common logical definition would actually allow. In logic, it is customary to give some definition such as (9). 5 (9) P presupposes Q iff T(P) → T(Q) and F(P) → T(Q) [ T(_) = '_ is true', F(_) = '_ is false' ] That is, P presupposes Q just in case Q is true whenever P has a truth value. However, this definition in terms of truth values is not very helpful to linguists. They tend to rely on a more or less intuitive notion of presupposition, which I have tried to explicate in (10) - rather unsuccessfully, I must say. 6 (10) P presupposes Q just in case that if P is asserted, denied, or questioned then the speaker ought to believe that Q. 1.2. POSSIBLE WORLDS. In his paper on presuppositions, Jerry Morgan (1969) pointed out that there are sentences such as the examples in (11). (11) (a) If I had missed the train, I would have regretted it. (b) I dreamed that I was a German and that nobody realized it. The problem with these examples is that, in both cases, the speaker apparently does not believe that the complement of the factive verb is true. In (11a), the pronoun it stands for the sentence 'that I had missed the train'. Since regret is a factive verb, the second clause of (11a) presupposes that the speaker has missed the train. However, this is just what is denied by the preceding counterfactual conditional. According to what we just said about factive verbs, (11a) ought to be self-contradictory. Similarly, (11b) ought to imply that the speaker believes that he is a German, even when he is not dreaming. Both of these predictions are clearly wrong. On the other hand, the examples in (12), which are very similar to those in (11), pose no problems at all. (12) (a) If I had regretted that I missed the train, I would not have mentioned it.
(b) I dreamed that nobody realized that I was a German. (12a) can be sincerely asserted only by someone who believes that he has missed the train; in (12b), the speaker must believe that he really is a German. The crucial difference between (11) and (12) is that, in (12a), the sentence with a factive predicate is the antecedent clause of a counterfactual conditional construction and, in (12b), it is the first sentence following the verb dream. Morgan concludes from examples of this sort that the conditional if, the word dream and all similar verbs are to be regarded as 'world-creating' predicates. A sentence in the scope of a world-creating predicate is assumed to be true, not in the actual world, but in a 'possible world'. 7 A possible world receives its characterization in the usual left-to-right order of discourse. For instance, in (11b) the first sentence following the verb dream, 'I was a German', is understood to be a fact in the context of my dream world; therefore, it can stand as a presupposition for the following sentence, 'nobody realized that I was a German', which also is in the scope of dream. Similarly, in (11a) the antecedent clause of the conditional construction, 'I had missed the train', defines a possible world in which it may then also be true that I regret that fact. This analysis explains the difference between the examples in (11) and (12). In (12b), the complement of realize has not been established as a fact of the dream world; therefore, it ought to be a fact in the actual world of the speaker. (12b) can only be said by someone who believes that he is a German. In (11b), the complement is introduced as a fact in a dream. It does not matter if the speaker does not believe it to be true in the actual world. I don't intend to try to give any formal account of how possible worlds ought to be incorporated into a theory of language. I don't think that there is, at this point, much to be said about it beyond the kind of suggestive remarks that I have presented. This is an area where there is bound to be some exchange of ideas between linguists and modal logicians, who have traveled in possible worlds far more extensively than we have. But neither linguists nor philosophers have actually been thinking about sentences like those in (11) for very long. 1.3. DEGREES OF FACTIVITY. Another outstanding problem is that some of the factive verbs in (4) do not carry along the expected presupposition in all syntactic environments. For example, there is an unexplained difference between verbs like regret and realize in conditional clauses. Although both verbs are factive as far as simple assertions are concerned, if-clauses with realize as predicate do not presuppose the truth of the complement. Consider the difference between (13a) and (13b). 13 However, notice that the adverbial modifiers of the main sentence, yesterday in (15a) and the phrase to everyone's surprise in (15b), by implication also seem to belong to the complement sentence. Another striking difference between factive and implicative verbs shows up in negative assertions. This can be observed by comparing the examples in (18) with those in (7). As you remember, in the case of factives, negation in the main sentence has no effect on the assumed truth of the complement. But when a sentence with an implicative predicate is negated, it commits the speaker to the view that the complement is false.
For instance, one cannot sincerely assert (18a) unless one believes (19a). (18) (a) Sheila didn't bother to come. (b) Max didn't have the foresight to stay away. (19) (a) Sheila didn't come. (b) Max didn't stay away. It would be contradictory to say something like (20). (20) *Sheila didn't bother to come, but she came nevertheless. Similarly, (18b) implies (19b). 2.1. IMPLICATION. In saying that (18b) implies (19b), I am not using the term 'imply' in the sense of 'logically implies' or 'entails'. The relation is somewhat weaker, as indicated by the definition in (21). (21) P implies Q iff whenever P is asserted, the speaker ought to believe that Q. I believe this to be the same sense in which J. L. Austin (1962) has used the term. It is also closely related to B. C. Van Fraassen's (1968) notion of 'necessitation'. 8 Note that, for our weak sense of 'imply', the rule of inference known as 'Modus Tollens' does not apply. It is not required in (21) that asserting ~Q should, in turn, obligate the speaker to believe that ~P. The reason why this point is worth making is that Modus Tollens is a valid argument form for the two other common senses of the term 'imply', 'materially implies' and 'logically implies', which we do not want to get mixed up with. Using the term in the sense of (21), we can say that (22a) implies (22b). (22) (a) John managed to kiss Mary. (b) John kissed Mary. But it would be mistaken to conclude from this, by Modus Tollens, that the negation of (22b) implies the negation of (22a); in other words, that (23a) also implies (23b). (23) (a) John didn't kiss Mary. (b) John didn't manage to kiss Mary. If you contemplate for a while the two sentences in (23), you will soon realize that one can perfectly well assert (23a) without committing oneself to the belief that (23b) is true. The verb manage in (23b) carries along an extra assumption that is not shared by (23a). It would be appropriate to use (23b) only if John had actually made an unsuccessful attempt to kiss Mary. Therefore, these two sentences are not logically equivalent; the implication only holds in one direction, from (23b) to (23a) and from (22a) to (22b). 2.2. MEANING POSTULATES. Let us now consider the problem how these facts about implicative verbs ought to be accounted for. One might, for example, propose that the semantic representation of (15a) actually contains the implied sentence, (16a), as a subpart. If one is a generative semanticist, one might even assume that (15a) be transformationally derived from some structure that properly includes the underlying structure of (16a). Under this proposal, there would be no distinction between the semantic representation of a single sentence and the set of inferences derivable from it; the two notions would be equivalent. 9 This is not the approach that I have chosen. Instead, I assume that the implied sentence is not included in the underlying representation of its antecedent but is to be derived from it by means of meaning postulates and general rules of inference. I have proposed (Karttunen 1970a) that the facts about implicative verbs be accounted for in the following manner. What all verbs such as manage, bother, etc. have in common is that they are understood to represent some necessary and sufficient condition which alone determines whether the event described in the complement takes place. They all have the same two meaning postulates associated with them.
10 Using v for any arbitrary implicative verb and S for its complement, we can represent these two meaning postulates roughly as in (24). (24) (a) v(S) → S 'v(S) is a sufficient condition for S' (b) ~v(S) → ~S 'v(S) is a necessary condition for S' What actually constitutes this decisive condition depends on the particular implicative verb. It may consist of making a certain effort, as in bother, showing enough skill and ingenuity, as in manage, or it may be a matter of chance, as in happen. A sentence with one of these verbs as predicate can be looked upon as a statement about whether this decisive condition is fulfilled, and under what spatial and temporal circumstances this is the case. From an affirmative assertion, we can then infer that the complement is true; from a negative assertion that the complement is false. The rule of inference I am assuming here is, of course, the familiar Modus Ponens, which is illustrated in (25). Therefore, (26b) can be derived in all cases as a legitimate inference in the manner illustrated in (25b) above. 2.3. NEGATIVE IMPLICATIVES. Next I would like to point out a group of verbs that are in every other respect like the implicative verbs in (14) except that they work the opposite way. A short list of these negative implicatives is given in (27). There are in principle two ways to account for these facts in our analysis. One way is to say that we have a separate pair of meaning postulates for negative implicative verbs. This set would be the pair given in (30). (30) (a) v(S) → ~S 'v(S) is a sufficient condition for ~S' (b) ~v(S) → S 'v(S) is a necessary condition for ~S' The other possibility is to assume that negative implicatives in fact contain negation in their underlying syntactic structure and that there is a process of lexical insertion that can replace some ordinary implicative verb and the preceding negation marker with one of the verbs in this special class. For instance, there would be rules such as (31), which says that the verb fail, in one of its senses, is equivalent to not succeed. This equivalency may then be interpreted as permission to substitute fail for not succeed in some underlying syntactic structure. 2.4. SPECIAL CASES. In addition to the verbs listed in (14) and (27), there are of course many other implicative verbs. After one becomes aware of their existence, they are not hard to catch. There are some that are especially interesting. For instance, the words true and false, at least in their everyday sense, are implicative. They would, in fact, be the best example to use, if one wanted to argue that negative implicatives are to be defined in terms of positive ones. Nobody but a three-valued logician would refuse to accept the word false as the equivalent of not true. Another implicative word is the noun fact, which is not factive, as one might expect from the name. For that reason, it may be appropriate at this point to sound a warning and say that the verb imply, in turn, is not implicative. On one hand, it is a factive verb; on the other hand, it may also be a member of another category that we have not discussed yet: the if-verbs. 3. IF-VERBS AND ONLY-IF VERBS. The next two classes of verbs also give rise to implicative relations, although in a less perfect fashion than implicative verbs proper. What is common to both of these types is a kind of asymmetry between negative and affirmative sentences, so that the implication holds only in one of them.
It appears to me that these verbs are associated with only one of the two meaning postulates in (24). Verbs of one group express a sufficient condition for the truth of the complement. For that reason - and for the sake of brevity - I refer to them as 'if-verbs'. Verbs in the other group express a necessary condition; they are the 'only-if-verbs'. Later on, I will sometimes refer to if-verbs and only-if-verbs jointly as 'one-way implicatives' in order to distinguish them from 'two-way implicatives' discussed above, that is, from verbs which yield an implication both in negative and in affirmative assertions.
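Since the meaning postulates in (24) and (30) carry all of the inferential weight in what follows, it may be useful to restate them programmatically. This is only an expository sketch; the verb spellings, the table encoding, and the function name are my assumptions, not the paper's. Each entry maps the polarity of the main assertion to what one step of Modus Ponens licenses about the complement, with None for 'non-committal'; nothing in the table corresponds to Modus Tollens, so from ~S alone nothing follows about v(S).

```python
# Meaning postulates (24) and (30) as a table. For a verb v and a main
# sentence asserted affirmatively (True) or negatively (False), the entry
# is what follows about the complement S by Modus Ponens:
# True = S, False = ~S, None = no implication (one-way verbs only).

POSTULATES = {
    "manage":  {True: True,  False: False},  # (24a) and (24b): two-way
    "fail":    {True: False, False: True},   # (30a) and (30b): negative two-way
    "force":   {True: True,  False: None},   # (24a) only: if-verb
    "be_able": {True: None,  False: False},  # (24b) only: only-if verb
}

def complement_implication(verb, affirmative):
    """One Modus Ponens step, as illustrated in (25)."""
    return POSTULATES[verb][affirmative]

# (22a) "John managed to kiss Mary" implies (22b) "John kissed Mary":
print(complement_implication("manage", True))   # True
# An affirmative only-if verb is non-committal, cf. (52b):
print(complement_implication("be_able", True))  # None
```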
null
null
null
The set of if-verbs includes those in (32). In all of the a-sentences, the speaker is committed to the belief that Mary stayed home. It would not be honest to assert any of the sentences in (33a) if one thought otherwise. This fact distinguishes the verbs in (33a) from such syntactically very similar verbs as those in (33b). It is clear that none of the sentences in (33b) has a definite implication one way or the other. On the other hand, in negative assertions, the difference between if-verbs and those in (33b) disappears entirely. In (34), force and order are just alike; both are equally noncommittal with respect to the complement sentence. Thus far I have only discussed if-verbs which take infinitive complements. But in general there appears to be no connection between the semantic properties of a verb and the syntactic type of complement clause it takes. Just as there are factive verbs with infinitive complements, such as wise and proud, there are also if-verbs which take that-complements; for example, bring about, see to it, and make sure. That these verbs really are if-verbs and not factives can be shown by pointing out that (35) can be asked felicitously by someone who does not know whether Mary got what she wanted. It is interesting to notice that all the clear if-verbs seem to be, in some intuitive sense, causative verbs. It would be very interesting to find some clear cases of noncausative if-verbs, but all the likely candidates that I have come across appear to involve some additional complications. For example, consider the word certain. There is no doubt that certain is an if-verb in constructions like (36a). (36) (a) It is certain that Sheila left with Max. (b) Bill is certain that Sheila left with Max. Surely, it would be dishonest to say (36a) if you did not believe that Sheila left with Max. But it is also clear that certain is not an if-verb in (36b). It seems likely that, in addition to the complement clause, the verb certain always involves another underlying noun phrase, in Fillmore's terms, an 'experiencer'. 11 This noun phrase may remain unexpressed if it is identical with the speaker, as in (36a). The verb certain does not count as an if-verb unless the experiencer and the speaker are the same person. The same problem shows up in verbs like mean and imply, as you can observe from the examples in (37). (37) (a) That the grass is wet implies that it has been raining. (b) For Bill, it means that somebody has watered the lawn. In (37a), the speaker commits himself to the view that it has been raining. But (37b), where the experiencer is not identical with the speaker, is non-committal with regard to the complement. Another fact about these verbs is that, as far as the subject complement is concerned, they are factive. (37a) and (37b) both presuppose that the grass is wet. Because of these complications, it is not clear whether these verbs should really be regarded as if-verbs at all. Another interesting case is the verb prove. Unlike the verbs just mentioned, prove meets the criteria for if-verbs no matter who the 'experiencer' is.
All of the examples in (38) imply the truth of the complement. (38) (a) Bill proved to me that Max was a liar. (b) Bill proved to Sally that Max was a liar. (c) That there is no money in the bag proves that Max is a liar. On the other hand, the corresponding negative assertions are non-committal. (39) That there is no money in the bag doesn't prove that Max is a liar; perhaps he is, perhaps he isn't. As far as these data are concerned, there is no reason not to consider prove as an if-verb. However, it is also possible to account for just the same facts by a more complex analysis of prove. Let us assume that prove is associated with the meaning postulate in (40), in which the consequent consists of a causative sentence with a factive complement. The fact that all the examples in (38) imply their complement can now be explained by the combined effect of cause and know. For example, given the meaning postulate in (40), (38b) implies (41a), which in turn implies (41b). The latter sentence has a factive predicate; therefore, it presupposes (41c), which is the desired inference. On the other hand, the fact that (39) is non-committal with respect to (41c) is explained by the fact that, since cause yields no implication in a negative assertion, one cannot infer from (39) either that I know Max to be a liar or that I don't know that he is. The same type of analysis can also be applied to verbs like indicate, show, etc. Assuming that such verbs are analyzed roughly as in (42), we can explain some of the puzzling facts mentioned earlier. The fact that, in (43), the identity of the 'experiencer' determines whether or not the implication holds can be attributed to the fact that the complement of cause in (42) contains a non-factive verb. For this reason, (43b) only implies that Sally believes Max to be a liar; it is non-committal as far as the speaker is concerned. Like their positive classmates, negative if-verbs carry along a commitment with regard to the complement in affirmative assertions. The difference is that the complement is implied to be false. For example, (45a) definitely implies that Mary did not leave. On the other hand, a negative assertion such as (46a) is noncommittal. It is compatible with either one of the two continuations in (46b). It is this fact which distinguishes prevent from avoid and other such two-way implicatives listed in (27). They are committal even in negative assertions. (46) (a) John didn't prevent Mary from leaving. (b) ... and she left. / ... but she chose not to leave. Negative if-verbs bring up the same problem as negative implicatives. In principle, there are three ways to account for their negative properties. One way is to postulate for them the first of the two meaning rules in (30): (30a) v(S) → ~S 'v(S) is a sufficient condition for ~S'. The other possibility is by way of lexical insertion rules that replace some piece of underlying syntactic structure including a negation marker by one of the verbs in (44). This alternative has been proposed by George Lakoff (1969). It is easy to see, for instance, that we could account for the negative implication of discourage by defining it as in (47a). It is doubtful whether there is any conclusive argument for choosing between the last two alternatives. However, note that (47b*) makes a weaker claim than its predecessor.
Unlike a Lakoff-type insertion rule, it is not open to objections which are based on the claim that the transformationally inserted lexical item is not really synonymous with its supposed paraphrase. Instead of trying to settle the issue here, I will simply assume that negative if-verbs are associated with the meaning postulate (30a), which is also shared by avoid and other similar two-way implicatives. 3.12. OTHER IMPLICIT CAUSATIVES. One interesting side result from the study of if-verbs is that it lends some new support to the so-called 'causative analysis' of verbs like kill and break. James D. McCawley (1969) and others have proposed that such verbs should not be treated as unanalyzed lexical items in underlying syntactic representations. Instead, they should be inserted transformationally by a rule that replaces a subtree in which cause is the topmost predicate. According to this view, the underlying structure of kill is roughly as in (48). (48) kill → cause to become not alive. Since cause is an if-verb, it follows from this analysis that kill should also belong to this semantic category. As the following example shows, this prediction seems to be in agreement with our intuitive judgements. An affirmative assertion with kill as predicate implies that the person referred to by the object NP dies (i.e. 'becomes not alive'). Thus (49a) implies (49b). In (51a) and (51b), the speaker is committed to the view that Sebastian did not leave. It would be contradictory to continue either sentence with (51c). This fact indicates that the verbs in (50) express a necessary condition for the truth of the complement. That is, they are associated with the second meaning postulate in (24), namely (24b) ~v(S) → ~S 'v(S) is a necessary condition for S'. Given this meaning postulate, we can infer from a negative assertion like (51a) and (51b) that the complement is implied to be false. In the corresponding affirmative assertions, however, there is no definite implication one way or the other. The two examples in (52) are both compatible with the continuation in (52c). (52) (a) Sebastian had an opportunity to leave the country. (b) Sebastian was able to leave the country. (c) ... but he chose not to do so. Therefore, the verbs in (50) are not two-way implicatives; they do not express a sufficient condition for the truth of the complement. It is perhaps worth pointing out that there are at least three semantically different groups of predicates that all appear in the same surface construction, have the X (to). Some of them are full two-way implicatives like have the foresight and have the misfortune, which we encountered in (14); those in (50) are only one-way implicatives. The third class consists of predicates which do not carry along any implication at all with respect to the complement sentence. A sample of them is given in (53). It is easy to see that a negative assertion with any of these verbs as predicate is non-committal. Unlike the similar examples in (51), (54) leaves open the possibility that Sebastian may have left anyway. (54) Sebastian did not have permission to leave the country. 3.21. NEGATIVE ONLY-IF-VERBS. Since there are both negative two-way implicatives and negative if-verbs, one expects to find some negative only-if-verbs as well. A verb of this sort would be like be able and other positive only-if-verbs in the respect that it would yield a definite implication only in negative assertions. However, the implication must be of the opposite kind, that is, a positive implication.
These verbs would be associated with the second meaning postulate in (30), namely (30b) ~v(S) → S 'v(S) is a necessary condition for ~S'. On the other hand, affirmative assertions with such a verb as predicate should be non-committal. The class of verbs which have the desired properties appears very small. The only verb I know of which certainly is a negative only-if-verb is the word hesitate. 12 Consider the following example. (55) (a) Bill did not hesitate to call him a liar. (b) Bill called him a liar. Whoever asserts (55a) commits himself to (55b). However, the corresponding affirmative assertion, (56a), is noncommittal. It is compatible with either one of the two continuations in (56b). (56) (a) Bill hesitated to call him a liar. (b) Therefore, he didn't say anything. / ... but his conscience forced him to do so. That is, hesitate is not a two-way implicative like avoid. There is no obvious reason why hesitate should be the only verb of its kind, but thus far I have not found any other negative only-if-verbs. Note that hesitate and prevent, which is a negative if-verb, both share one of the two meaning postulates in (30), which jointly account for the semantics of two-way implicatives such as avoid. These three verbs stand in the same relation to each other as their corresponding positive counterparts be able, cause, and manage, which share the meaning postulates in (24). As we mentioned above, it may be possible to eliminate the class of negative if-verbs, such as prevent, with the help of their positive classmates by regarding them as replacements for structures like cause not to be able. If this method were applicable to all negative implicatives, there would be no need for the second pair of meaning postulates in (30). However, it is doubtful whether verbs like avoid and hesitate can be lexically decomposed in a similar manner. Therefore, I will assume for the time being that the two sets of meaning postulates, (24) and (30), are both needed. 4. APPLICATIONS. I have now introduced six categories of implicative verbs: two types of two-way implicatives and four types of one-way implicatives. Most of the examples thus far have been very simple sentences with no more than one level of embedding. It is now time to look at some more complicated cases, in which verbs of different types alternate with negation in the same complex sentence. We should check that the semantic relations predicted by our analysis continue to agree with our intuitive judgements. Consider first the example in (57). (57) (a) Bill saw to it that the dog did not have an opportunity to run away. (b) The dog did not have an opportunity to run away. (c) The dog did not run away. Since (57a) is an affirmative assertion and has an if-verb as predicate, it implies (57b). This is a negative sentence with an only-if-predicate; therefore, it implies the negation of its own complement, which is (57c). Thus there is a chain of implications from (57a) to (57c). Since the notion 'implies' obviously is a transitive relation, (57a) should imply that the dog did not run away. Now, look at another configuration of the same verbs in (58). (58) (a) Bill had an opportunity to see to it that the dog did not run away. (b) Bill saw to it that the dog did not run away. (c) The dog did not run away. Since have an opportunity is an only-if-predicate, although (58a) is an affirmative assertion, it does not imply the truth of its complement sentence, which is (58b). If (58b) were itself implied to be true, it would in turn imply (58c).
But since (58b) is not implied by (58a), there is no chain of implications that would link (58a) with its lowest embedded sentence. Therefore, (58a) should not commit the speaker to any view whatever about the dog. It seems clear that the predicted semantic relations in these and other similar cases turn out to match our intuitive judgements. Incidentally, note that the example in (59) carries along the same implication as (57a). (59) John prevented the dog from running away. Note also that the negations of (57a) and (59) are equally non-committal with regard to the truth of the complement. Negative if-verbs, such as prevent, are in this respect equivalent to the configuration: if-verb ... negation ... only-if-verb. It is this fact which makes it possible to propose that they be introduced by a transformation. As a final example, consider the sentence in (60). (60) Bill did not have the foresight not to force Mary to prevent Sheila from having an opportunity to try that new detergent. The question is whether (60) is non-committal with respect to the truth of its lowest embedded clause or whether one is justified in inferring from it that Sheila either tried or did not try the new detergent. Although most people at first do not feel sure one way or the other, it does not take long to discover that (60) must mean that she did not try it. We can show this formally in the following way. Let us represent (60) schematically as (61), where V1 = have the foresight, V2 = force, V3 = prevent, V4 = have an opportunity, and S = Sheila tries the new detergent. (61) ~V1(~V2(V3(V4(S)))) Assuming that the verbs in question have the semantic properties that we have assigned to them, it can be shown that (61) yields the desired inference. In the following, the number on the right of each line refers to the meaning postulate that was used in deriving that line from the preceding one. (62) (a) ~V1(~V2(V3(V4(S)))) [= (61)] (b) ~~V2(V3(V4(S))) - (24b) (c) V2(V3(V4(S))) - Law of Double Negation (d) V3(V4(S)) - (24a) (e) ~V4(S) - (30a) (f) ~S - (24b) The last line of (62) indicates that, according to the proposed analysis, (60) implies (63). (63) Sheila did not try that new detergent. The present example may well be too complicated for some speakers to understand. However, it seems that, as far as people have any intuitions at all about its meaning, their judgements support the proposed analysis. have not yet been accounted for. Consider the example in (64a). (64) (a) John's wooden leg didn't keep him from dancing with Mary. (b) John danced with Mary. If one reads (64a) in isolation without thinking too much about it, one is very likely to get the impression that John danced with Mary, in spite of his wooden leg. However, a more careful analysis of (64a) shows immediately that this sentence does not imply (64b). As a negative if-verb, keep (from) should yield an inference only in affirmative assertions. Since (64a) is a negative assertion, it should be non-committal, as far as (64b) is concerned. This is certainly not a false prediction, as shown by the fact that (64a) can, without any contradiction, be embedded into a context where it is made clear that John did not dance with Mary. For example, (64a) can be expanded to (65). (65) John's wooden leg didn't keep him from dancing with Mary, but her husband did. Nevertheless, in the absence of any contrary evidence, (64a) seems to suggest that John danced with Mary.
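The chain-of-implication reasoning in (57), (58), and (62) can be mechanized. The sketch below is my own illustration, not anything from the paper: the tuple encoding, the lexicon entries, and the function name are all assumptions. Each verb's entry records what an affirmative (True) or negated (False) occurrence implies about its complement, with None for 'non-committal'; the implied truth value is then pushed inward one Modus Ponens step at a time, exactly as in steps (a) through (f) of (62).

```python
# Sentences are nested tuples: ("not", X), (verb, X), or the atom "S".
LEXICON = {
    "have_the_foresight":  {True: True,  False: False},  # two-way: (24a), (24b)
    "force":               {True: True,  False: None},   # if-verb: (24a)
    "prevent":             {True: False, False: None},   # negative if-verb: (30a)
    "have_an_opportunity": {True: None,  False: False},  # only-if verb: (24b)
}

def implied(sentence, value=True):
    """Implied truth value of the innermost S, given that `sentence`
    is asserted with truth value `value`; None = non-committal."""
    if sentence == "S":
        return value
    op, rest = sentence
    if op == "not":                # the Law of Double Negation falls out here
        return implied(rest, not value)
    step = LEXICON[op][value]      # one Modus Ponens step on a meaning postulate
    if step is None:
        return None                # no chain of implications, as in (58)
    return implied(rest, step)

# (60), encoded: Bill did not have the foresight not to force Mary to
# prevent Sheila from having an opportunity to try that new detergent.
sixty = ("not", ("have_the_foresight",
         ("not", ("force", ("prevent", ("have_an_opportunity", "S"))))))
print(implied(sixty))  # False: (63) Sheila did not try that new detergent.
```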
The following example is similar. Since force is an if-verb and it occurs here in a negative assertion, (66a) should be non-committal with respect to (66b). (66) (a) Bill did not force Mary to change her mind. (b) Mary did not change her mind. However, it seems that there is a temptation to conclude (66b) from (66a) if no further information is given. The same phenomenon shows up with only-if-verbs. If there is no particular reason to believe otherwise, most people will take (67a) to mean that John in fact left early. (67) (a) John was able to leave early. (b) John left early. Again, (67a) should be non-committal. Since be able is classified as an only-if-verb, it yields an implication only in a negative assertion. Why should it be that, although (67a) does not logically imply (67b), it nevertheless strongly suggests that (67b) is true? Here, as in the two preceding examples, a one-way implicative predicate invites one to draw a conclusion which would logically follow only from a two-way implicative verb. That is, in concluding (67b) from (67a) one interprets be able as if it were a verb like manage. It is very likely that this problem is another manifestation of a principle which Michael Geis and Arnold Zwicky (1970) have discussed in connection with conditional sentences. As Geis and Zwicky point out, there is a natural tendency in the human mind to perfect conditionals to biconditionals. Students in an elementary logic course often propose that examples such as (68) are to be formalized as biconditionals rather than conditionals. (68) If you mow the lawn, I'll give you five dollars. Thus, most people feel that the appropriate logical form of statements like (68) is the conjunction of (69a) and (69b). (69) (a) S1 → S2 (b) ~S1 → ~S2 This is not quite right since (69a) alone is enough. However, it is clear that in a great majority of cases where a conditional like (68) is uttered, the corresponding statement of the form (69b) is also tacitly assumed. In natural language, (68) suggests rather strongly that, if you don't mow the lawn, I won't pay you five dollars. What would be the point in stating a condition which was not a necessary condition for the truth of the consequent? According to the principle proposed by Geis and Zwicky, any assertion of the form (69a) suggests, or "invites the inference", that the corresponding statement of the form (69b) is also true. However, this is only an "invited inference" and the speaker may indicate that it does not hold without thereby contradicting himself. This is the case in (70). The only thing that is odd about (70) is that it makes one wonder why anyone would bother to set a condition which is not a necessary one. (70) may be pointless but it is not contradictory. Similarly, we can say that, although an if-verb, such as force in the example (66a), strictly speaking is associated only with the meaning postulate (24a) v(S) → S, it also "invites" the corresponding negative meaning postulate (24b) ~v(S) → ~S. This explains why (66a) suggests (66b), although it does not actually imply (66b). On the other hand, an only-if-verb like be able, which is associated with the meaning postulate (24b) ~v(S) → ~S, "invites" (24a) v(S) → S. This is the reason for the temptation to conclude (67b) from (67a). Something like the Geis-Zwicky principle is clearly involved in the general tendency to understand one-way implicatives as full two-way implicatives, unless the context makes it necessary to interpret them more strictly. 14
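The Geis-Zwicky pattern can be stated as a defeasible default on top of the postulate tables above. The encoding below is speculative and mine alone: when a one-way implicative is strictly non-committal, the invited inference supplies the missing half of the postulate pair (24) or (30), and context may cancel it without contradiction, as in (65) and (70).

```python
# Invited inference as a defeasible default: a one-way implicative verb
# "invites" the missing meaning postulate of its pair, (24) or (30).

def invited_inference(entry, affirmative, cancelled=False):
    """entry: a postulate row such as {True: None, False: False}.
    Returns the strict implication if there is one; otherwise the
    invited one, unless the context has cancelled it."""
    strict = entry[affirmative]
    if strict is not None or cancelled:
        return strict
    committed = entry[not affirmative]  # the half the verb does assert
    # The invited postulate mirrors the committed one: opposite
    # main-sentence polarity, opposite complement value.
    return None if committed is None else not committed

BE_ABLE = {True: None, False: False}   # only-if verb, (24b)
# (67a) "John was able to leave early" invites (67b) "John left early":
print(invited_inference(BE_ABLE, True))                  # True
# ... but the inference is cancellable without contradiction, cf. (70):
print(invited_inference(BE_ABLE, True, cancelled=True))  # None
```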
6. SUMMARY. The following chart is a review of the semantic classes of verbs which have been discussed in this paper. The chart indicates under what circumstances a main sentence implies the complement or its negation in each of the seven categories. The '+' sign is used when a sentence is to be regarded as true; '-' is a symbol for a false sentence. The '+/-' sign means that a sentence may either be regarded as true or regarded as false. The variable 'α' may take either '+' or '-' as its value. It is used to indicate that the complement has the same truth value as the main sentence. A complement which has the opposite truth value with respect to the main sentence is marked with '-α'.

(72)
                                 MAIN SENTENCE   COMPLEMENT   EXAMPLE
FACTIVES                         +/-             +            realize
TWO-WAY IMPLICATIVES             α               α            manage
NEGATIVE TWO-WAY IMPLICATIVES    α               -α           fail
IF-VERBS                         +               +            force
                                 -               +/-
ONLY-IF VERBS                    +               +/-          be able
                                 -               -
NEGATIVE IF-VERBS                +               -            prevent
                                 -               +/-
NEGATIVE ONLY-IF VERBS           +               +/-          hesitate
                                 -               +

It is evident that logical relations between main sentences and their complements are of great significance in any system of automatic data processing that depends on natural language. For this reason, the systematic study of such relations, of which this paper is an example, will certainly have a great practical value, in addition to what it may contribute to the theory of the semantics of natural languages. It also seems to be the case that logical relations are also involved in a number of problems that have sometimes been regarded as purely syntactic. Two well-known examples of such phenomena are the constraints on coreference (Karttunen 1969) and the problem of polarity-sensitive lexical items (Baker 1970).

13 For the sake of simplicity, I treat all the verbs in (60) as if they were one-place predicates. As throughout this paper, I also ignore the problem how the correct tense is assigned to implied sentences. 14 This observation may also explain the alternation between and and but, in certain cases. For example, consider the example (46a) with its two alternative continuations. Since prevent is a negative if-verb, (46a) suggests, but does not imply, that Mary left. We get but instead of and as the conjunctive particle if the conjoined sentence cancels the suggested inference.
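Given the closing remark about automatic data processing, it is worth noting that chart (72) is itself a small data structure. A minimal sketch, assuming my own category names and using True/False/None in place of '+', '-', and '+/-' (the verb choices in the comments are the paper's illustrative examples):

```python
# Chart (72) as a lookup table. For each category, the entry gives the
# implied complement value under an affirmative (True) and a negated
# (False) main sentence; None encodes '+/-' (non-committal).

CHART = {
    "factive":                 {True: True,  False: True},   # realize, odd
    "two_way_implicative":     {True: True,  False: False},  # manage, bother
    "neg_two_way_implicative": {True: False, False: True},   # fail, avoid
    "if_verb":                 {True: True,  False: None},   # force, cause
    "only_if_verb":            {True: None,  False: False},  # be able
    "neg_if_verb":             {True: False, False: None},   # prevent
    "neg_only_if_verb":        {True: None,  False: True},   # hesitate
}

def complement(category, affirmative):
    return CHART[category][affirmative]

# "Mary didn't realize that it was raining" still presupposes the complement:
print(complement("factive", False))           # True
# "Bill did not hesitate to call him a liar" implies that he called him one:
print(complement("neg_only_if_verb", False))  # True
```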
Main paper: if-verbs.: The set of if-verbs includes those in (32). In all of the a-sentences, the speaker is committed to the belief that Mary stayed home. It would not be honest to assert any of the sentences in (33a) if one thought otherwise. This fact distinguishes the verbs in (33a) from such syntactically very similar verbs as those in (33b) . It is clear that none of the sentences in (33b) has a definite implication one way or the other.On the other hand, in negative assertions, the difference between if-verbs and those in (33b) disappears entirely. In (34), force and order are just alike; both are equally noncommittal with respect to the complement sentence. Thus far I have only discussed if-verbs which take infinitive complements. But in general there appears to be no connection between the semantic properties of a verb and the syntactic type of complement clause it takes. Just as there are factive verbs with infinitive complements, such as wise and proud, there are also if-verbs which take that-complements; for example, bring about, see to it, and make sure. That these verbs really are ifverbs and not factives can be shown by pointing out that (35) can be asked felicitously by someone who does not know whether Mary got what she wanted. It is interesting to notice that all the clear if-verbs seem to be, in some intuitive sense, causative verbs. It would be very interesting to find some clear cases of noncausative if-verbs, but all the likely candidates that I have come across appear to involve some additional complications. For example, consider the word certain. There is no doubt that certain is an if-verb in constructions like (36a).(36) (a) It is certain that Sheila left with Max.(b) Bill is certain that Sheila left with Max.Surely, it would be dishonest to say (36a) if you did not believe that Sheila left with Max. But it is also clear that certain is not an if-verb in (36b). It seems likely that, in addition to the complement clause, the verb certain always involves another underlying noun phrase, in Fillmore's terms, an 'experiencer '. 11 This noun phrase may remain unexpressed if it is identical with the speaker, as in (36a). The verb certain does not count as an if-verb unless the experiencer and the speaker are the same person.The same problem shows up in verbs like mean and imply, as you can observe from the examples in (37).(37) (a) That the grass is wet implies that it has been raining.(b) For Bill, it means that somebody has watered the lawn.In (37a), the speaker commits himself to the view that is has been raining. But (37b), where the experiencer is not identical with the speaker, is non-committal with regard to the complement.Another fact about these verbs is that, as far as the subject complement is concerned, they are factive. (37a) and (37b) both presuppose that the grass is wet. Because of these complications, it is not clear whether these verbs should really be regarded as if-verbs at all.Another interesting case is the verb prove. Unlike the verbs just mentioned, prove meets the criteria for if-verbs no matter who the 'experiencer' is. 
All of the examples in (38) imply the truth of the complement. (38) (a) Bill proved to me that Max was a liar. (b) Bill proved to Sally that Max was a liar. (c) That there is no money in the bag proves that Max is a liar. On the other hand, the corresponding negative assertions are non-committal. (39) That there is no money in the bag doesn't prove that Max is a liar; perhaps he is, perhaps he isn't. As far as these data are concerned, there is no reason not to consider prove as an if-verb.

However, it is also possible to account for just the same facts by a more complex analysis of prove. Let us assume that prove is associated with the meaning postulate in (40), in which the consequent consists of a causative sentence with a factive complement. The fact that all the examples in (38) imply their complement can now be explained by the combined effect of cause and know. For example, given the meaning postulate in (40), (38b) implies (41a), which in turn implies (41b). The latter sentence has a factive predicate; therefore, it presupposes (41c), which is the desired inference. On the other hand, the fact that (39) is non-committal with respect to (41c) is explained by the fact that, since cause yields no implication in a negative assertion, one cannot infer from (39) either that I know Max to be a liar or that I don't know that he is.

The same type of analysis can also be applied to verbs like indicate, show, etc. Assuming that such verbs are analyzed roughly as in (42), we can explain some of the puzzling facts mentioned earlier. The fact that, in (43), the identity of the 'experiencer' determines whether or not the implication holds can be attributed to the fact that the complement of cause in (42) contains a non-factive verb. For this reason, (43b) only implies that Sally believes Max to be a liar; it is non-committal as far as the speaker is concerned.

Like their positive classmates, negative if-verbs carry along a commitment with regard to the complement in affirmative assertions. The difference is that the complement is implied to be false. For example, (45a) definitely implies that Mary did not leave. On the other hand, a negative assertion such as (46a) is non-committal. It is compatible with either one of the two continuations in (46b). It is this fact which distinguishes prevent from avoid and other such two-way implicatives listed in (27). They are committal even in negative assertions. (46) (a) John didn't prevent Mary from leaving. (b) ... and she left. / ... but she chose not to leave.

Negative if-verbs bring up the same problem as negative implicatives. In principle, there are three ways to account for their negative properties. One way is to postulate for them the first of the two meaning rules in (30): (30a) v(S) → ~S 'v(S) is a sufficient condition for ~S'. The other possibility is by way of lexical insertion rules that replace some piece of underlying syntactic structure including a negation marker by one of the verbs in (44). This alternative has been proposed by George Lakoff (1969). It is easy to see, for instance, that we could account for the negative implication of discourage by defining it as in (47a). It is doubtful whether there is any conclusive argument for choosing between the last two alternatives. However, note that (47b) makes a weaker claim than its predecessor.
Unlike a Lakoff-type insertion rule, it is not open to objections which are based on the claim that the transformationally inserted lexical item is not really synonymous with its supposed paraphrase. Instead of trying to settle the issue here, I will simply assume that negative if-verbs are associated with the meaning postulate (30a), which is also shared by avoid and other similar two-way implicatives.

3.12. OTHER IMPLICIT CAUSATIVES. One interesting side result from the study of if-verbs is that it lends some new support to the so-called 'causative analysis' of verbs like kill and break. James D. McCawley (1969) and others have proposed that such verbs should not be treated as unanalyzed lexical items in underlying syntactic representations. Instead, they should be inserted transformationally by a rule that replaces a subtree in which cause is the topmost predicate. According to this view, the underlying structure of kill is roughly as in (48). (48) kill = cause to become not alive. Since cause is an if-verb, it follows from this analysis that kill should also belong to this semantic category. As the following example shows, this prediction seems to be in agreement with our intuitive judgements. An affirmative assertion with kill as predicate implies that the person referred to by the object NP dies (i.e. 'becomes not alive'). Thus (49a) implies (49b).

In (51a) and (51b), the speaker is committed to the view that Sebastian did not leave. It would be contradictory to continue either sentence with (51c). This fact indicates that the verbs in (50) express a necessary condition for the truth of the complement. That is, they are associated with the second meaning postulate in (24), namely (24b) ~v(S) → ~S 'v(S) is a necessary condition for S'. Given this meaning postulate, we can infer from a negative assertion like (51a) or (51b) that the complement is implied to be false. In the corresponding affirmative assertions, however, there is no definite implication one way or the other. The two examples in (52) are both compatible with the continuation in (52c). (52) (a) Sebastian had an opportunity to leave the country. (b) Sebastian was able to leave the country. (c) ... but he chose not to do so. Therefore, the verbs in (50) are not two-way implicatives; they do not express a sufficient condition for the truth of the complement.

It is perhaps worth pointing out that there are at least three semantically different groups of predicates that all appear in the same surface construction, have the X (to). Some of them are full two-way implicatives like have the foresight and have the misfortune, which we encountered in (14); those in (50) are only one-way implicatives. The third class consists of predicates which do not carry along any implication at all with respect to the complement sentence. A sample of them is given in (53). It is easy to see that a negative assertion with any of these verbs as predicate is non-committal. Unlike the similar examples in (51), (54) leaves open the possibility that Sebastian may have left anyway. (54) Sebastian did not have a permission to leave the country.

3.21. NEGATIVE ONLY-IF-VERBS. Since there are both negative two-way implicatives and negative if-verbs, one expects to find some negative only-if-verbs as well. A verb of this sort would be like be able and other positive only-if-verbs in the respect that it would yield a definite implication only in negative assertions. However, the implication must be of the opposite kind, that is, a positive implication.
These verbs would be associated with the second meaning postulate in (30), namely (30b) ~v(S) → S 'v(S) is a necessary condition for ~S'. On the other hand, affirmative assertions with such a verb as predicate should be non-committal. The class of verbs which have the desired properties appears very small. The only verb I know of which certainly is a negative only-if-verb is the word hesitate. 12 Consider the following example. (55) (a) Bill did not hesitate to call him a liar. (b) Bill called him a liar. Whoever asserts (55a) commits himself to (55b). However, the corresponding affirmative assertion, (56a), is non-committal. It is compatible with either one of the two continuations in (56b). (56) (a) Bill hesitated to call him a liar. (b) ... Therefore, he didn't say anything. / ... but his conscience forced him to do so. That is, hesitate is not a two-way implicative like avoid. There is no obvious reason why hesitate should be the only verb of its kind, but thus far I have not found any other negative only-if-verbs.

Note that hesitate and prevent, which is a negative if-verb, both share one of the two meaning postulates in (30), which jointly account for the semantics of two-way implicatives such as avoid. These three verbs stand in the same relation to each other as their corresponding positive counterparts be able, cause, and manage, which share the meaning postulates in (24). As we mentioned above, it may be possible to eliminate the class of negative if-verbs, such as prevent, with the help of their positive classmates by regarding them as replacements for structures like cause not to be able. If this method were applicable to all negative implicatives, there would be no need for the second pair of meaning postulates in (30). However, it is doubtful whether verbs like avoid and hesitate can be lexically decomposed in a similar manner. Therefore, I will assume for the time being that the two sets of meaning postulates, (24) and (30), are both needed.

4. APPLICATIONS. I have now introduced six categories of implicative verbs: two types of two-way implicatives and four types of one-way implicatives. Most of the examples thus far have been very simple sentences with no more than one level of embedding. It is now time to look at some more complicated cases, in which verbs of different types alternate with negation in the same complex sentence. We should check that the semantic relations predicted by our analysis continue to agree with our intuitive judgements. Consider first the example in (57). (57) (a) Bill saw to it that the dog did not have an opportunity to run away. (b) The dog did not have an opportunity to run away. (c) The dog did not run away. Since (57a) is an affirmative assertion and has an if-verb as predicate, it implies (57b). This is a negative sentence with an only-if-predicate; therefore, it implies the negation of its own complement, which is (57c). Thus there is a chain of implications from (57a) to (57c). Since the notion 'implies' obviously is a transitive relation, (57a) should imply that the dog did not run away.

Now, look at another configuration of the same verbs in (58). (58) (a) Bill had an opportunity to see to it that the dog did not run away. (b) Bill saw to it that the dog did not run away. (c) The dog did not run away. Since have an opportunity is an only-if-predicate, although (58a) is an affirmative assertion, it does not imply the truth of its complement sentence, which is (58b). If (58b) were itself implied to be true, it would in turn imply (58c).
But since (58b) is not implied by (58a), there is no chain of implications that would link (58a) with its lowest embedded sentence. Therefore, (58a) should not commit the speaker to any view whatever about the dog. It seems clear that the predicted semantic relations in these and other similar cases turn out to match our intuitive judgements.

Incidentally, note that the example in (59) carries along the same implication as (57a). (59) John prevented the dog from running away. Note also that the negations of (57a) and (59) are equally non-committal with regard to the truth of the complement. Negative if-verbs, such as prevent, are in this respect equivalent to the configuration: if-verb ... negation ... only-if-verb. It is this fact which makes it possible to propose that they be introduced by a transformation.

As a final example, consider the sentence in (60). (60) Bill did not have the foresight not to force Mary to prevent Sheila from having an opportunity to try that new detergent. The question is whether (60) is non-committal with respect to the truth of its lowest embedded clause or whether one is justified in inferring from it that Sheila either tried or did not try the new detergent. Although most people at first do not feel sure one way or the other, it does not take long to discover that (60) must mean that she did not try it. We can show this formally in the following way. Let us represent (60) schematically as (61), where V1 = have the foresight, V2 = force, V3 = prevent, and V4 = have an opportunity. (61) ~V1( ~V2( V3( V4( S )))) Assuming that the verbs in question have the semantic properties that we have assigned to them, it can be shown that (61) yields the desired inference. In the following, the number on the right of each line refers to the meaning postulate that was used in deriving that line from the preceding one.

(62) (a) ~V1( ~V2( V3( V4( S )))) [= (61)]
(b) ~~V2( V3( V4( S ))) - (24b)
(c) V2( V3( V4( S ))) - Law of Double Negation
(d) V3( V4( S )) - (24a)
(e) ~V4( S ) - (30a)
(f) ~S - (24b)

The last line of (62) indicates that, according to the proposed analysis, (60) implies (63). (63) Sheila did not try that new detergent. The present example may well be too complicated for some speakers to understand. However, it seems that, as far as people have any intuitions at all about its meaning, their judgements support the proposed analysis.
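The derivation in (62) can be mechanized. The sketch below is my own encoding in Python, not the paper's formalism: each verb's meaning postulates become a partial map from the truth value of v(S) to that of S, with None for the non-committal cases, and a value is percolated down the chain of embeddings in (61):

POSTULATES = {
    # verb:                 (S when v(S) is true, S when v(S) is false)
    "have the foresight":  (True,  False),  # two-way implicative: (24a) and (24b)
    "force":               (True,  None),   # if-verb: (24a) only
    "prevent":             (False, None),   # negative if-verb: (30a) only
    "have an opportunity": (None,  False),  # only-if-verb: (24b) only
}

def complement(verb, verb_value):
    if_true, if_false = POSTULATES[verb]
    return if_true if verb_value else if_false

def chain(clauses, sentence_value=True):
    """clauses: (negated, verb) pairs, listed from the outermost embedding inward."""
    value = sentence_value
    for negated, verb in clauses:
        if value is None:
            return None  # the chain of implications is broken; non-committal
        value = complement(verb, value != negated)  # undo any clause-internal negation
    return value

# (61): ~V1( ~V2( V3( V4( S ))))
sentence_60 = [(True, "have the foresight"), (True, "force"),
               (False, "prevent"), (False, "have an opportunity")]
print(chain(sentence_60))  # False, i.e. (63): Sheila did not try the detergent
# A (58a)-style configuration: an only-if-verb on top breaks the chain.
print(chain([(False, "have an opportunity"), (False, "force")]))  # None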
5. INVITED INFERENCES. There are certain important facts that have not yet been accounted for. Consider the example in (64a). (64) (a) John's wooden leg didn't keep him from dancing with Mary. (b) John danced with Mary. If one reads (64a) in isolation without thinking too much about it, one is very likely to get the impression that John danced with Mary, in spite of his wooden leg. However, a more careful analysis of (64a) shows immediately that this sentence does not imply (64b). As a negative if-verb, keep (from) should yield an inference only in affirmative assertions. Since (64a) is a negative assertion, it should be non-committal, as far as (64b) is concerned. This is certainly not a false prediction, as shown by the fact that (64a) can, without any contradiction, be embedded into a context where it is made clear that John did not dance with Mary. For example, (64a) can be expanded to (65). (65) John's wooden leg didn't keep him from dancing with Mary, but her husband did. Nevertheless, in the absence of any contrary evidence, (64a) seems to suggest that John danced with Mary.

The following example is similar. Since force is an if-verb and it occurs here in a negative assertion, (66a) should be non-committal with respect to (66b). (66) (a) Bill did not force Mary to change her mind. (b) Mary did not change her mind. However, it seems that there is a temptation to conclude (66b) from (66a) if no further information is given.

The same phenomenon shows up with only-if-verbs. If there is no particular reason to believe otherwise, most people will take (67a) to mean that John in fact left early. (67) (a) John was able to leave early. (b) John left early. Again, (67a) should be non-committal. Since be able is classified as an only-if-verb, it yields an implication only in a negative assertion. Why should it be that, although (67a) does not logically imply (67b), it nevertheless strongly suggests that (67b) is true? Here, as in the two preceding examples, a one-way implicative predicate invites one to draw a conclusion which would logically follow only from a two-way implicative verb. That is, in concluding (67b) from (67a) one interprets be able as if it were a verb like manage.

It is very likely that this problem is another manifestation of a principle which Michael Geis and Arnold Zwicky (1970) have discussed in connection with conditional sentences. As Geis and Zwicky point out, there is a natural tendency in the human mind to perfect conditionals to biconditionals. Students in an elementary logic course often propose that examples such as (68) are to be formalized as biconditionals rather than conditionals. (68) If you mow the lawn, I'll give you five dollars. Thus, most people feel that the appropriate logical form of statements like (68) is the conjunction of (69a) and (69b). (69) (a) S1 → S2 (b) ~S1 → ~S2 This is not quite right since (69a) alone is enough. However, it is clear that in a great majority of cases where a conditional like (68) is uttered, the corresponding statement of the form (69b) is also tacitly assumed. In natural language, (68) suggests rather strongly that, if you don't mow the lawn, I won't pay you five dollars. What would be the point in stating a condition which was not a necessary condition for the truth of the consequent? According to the principle proposed by Geis and Zwicky, any assertion of the form (69a) suggests, or "invites the inference", that the corresponding statement of the form (69b) is also true. However, this is only an "invited inference" and the speaker may indicate that it does not hold without thereby contradicting himself. This is the case in (70). The only thing that is odd about (70) is that it makes one wonder why anyone would bother to set a condition which is not a necessary one. (70) may be pointless but it is not contradictory.

Similarly, we can say that, although an if-verb, such as force in the example (66a), strictly speaking is associated only with the meaning postulate (24a) v(S) → S, it also "invites" the corresponding negative meaning postulate (24b) ~v(S) → ~S. This explains why (66a) suggests (66b), although it does not actually imply (66b). On the other hand, an only-if-verb like be able, which is associated with the meaning postulate (24b) ~v(S) → ~S, "invites" (24a) v(S) → S. This is the reason for the temptation to conclude (67b) from (67a). Something like the Geis-Zwicky principle is clearly involved in the general tendency to understand one-way implicatives as full two-way implicatives, unless the context makes it necessary to interpret them more strictly. 14
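The strengthening just described is easy to state in the same style as the earlier sketches. In the following illustration (mine, not the paper's; the tables merely restate the prose), the literal one-way reading is upgraded to the corresponding two-way reading unless the context cancels it, as (65) does:

STRICT = {                     # (affirmative main, negative main)
    "if-verb":               ("+", "+/-"),
    "only-if-verb":          ("+/-", "-"),
    "negative if-verb":      ("-", "+/-"),
    "negative only-if-verb": ("+/-", "+"),
}
PERFECTED = {                  # read as the corresponding two-way implicative
    "if-verb": ("+", "-"), "only-if-verb": ("+", "-"),
    "negative if-verb": ("-", "+"), "negative only-if-verb": ("-", "+"),
}

def reading(verb_class, affirmative, cancelled=False):
    column = 0 if affirmative else 1
    strict = STRICT[verb_class][column]
    if strict != "+/-":
        return strict          # a genuine implication; it cannot be cancelled
    if cancelled:
        return strict          # context like (65) blocks the strengthening
    return PERFECTED[verb_class][column]

print(reading("only-if-verb", True))                       # '+'  : (67a) invites (67b)
print(reading("negative if-verb", False))                  # '+'  : (64a) invites (64b)
print(reading("negative if-verb", False, cancelled=True))  # '+/-': as in (65)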
6. SUMMARY. The following chart is a review of the semantic classes of verbs which have been discussed in this paper. The chart indicates under what circumstances a main sentence implies the complement or its negation in each of the seven categories. The '+' sign is used when a sentence is to be regarded as true; '-' is a symbol for a false sentence. The '+/-' sign means that a sentence may either be regarded as true or regarded as false. A variable sign, which may take either '+' or '-' as its value, is used to indicate that the complement has the same truth value as the main sentence; a complement which has the opposite truth value with respect to the main sentence is marked with the negation of that sign.

[Chart (72), tabulating MAIN SENTENCE, COMPLEMENT, and EXAMPLE for each of the seven classes, is not recoverable from this transcription.]

It is evident that logical relations between main sentences and their complements are of great significance in any system of automatic data processing that depends on natural language. For this reason, the systematic study of such relations, of which this paper is an example, will certainly have great practical value, in addition to what it may contribute to the theory of the semantics of natural languages. It also seems that logical relations are involved in a number of problems that have sometimes been regarded as purely syntactic. Two well-known examples of such phenomena are the constraints on coreference (Karttunen 1969) and the problem of polarity-sensitive lexical items (Baker 1970).

For the sake of simplicity, I treat all the verbs in (60) as if they were one-place predicates. As throughout this paper, I also ignore the problem of how the correct tense is assigned to implied sentences.

14. This observation may also explain the alternation between 'and' and 'but' in certain cases. For example, consider the example (46a) with its two alternative continuations. Since prevent is a negative if-verb, (46a) suggests, but does not imply, that Mary left. We get 'but' instead of 'and' as the conjunctive particle if the conjoined sentence cancels the suggested inference.

The title of my report alludes to Rosenbaum's well-known MIT dissertation, 'The Grammar of English Predicate Complement Constructions'. It is intended to be suggestive of a difference in emphasis between the early work on complement constructions by Rosenbaum and others, and the more recent studies by Paul & Carol Kiparsky, George Lakoff, Jerry Morgan, and myself - just to mention a few. 2 It is these newer developments that I will discuss in my report.

In the appendix to this thesis, Rosenbaum provided a classification of English verbs in terms of the complement structures in which the verbs may participate. His analysis of complementation has since been challenged, and the basic criteria for his classification have now generally been rejected. 3 But of course, the general principle of classifying verbs in terms of their syntactic properties continues to be valid. For example, it must be stated somewhere in the lexicon that verbs like order and force take sentential complements only in the presence of a real noun phrase object, but believe and realize can have complements as their objects. Or, if you prefer another terminology, realize is a two-place and force a three-place predicate.
On the basis of such simple criteria, one might arrive at the conclusion that the verbs listed in (1) divide naturally into the four groups which are indicated there. (1) (a) order (x, y, S) For instance, on syntactic grounds there are good reasons for regarding the verbs happen and seem as similar, since they both take sentential subjects and undergo many of the same syntactic transformations. In selecting these examples in (1), I have not been quite as arbitrary as it first appears. It does not take long to notice that just those verbs which here fall into the same class on the basis of some superficial syntactic criteria turn out to be different when the same verbs are grouped on the basis of their semantic properties. At this point, you might take a look at the classification in (2), which gives a preview of what is to come, and compare it with (1). (2) FACTIVES: realize, odd IMPLICATIVES: manage, happen IF-VERBS: force, certain ONLY-IF VERBS: able, possible

Sometimes it is possible to show that there is a definite connection between the semantic properties of a verb and certain syntactic characteristics. For instance, it has been observed (Kiparsky 1968) that all of the factive verbs of the type (1d) are exceptions to the transformation that relates (3a) and (3b). Therefore, (3d) is ungrammatical. (3) (a) It was certain that Bill was alone. (b) Bill was certain to be alone. (c) It was odd that Bill was alone. (d) *Bill was odd to be alone. However, I do not believe that the validity of the proposed classification crucially depends on us being able to find syntactic parallels for every distinction; and here I will not try to present any. For the purpose at hand, it is sufficient to demonstrate their semantic reality, to show that they actually play a part in our everyday reasoning.

1. FACTIVE VERBS. The term 'factive verb' is due to a pioneering study by Paul and Carol Kiparsky (1968). 4 An illustrative sample of these verbs is provided in (4). (4) FACTIVE VERBS: significant, resent, tragic, know, relevant, realize, odd, bear in mind, take into account, regret, make clear, ignore, find out. What is common to them is that any simple assertion with a factive predicate, such as (5a), commits the speaker to the belief that the complement sentence, just by itself, is also true. (5) (a) It is odd that Bill is alone. (b) Bill is alone. (c) It is possible that Bill is alone. It would be insincere for anyone to assert (5a) if he did not believe that (5b) is true. Intuitively, in uttering (5a) the speaker must take it for granted that Bill is alone; he is making a comment about that fact. The same relation holds between (6a) and (6b). (6) (a) Mary realized that it was raining. (b) It was raining. (c) Mary believed that it was raining. Notice that these relations break down if we replace odd by possible and realized by believed. (5c) and (6c) do not carry a commitment to the truth of the complement sentence.

With factive verbs, it does not make a difference whether the main sentence is affirmative or negative. The negations of (5a) and (6a), which you find in (7), also obligate the speaker to accept the complement as true. (7) (a) It isn't odd that Bill is alone. (b) Mary didn't realize that it was raining. Even the illocutionary force of the main sentence is irrelevant. The question in (8) carries along the same commitment as (5a) and (7a). This relation is usually described by saying that the complement of a factive predicate is a 'presupposition' for the sentence as a whole.
The term 'presupposition' comes from logic but it is currently used in linguistics in a more general way than the common logical definition would actually allow. In logic, it is customary to give some definition such as (9). 5 (9) P presupposes Q iff T(P) → T(Q) and F(P) → T(Q) [ T(_) = '_ is true', F(_) = '_ is false' ] That is, P presupposes Q just in case Q is true whenever P has a truth value. However, this definition in terms of truth values is not very helpful to linguists. They tend to rely on a more or less intuitive notion of presupposition, which I have tried to explicate in (10) - rather unsuccessfully, I must say. 6 (10) P presupposes Q just in case that if P is asserted, denied, or questioned then the speaker ought to believe that Q.
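Definition (9) has a direct operational reading. In this sketch (my illustration; the sample valuations are hypothetical inputs, not data from the paper), truth values are True, False, or None for 'no truth value', and P presupposes Q just in case Q is true in every valuation where P has a truth value:

def presupposes(valuations):
    """valuations: iterable of (value_of_P, value_of_Q) pairs, where each
    value is True, False, or None ('no truth value')."""
    return all(q is True for p, q in valuations if p is not None)

# 'It is odd that Bill is alone' / 'Bill is alone': the complement holds
# whether the main sentence is true or false, as in (5a) and (7a).
print(presupposes([(True, True), (False, True)]))   # True
# A non-factive like 'possible' allows P to be true while Q is false:
print(presupposes([(True, False), (False, True)]))  # False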
1.2. POSSIBLE WORLDS. In his paper on presuppositions, Jerry Morgan (1969) pointed out that there are sentences such as the examples in (11). (11) (a) If I had missed the train, I would have regretted it. (b) I dreamed that I was a German and that nobody realized it. The problem with these examples is that, in both cases, the speaker apparently does not believe that the complement of the factive verb is true. In (11a), the pronoun it stands for the sentence 'that I had missed the train'. Since regret is a factive verb, the second clause of (11a) presupposes that the speaker has missed the train. However, this is just what is denied by the preceding counterfactual conditional. According to what we just said about factive verbs, (11a) ought to be self-contradictory. Similarly, (11b) ought to imply that the speaker believes that he is a German, even when he is not dreaming. Both of these predictions are clearly wrong. On the other hand, the examples in (12), which are very similar to those in (11), pose no problems at all. (12) (a) If I had regretted that I missed the train, I would not have mentioned it. (b) I dreamed that nobody realized that I was a German. (12a) can be sincerely asserted only by someone who believes that he has missed the train; in (12b), the speaker must believe that he really is a German. The crucial difference between (11) and (12) is that, in (12a), the sentence with a factive predicate is the antecedent clause of a counterfactual conditional construction and, in (12b), it is the first sentence following the verb dream.

Morgan concludes from examples of this sort that the conditional if, the word dream and all similar verbs are to be regarded as 'world-creating' predicates. A sentence in the scope of a world-creating predicate is assumed to be true, not in the actual world, but in a 'possible world'. 7 A possible world receives its characterization in the usual left-to-right order of discourse. For instance, in (11b) the first sentence following the verb dream, 'I was a German', is understood to be a fact in the context of my dream world; therefore, it can stand as a presupposition for the following sentence, 'nobody realized that I was a German', which also is in the scope of dream. Similarly, in (11a) the antecedent clause of the conditional construction, 'I had missed the train', defines a possible world in which it may then also be true that I regret that fact.

This analysis explains the difference between the examples in (11) and (12). In (12b), the complement of realize has not been established as a fact of the dream world; therefore, it ought to be a fact in the actual world of the speaker. (12b) can only be said by someone who believes that he is a German. In (11b), the complement is introduced as a fact in a dream. It does not matter if the speaker does not believe it to be true in the actual world.

I don't intend to try to give any formal account of how possible worlds ought to be incorporated into a theory of language. I don't think that there is, at this point, much to be said about it beyond the kind of suggestive remarks that I have presented. This is an area where there is bound to be some exchange of ideas between linguists and modal logicians, who have traveled in possible worlds far more extensively than we have. But neither linguists nor philosophers have actually been thinking about sentences like those in (11) for very long.

1.3. DEGREES OF FACTIVITY. Another outstanding problem is that some of the factive verbs in (4) do not carry along the expected presupposition in all syntactic environments. For example, there is an unexplained difference between verbs like regret and realize in conditional clauses. Although both verbs are factive as far as simple assertions are concerned, if-clauses with realize as predicate do not presuppose the truth of the complement. Consider the difference between (13a) and (13b). 13

However, notice that the adverbial modifiers of the main sentence, yesterday in (15a) and the phrase to everyone's surprise in (15b), by implication also seem to belong to the complement sentence. Another striking difference between factive and implicative verbs shows up in negative assertions. This can be observed by comparing the examples in (18) with those in (7). As you remember, in case of factives, negation in the main sentence has no effect on the assumed truth of the complement. But when a sentence with an implicative predicate is negated, it commits the speaker to the view that the complement is false. For instance, one cannot sincerely assert (18a) unless one believes (19a). (18) (a) Sheila didn't bother to come. (b) Max didn't have the foresight to stay away. (19) (a) Sheila didn't come. (b) Max didn't stay away. It would be contradictory to say something like (20). (20) *Sheila didn't bother to come, but she came nevertheless. Similarly, (18b) implies (19b).

2.1. IMPLICATION. In saying that (18b) implies (19b), I am not using the term 'imply' in the sense of 'logically implies' or 'entails'. The relation is somewhat weaker, as indicated by the definition in (21). (21) P implies Q iff whenever P is asserted, the speaker ought to believe that Q. I believe this to be the same sense in which J. L. Austin (1962) has used the term. It is also closely related to B. C. Van Fraassen's (1968) notion of 'necessitation'. 8 Note that, for our weak sense of 'imply', the rule of inference known as 'Modus Tollens' does not apply. It is not required in (21) that asserting ~Q should, in turn, obligate the speaker to believe that ~P. The reason why this point is worth making is that Modus Tollens is a valid argument form for the two other common senses of the term 'imply', 'materially implies' and 'logically implies', which we do not want to get mixed up with. Using the term in the sense of (21), we can say that (22a) implies (22b). (22) (a) John managed to kiss Mary. (b) John kissed Mary. But it would be mistaken to conclude from this, by Modus Tollens, that the negation of (22b) implies the negation of (22a); in other words, that (23a) also implies (23b). (23) (a) John didn't kiss Mary.
(b) John didn't manage to kiss Mary. If you contemplate for a while the two sentences in (23), you will soon realize that one can perfectly well assert (23a) without committing oneself to the belief that (23b) is true. The verb manage in (23b) carries along an extra assumption that is not shared by (23a). It would be appropriate to use (23b) only if John had actually made an unsuccessful attempt to kiss Mary. Therefore, these two sentences are not logically equivalent; the implication only holds in one direction, from (23b) to (23a) and from (22a) to (22b).

2.2. MEANING POSTULATES. Let us now consider the problem of how these facts about implicative verbs ought to be accounted for. One might, for example, propose that the semantic representation of (15a) actually contains the implied sentence, (16a), as a subpart. If one is a generative semanticist, one might even assume that (15a) be transformationally derived from some structure that properly includes the underlying structure of (16a). Under this proposal, there would be no distinction between the semantic representation of a single sentence and the set of inferences derivable from it; the two notions would be equivalent. 9 This is not the approach that I have chosen. Instead, I assume that the implied sentence is not included in the underlying representation of its antecedent but is to be derived from it by means of meaning postulates and general rules of inference.

I have proposed (Karttunen 1970a) that the facts about implicative verbs be accounted for in the following manner. What all verbs such as manage, bother, etc. have in common is that they are understood to represent some necessary and sufficient condition which alone determines whether the event described in the complement takes place. They all have the same two meaning postulates associated with them. 10 Using v for any arbitrary implicative verb and S for its complement, we can represent these two meaning postulates roughly as in (24). (24) (a) v(S) → S 'v(S) is a sufficient condition for S' (b) ~v(S) → ~S 'v(S) is a necessary condition for S' What actually constitutes this decisive condition depends on the particular implicative verb. It may consist of making a certain effort, as in bother, showing enough skill and ingenuity, as in manage, or it may be a matter of chance, as in happen. A sentence with one of these verbs as predicate can be looked upon as a statement about whether this decisive condition is fulfilled, and under what spatial and temporal circumstances this is the case. From an affirmative assertion, we can then infer that the complement is true; from a negative assertion that the complement is false. The rule of inference I am assuming here is, of course, the familiar Modus Ponens, which is illustrated in (25). Therefore, (26b) can be derived in all cases as a legitimate inference in the manner illustrated in (25b) above.
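As a minimal sketch of this machinery (my own toy encoding; the string atoms are shorthand, not the paper's notation), the postulates in (24) can be stored as antecedent-consequent pairs and applied by Modus Ponens until nothing new follows:

POSTULATES_24 = [("manage(S)", "S"),    # (24a): v(S) -> S
                 ("~manage(S)", "~S")]  # (24b): ~v(S) -> ~S

def modus_ponens_closure(facts, rules):
    """Forward-chain Modus Ponens: from P and a rule (P, Q), add Q."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# (22a) 'John managed to kiss Mary' yields (22b) 'John kissed Mary':
print(modus_ponens_closure({"manage(S)"}, POSTULATES_24))   # {'manage(S)', 'S'}
# A negative assertion yields the negated complement, as in (18)-(19):
print(modus_ponens_closure({"~manage(S)"}, POSTULATES_24))  # {'~manage(S)', '~S'}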
2.3. NEGATIVE IMPLICATIVES. Next I would like to point out a group of verbs that are in every other respect like the implicative verbs in (14) except that they work the opposite way. A short list of these negative implicatives is given in (27). There are in principle two ways to account for these facts in our analysis. One way is to say that we have a separate pair of meaning postulates for negative implicative verbs. This set would be the pair given in (30). (30) (a) v(S) → ~S 'v(S) is a sufficient condition for ~S' (b) ~v(S) → S 'v(S) is a necessary condition for ~S' The other possibility is to assume that negative implicatives in fact contain negation in their underlying syntactic structure and that there is a process of lexical insertion that can replace some ordinary implicative verb and the preceding negation marker with one of the verbs in this special class. For instance, there would be rules such as (31), which says that the verb fail, in one of its senses, is equivalent to not succeed. This equivalence may then be interpreted as permission to substitute fail for not succeed in some underlying syntactic structure.

2.4. SPECIAL CASES. In addition to the verbs listed in (14) and (27), there are of course many other implicative verbs. After one becomes aware of their existence, they are not hard to catch. There are some that are especially interesting. For instance, the words true and false, at least in their everyday sense, are implicative. They would, in fact, be the best example to use, if one wanted to argue that negative implicatives are to be defined in terms of positive ones. Nobody but a three-valued logician would refuse to accept the word false as the equivalent of not true. Another implicative word is the noun fact, which is not factive, as one might expect from the name. For that reason, it may be appropriate at this point to sound a warning and say that the verb imply, in turn, is not implicative. On one hand, it is a factive verb; on the other hand, it may also be a member of another category that we have not discussed yet: the if-verbs.

3. IF-VERBS AND ONLY-IF VERBS. The next two classes of verbs also give rise to implicative relations, although in a less perfect fashion than implicative verbs proper. What is common to both of these types is a kind of asymmetry between negative and affirmative sentences, so that the implication holds only in one of them. It appears to me that these verbs are associated with only one of the two meaning postulates in (24). Verbs of one group express a sufficient condition for the truth of the complement. For that reason - and for the sake of brevity - I refer to them as 'if-verbs'. Verbs in the other group express a necessary condition; they are the 'only-if-verbs'. Later on, I will sometimes refer to if-verbs and only-if-verbs jointly as 'one-way implicatives' in order to distinguish them from 'two-way implicatives' discussed above, that is, from verbs which yield an implication both in negative and in affirmative assertions.

Appendix:
null
null
null
null
{ "paperhash": [ "stockwell|integration_of_transformational_theories_on_english_syntax", "rosenbaum|the_grammar_of_english_predicate_complement_constructions" ], "title": [ "Integration of transformational theories on English syntax", "The grammar of English predicate complement constructions" ], "abstract": [ "Abstract : The study attempts to bring together most of the information about the transformational analysis of the grammar of English that was available up through the summer of 1968, and to integrate it into a single coherent format. The format chosen is that of C. Fillmore (the 'Deep Case' hypothesis) combined with the 'Lexicalist' hypothesis of N. Chomsky. The areas of close investigation were the determiner system; pronominalization; negation; conjunction; relativization; complementation and nominalization; the systems of interrogative, passive, imperative, and cleft sentences; the genitive; the lexicon; and the ordering of rules for these areas of the grammar.", "A set of phrase structure rules and a set of transformational rules are proposed for which the claim is made that these rules enumerate the underlying and derived sentential structures which exemplify two productive classes of sentential embedding in English. These are sentential embedding in noun phrases and sentential embedding in verb phrases. First, following a statement of the grammatical rules, the phrase structure rules are analyzed and defended. Second, the transformational rules which map the underlying structures generated by the phrase structure rules onto appropriate derived structures are justified with respect to noun phrase and verb phrase complementation. Finally, a brief treatment is offered for the extension of the proposed descriptive apparatus to noun phrase and verb phrase complementation in predicate adjectival constructions. Thesis Supervisor: Noam Chomsky Title: Professor of Modern Languages" ], "authors": [ { "name": [ "Robert P. Stockwell", "P. Schacter", "B. Partee" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. Rosenbaum" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "62687099", "62173023" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
- Problem: To extract the research hypothesis from the paper "The Logic of English Predicate Complement Constructions" by Lauri Karttunen. - Solution: The paper proposes a classification of English verbs based on their complement structures, distinguishing between factive verbs, implicative verbs, if-verbs, only-if-verbs, and negative implicatives, and analyzes the logical relations between main sentences and their complements in natural language.
638
0.037618
null
null
null
null
null
null
null
null
09d1b93152faa4e42fc51fdd4a19267ca54b28fc
42802688
null
Feasibility study on fully automatic high quality translation
As the appendices indicate, the study brought together specialists in the areas involved in machine translation. The report summarizes their findings. Participants in the study were provided with a preliminary statement of the initial part of this report, except for the conclusions and recommendations, and were asked to send their comments and revisions. These were incorporated in this report, except when they did not seem in keeping with the general conclusions of the various other participants. There were few strikingly diverse points of view. This report has been reviewed by the Information Office (OI) and is releasable to the National Technical Information Service (NTIS). This technical report has been reviewed and is approved.
{ "name": [ "Lehmann, Winfred P.", "Stachowitz, Rolf" ], "affiliation": [ null, null ] }
null
null
Feasibility Study on Fully Automatic High Quality Translation
1971-12-01
7
7
null
As the appendices indicate, the study brought together specialists in the areas involved in machine translation. The report summarizes their findings. Participants in the study were provided with a preliminary statement of the initial part of this report, except for the conclusions and recommendations, and were asked to send their comments and revisions. These were incorporated in this report, except when they did not seem in keeping with the general conclusions of the various other participants. There were few strikingly diverse points of view.

The objective of this theoretical inquiry is to examine the controversial issue of fully automatic high quality translation (FAHQT) in the light of the past and projected advances in linguistic theory and hardware/software capability. The principal purpose of this study is to determine whether the concept of FAHQT is justifiable as a long range R&D proposition. The study is also concerned with the intermediate range alternatives to FAHQT, i.e., machine translation forms that are adequate to the user's needs with or without post-editing. Machine aided translation, based on automated dictionary look-up, is excluded from the study in consideration of the fact that this by-product of machine translation R&D is well within the current state of the art.

In the context of FAHQT, "full automation" implies that the entire translation process is autonomous in the computer, without pre-editing of the source language text and post-editing of the target language output. "High quality" seems to be undefinable in an absolute sense. In referring to machine translation of 100% quality, Bar-Hillel (1) introduced the following qualification. "When I talk about "100%", I obviously have in mind not some heavenly ideal of perfection, but the end product of an average human translator. I am aware that such a translator will on occasion make mistakes and that even machines of a general low quality output will avoid some of these mistakes. I am naturally comparing averages only." Thus viewed, even the concept of 100% quality is not equatable with error-free performance in either form of translation. Understandably enough, participants and consultants failed to reach a unanimous agreement as to the definition of "high quality" in machine translation. This is reflected on p. 48, quote, "There is apparently no absolute standard. Rather, standards must be defined with reference to specific users and specific purposes". In the absence of absolute and universally valid quality criteria, the user of machine translation can be legitimately considered an ultimate judge of its quality. This viewpoint was first expressed by Reitwiesner and Weik (2) as early as 1958.

According to Lamb (3), "all translation can be viewed as human translation since machine translation is nothing but another kind of human translation". It follows from this observation that the fundamental constraints on machine translation parallel those imposed on human translation. Assuming the well-known limits of translatability, this seems to imply that either form of translation is a priori constrained. In summarizing the problem of translation equivalence between SL (source language) and TL (target language), Catford (4) draws the following conclusion. "The limits of translatability in total translation are, however, much more difficult to state. Indeed, translatability here appears, intuitively, to be a cline rather than a clear-cut dichotomy.
SL texts and items are more or less translatable rather than absolutely translatable or untranslatable. In total translation, translation equivalence depends on the interchangeability of the SL and TL texts to (at least some of) the relevant features of situation-substance". Ray (5) recognizes the fact that "every translation necessarily involves some distortion of meaning". However, as is reflected in his statements below, this deficiency is not only manageable, but even unimportant in the practice of translation. "The translation operation is, like the limit operation, possible only under such conditions as "sufficiently" and "arbitrarily", that is, only by the exercise of some evaluative judgement, however little. Since distortion of meaning cannot be avoided, the problem becomes one of confining it to allowable measures of allowable kinds in allowable places along allowable directions". "..., while no two languages will match exactly in the total range of possible discourse, there are infinitely many specific limited ranges of discourse where the distortion of meaning can be legitimately dismissed as of no account".

The feasibility of FAHQT must be, therefore, considered within the limits of translatability, i.e., taking into account the constraints on total translation. Since the concept of high quality is untenable in the absolute sense, the question of what is feasible in the context of FAHQT is quite probably more meaningful. It would be patently unreasonable at this stage of R&D to postulate machine translation requirements beyond the limits of translatability imposed on human translation.

Machine translation research, based on puristic notions and oriented toward a global solution, was once compared to a search for the Holy Grail. This all-or-nothing attitude has probably caused as much damage to the progress of machine translation research as the early announcements of quick and easy solutions. Perfectionists in this area have generally tended to ignore the injunction by Lecerf (6) that "entreprendre la mise au point d'ensembles de traduction automatique, c'est avant tout accepter la contrainte du réel" (to undertake the development of machine translation systems is, above all, to accept the constraint of reality).

According to Ljudskanov (7), "The widespread so-called 100 percent approach, along with the belief that MT presupposes the presence of a complete mathematical model of language in general and of the specific languages in particular, in practice amounts to equating the nature and extent of the knowledge of language in general, which is necessary from the point of view of theoretical linguistics, with the extent of knowledge necessary for the achievement of translation from one language into another. This approach also amounts to equating the description of communication in general with that of the translation process; it ignores the specific characteristics of the process as mentioned above and the general linguistic problems of the theory of translation (both HT and MT) in the general problem area of mathematical linguistics". "... it can be asserted that the current critical state of MT research throughout the world, although much has happened that legitimately causes well-grounded anxieties and doubts as to its possibilities, is due to a certain degree to the maximalistic tendencies, however laudable they may be in themselves, of the global strategy.
By giving due consideration to the particular characteristics of the translation process and of its study, as well as to the differentiation of the aims of mathematical linguistics from the theory of MT and of the fields of competence and performance from each other, research in this field would be channeled in a direction both more realistic for our time and more closely in accord with the facts".

The report highlights on p. 4 an important difference between scientific and technical translations and translations of literary and religious texts, a difference often ignored in spite of its importance from the viewpoint of machine translation requirements. "Even articles and monographs dealing with machine translation have failed to be adequately explicit about the special problems of translating technical and scientific materials by computer. Instead, they have confused the problem by comparing machine translation with the long-practiced human translation, by equating the problems of translating scientific materials with those involved in translating literary materials, and by using the same evaluation criteria for the results."

It is now a commonplace that the style of writing is of paramount importance in literary translation, whereas accuracy constitutes the most important quality criterion in scientific and technical translations. According to Gingold (8), "It is not the translator's job to abstract, paraphrase, or improve upon the author's statements. He cannot be expected to convert an article that is poorly organized and badly written in the original language into a masterpiece of English scientific writing. In technical translation, he must always be willing to sacrifice style on the altar of accuracy". Savory (9) has expressed a similar opinion in his statement that "the translation of scientific work is an ideal example of translation of a writing in which the subject matter is wholly on the ascendant and the style is scarcely considered".

The report further emphasizes the crucial importance of timeliness in the production of scientific and technical translations. According to the statement on p. 5, "...timeliness is of increasing importance to users of scientific translations. Even in a relatively unhurried field like linguistics, few articles retain their importance over a long period. Statements have been made repeatedly about the obsolescence of publications issued a few years earlier. The insistence among technical specialists and scientists for speedy translation contrasts markedly with the length of time permitted for completing literary translations". The requirement of timeliness was stressed elsewhere by Gingold (10), quote, "The delay between the appearance of the original journal and its English translation, which may be a year or more, is also a disadvantage, particularly to industry, where time is usually of great importance".

The principal findings of the study, as related to its objectives, can be summarized as follows. Computer hardware is no longer considered a crucial problem in machine translation. "Remarkable improvements, especially in rapid-access storage devices, have largely eliminated the problems caused by inadequate computers. Lexical items can now be retrieved as rapidly as were the major syntactic rules a decade ago. And with further improvements of storage devices in process, computers no longer pose major problems in machine translation" (p. 12). Developmental prospects in this area are very bright indeed, particularly with the advent of holographic memories.
The impact of such memories on both linguistic and computational aspects of machine translation R&D is discussed in detail by Stachowitz in one of his contributions to the report ("Requirements for Machine Translation: Problems, Solutions, Prospects", pp. 409-532). This contribution is considered significant because it provides a complete blueprint for a realistic implementation of a large-scale machine translation system.

Equally encouraging is the appreciation of advances in computer software. "Programming has evolved as rapidly as have computers... A key factor here was the enrichment of programming language data types which made possible efficient representation and manipulation of linguistic structures" (p. 13).

The report reflects a unanimous agreement of participants and consultants that "the essential remaining problem is language" (pp. 14-15). It is, therefore, not surprising that linguistics has received much more attention in the study than computer hardware and software. Recommendations presented on pp. 49-51 are exclusively oriented toward linguistic research in the context of machine translation. The report points out that there is "no conflict between specialists in descriptive linguistics, linguistic theory and machine translation... As descriptive linguists improve their understanding of language, and the models by which to express that understanding, machine translation specialists will update their procedures and models" (p. 24). However, the report also reflects a difference of opinions between machine translation experts and linguists as regards the nature, orientation and scope of linguistic research involved in machine translation. It is further worth noting that some linguists participating in this study have not acknowledged Ljudskanov's caveat about "maximalistic tendencies of the global strategy".

The reader is referred to Conclusions (pp. 45-48) and Recommendations (pp. 49-51), summarizing the results achieved in performance of this study. Recommendation of support for research in machine translation is based on the fact that "quality translations can be achieved in the near future. This recommendation agrees strikingly with conclusions reached in a study carried out in the Soviet Union" (p. 49). Galilei's challenge ("Eppur si muove!", "And yet it moves!"), aptly chosen as a motto in the Introduction to (11) by Kulagina and Mel'chuk, would be equally appropriate as an expression of views and sentiments embodied in the main part of this report.

As one of the leading experts, Eugene A. Nida, has stated in his most recent contribution to the topic (Nida and Taber, 1969, 1): "Never before in the history of the world have there been so many persons engaged in the translating of both secular and religious materials." The book intimates that the requirements for translation will be increased. Moreover, it describes more specifically and concretely than earlier discussions the steps that are involved in translation. Translation is defined (Nida and Taber, 1969, 12) as "reproducing in the receptor language the closest natural equivalent of the source-language message." And the paragraph continues: "this relatively simple statement requires careful evaluation of several seemingly contradictory elements." For a fuller statement on the problem of translation, we refer to the important books by Nida and their bibliographies.
His last book, however, contains further perceptive statements that are important to include here. A section on "the old focus and the new focus" of translating (Nida and Taber, 1969, 1) states that "the older focus in translating was the form of the message... The new focus, however, has shifted from the form of the message 1 to the response of the receptor." Further, "even the old question: Is this a correct translation? must be answered in terms of another question, namely: For whom?" After a brief answer, the section continues: "In fact, for the scholar who is himself well acquainted with the original, even the most labored, literal translation will be correct, for he will not misunderstand it." This statement is borne out by the reception to such translations at Oak Ridge, as reported by Zarechnak below.

The growing sophistication with regard to translation which is reflected in the book by Nida and Taber and in many recent publications calls for a new evaluation of the problem of machine translation, and a new statement on the current situation. The requirements for translation vary markedly from audience to audience. Even a glance at the Nida-Taber book, which concerns primarily human translations of the Bible, will disclose the difference between translation of religious and literary materials, and translation of scientific and technical materials. For the translation of technical materials, the criteria of quality, speed, and cost have been used in evaluations. In the January Conference arranged under the Study, Bar-Hillel summarized his position on the improvements possible in machine translation in the foreseeable future using these three criteria. It is instructive to compare briefly these criteria with the objectives of Nida-Taber.

The primary concern of Nida-Taber is to "reproduce the message" of texts produced by cultures of the past for cultures of the present, often radically different cultures, such as those of Africa and Asia. By contrast, the texts of interest to scientists and technicians share a common "culture," whether the texts are produced in Africa, Asia or in western countries.

After these three committees have made their contribution, a "stylist is called in" (1969, 186). This proposed organization, which is not untypical for academic projects designed to produce literary translations, provides perspective for the statements concerning post-editing of technical and scientific translations. Obviously, the length of time and the cost required to produce literary and religious translations are not factors of importance. Yet timeliness is of increasing importance to users of scientific translations. Even in a relatively unhurried field like linguistics, few articles retain their importance over a long period. Statements have been made repeatedly about the obsolescence of publications issued a few years earlier. The insistence among technical specialists and scientists for speedy translation contrasts markedly with the length of time permitted for completing literary translations, and also with "the lag time (from receipt) in publication of the translated journals supported by NSF." This, according to a report of the National Academy of Sciences (Languages and Machines, 1966, 17), "ranges from 15 to 26 weeks."
This time span may be acceptable for archival purposes; for the requirements of scientists and technical specialists it may be burdensome. Given a choice between overnight machine translation and human translation within two weeks, scientists at EURATOM invariably asked for machine translation. The need for virtually immediate translation is one of the major reasons for the concern with machine translation. In evaluating machine translation versus human translation, this reason may outweigh the difference in cost.

And as Nida has pointed out, the parameter of "quality" varies considerably among the different users. Bar-Hillel, who some years ago coined the expression "High Quality Fully Automatic Machine Translation," now states in his appended article that he applied the expression in too absolute a sense, and that quality is related to the requirements of the user. This statement echoes the quotation from Nida-Taber on the shift of focus "from the form of the message to the response of the receptor." If technical experts and scientists have reasonable prospects of virtually immediate translation, the prospects may well be vigorously pursued, even if the translations will be more "labored" and "literal" than ordinary users permit for their religious and literary works.

In reviewing the prospects for machine translation, accordingly, the specific requirements must be considered as one of the major criteria. For technical specialists and scientists, translations must be consistent, reliable and timely, whether made by man or machine. Although the arrangements made for human translation are generally assumed to be known and understood, a brief comparison of the current situation of human versus machine translation, and their prospects, may be useful before examining in detail the procedures involved in machine translation.

Techniques Involved in Machine Translation: Hardware, Software, Linguistics

In the early attempts at machine translation, the capacities of computers were a major problem. Difficulties resulted especially from the inadequacy of rapid-access memories. For processing languages, the available rapid-access storage space was filled with the major rules for grammatical constructions. Lexical items accordingly had to be stored in memories, usually on tapes, which required a considerable period of searching. As a result, even simple sentences required a long time for analysis. When the Linguistics Research Center was carrying out its research with an IBM 7040, several years ago, the computer would "grind" all night to translate a few sentences.

To speed up the process, attempts were made to develop special-purpose programming languages. In time, procedure-oriented languages were used to produce programs of more general usefulness to linguists. Dictionary lookup and maintenance programs and context-free grammar parsing programs were followed by such programming systems as J. Friedman's transformational grammar tester and S. Petrick's transformational grammar syntactic analyzer. Systems such as these can be considered to be problem-oriented programming languages. The IBM natural language question answering project mentioned by Petrick in his appended paper uses Friedman's grammar tester system as well as a transformational syntactic analysis system that provides for a linguistically more realistic class of transformational grammars than could previously be accepted using its predecessor. Certainly presently existing procedural and problem-oriented languages make the mechanization of many linguistic processes easier than was the case a few years ago.
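The earliest of these program types, dictionary lookup printing multiple equivalents, may be suggested by a minimal sketch, given here for concreteness in a present-day notation (Python); the German-English entries and the pass-through treatment of unknown words are illustrative assumptions, not the contents of any system of the period.

    # A minimal sketch of purely lexical dictionary lookup. The entries are
    # invented for illustration; a real dictionary of the period held vast
    # numbers of entries on tape, which is why lookup was so slow.

    LEXICON = {
        "druck": ["pressure", "printing"],   # several equivalents are retained
        "steigt": ["rises", "climbs"],
        "messung": ["measurement"],
    }

    def lookup(word):
        """Return all stored equivalents; an unknown word is passed through."""
        return LEXICON.get(word.lower().strip(".,;"), [word])

    def gloss(sentence):
        """Word-for-word gloss listing every equivalent, leaving the choice
        among them to the reader, as purely lexical systems did."""
        return " ".join("/".join(lookup(w)) for w in sentence.split())

    print(gloss("Der Druck steigt"))
    # -> "Der pressure/printing rises/climbs"

Even so, such a sketch only hints at the scale of the real task.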
The programming of many linguistic algorithms remains a slow and difficult task, however, as is the case for most complex algorithmic processes. The scope of the programs necessary for machine translation may be noted by examining a flow-chart of the programs that had been projected, and in part completed, at the Linguistics Research Center. These were produced entirely from scratch. Because the basic programs furnished by computer manufacturers were so inadequate, the Linguistics Research Center programs were written in machine language. The expenditure of time was enormous. The magnitude of the problem may be noted if one compares the cost incurred by IBM in developing PL/I; that cost far surpasses the entire amount which was spent on machine translation from the beginning of machine translation research.

Gradually, adequate computer programs were devised for data processing. These now form the basis of programming systems used for machine translation. Like all programs, they need modification and improvement, especially to speed up processing. The basic programs, however, are available, and to them the additional programs needed for language processing can be added. Like computer equipment, programming systems will be improved.

Consider a pair of sentences like:

1. He is easy to please.
2. He is eager to please.

An adequate model of English would assign these sentences different underlying structures; an inadequate model of English would not. It would take "he" to be the subject of the verb + adjective combination, and also of the infinitive; its analysis of sentence 1 would therefore be wrong. A literal interpretation would fail to determine the proper meaning of this sentence and many other sentences. By determining the deep structure the meaning can be more easily arrived at. Alternatively, "easy" may be provided with a feature which would transform "NP be easy to -Inf" to "it be easy to -Inf NP" in accordance with Harris' use of transformations.

Language is structured in this way in all its components, the phonological as well as the syntactic. In both of these components, it is a code, rather than a cipher, system. The human brain knows how to interpret the code. If machines are to interpret language, they must be provided with a comparable capability. Engineers have been working on machine interpretation of the phonological system; it would be useful, for example, if telephone "dialing" could be done by voice, rather than manually. Engineers have not mastered the problem, however, even though they are aware of the basic difficulty.

The problem in the syntactic component of language has been one of the central issues for linguistics since the publication of Saussure's Cours in 1916 (though it was known earlier). Various labels have been given to the underlying structure. Saussure used the traditional philosophical terms: "form" for the underlying structure and "substance" for the surface structure. Recently the term surface structure has been used almost exclusively rather than substance, and deep or underlying structure rather than form.

In view of this structure of language, techniques must be devised to get from the surface structure to the deep structure (see Peters and Ritchie, 1971). This realization has important consequences for machine translation. The fact cannot be escaped that in machine translation, one must somehow determine the underlying forms of sentences.
Further, the technique of using some relational formulae like reverse transformations is also clearly necessary. In considering linguistic techniques, the fundamental question is: how can these formulae be adequately restricted so that they yield only the specific underlying structure intended by the author, that is, the proper meaning? Two devices must be used: the lexical elements must be described as precisely as possible, so that only the desired transformations apply; and the transformations must be devised in such a way that their use is properly restricted. Exploiting this understanding of the necessary procedures will require considerable work. The lexical analysis alone will be a huge task.

The assumption of a universal base receives support from the capability of speakers to translate. It is also supported by the capability of infants to learn any language, to learn it rapidly, and in accordance with well-determined stages. Whatever a baby's ancestry, it acquires the language it hears. Moreover, the stages of linguistic development are fixed for virtually all infants, regardless of their intelligence. These observations are most plausibly accounted for if we assume some fundamental principles common to all language; further, that these somehow are related to the functioning of the brain. The principles are highly abstract. They permit certain linguistic structures and constrain others which are theoretically possible. As yet they are not by any means thoroughly explored. The term "universal" has been used for general characteristics of language; one example of a universal may be given here, with two sentences and their variants.

1.a She regretted the fact that she had taken the book.
2.c What did she regret that she had taken?

BUT NOT

1.c *What did she regret the fact that she had taken?

The impossibility of 1.c results apparently from a universal principle which blocks the extraction of an element out of a clause modifying a noun phrase. This principle was formulated by Ross (1967, 66-70) as "the Complex NP Constraint." Since this principle applies to all languages which have been examined, it is assumed to be a universal characteristic of language.

Whatever the views which will be formulated concerning universals, this principle, like other universal principles that are being investigated, restricts the possible transformations for structures of language. Since language is governed by such constraints, the model which must be constructed to embrace all languages must have certain limits. Moreover, if only because of the finiteness of the human brain, grammars must be finite.

These observations lead to the conclusion that a mechanical translation system can be devised. Even more support is provided by the conclusion of much recent linguistic study that we may posit the existence of a universal base. For the surface structures of any language can be related to such a universal base. Since the universal base in turn can be used for deriving the surface structures of any language, the universal base can serve as the intermediate language between any source language and any target language. The possibility of devising a translation system in view of the fact that a universal base may exist still leaves many problems.
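How the two devices just named might interact can be suggested by a small sketch, assuming a single reverse transformation of the kind attached to "easy" above; the feature label "tough" and the toy pattern matching are invented for illustration.

    # A sketch of a reverse transformation restricted by a lexical feature:
    # only adjectives marked with the (hypothetical) feature "tough", like
    # "easy", map the surface pattern "NP is ADJ to V" onto the underlying
    # order "it is ADJ to V NP".

    TOUGH = {"easy", "hard", "difficult"}   # adjectives carrying the feature;
                                            # "eager" lacks it, so the rule is blocked

    def to_underlying(tokens):
        """Return the underlying form of a surface string "NP is ADJ to V"."""
        if len(tokens) == 5 and tokens[1] == "is" and tokens[3] == "to":
            np, _, adj, _, verb = tokens
            if adj in TOUGH:
                # underlying order: it is ADJ to V NP (NP is the verb's object)
                return ["it", "is", adj, "to", verb, np.lower()]
        return tokens   # rule inapplicable: surface and underlying forms agree

    print(to_underlying("He is easy to please".split()))
    # -> ['it', 'is', 'easy', 'to', 'please', 'he']
    print(to_underlying("He is eager to please".split()))
    # -> unchanged: "he" remains the subject of both verbs

Because the rule is keyed to the lexical feature, it applies only where it is wanted; this is precisely the restriction the text calls for.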
Current transformational grammars are too unconstrained for natural languages and also for use in machine translation systems, for which they produce far too many syntactic interpretations of any given sentence. The production of devices to map surface structures stringently into underlying structures is one of the most serious concerns of current linguistics. Bach's paper noted above (1971) is an example.

A device projected by Stachowitz has been described in RGEMT (1970). It makes use of an underlying form, the standard strings of a language. Associated with these strings are canonical forms, which represent the meanings of given sentences. "The language" of these is assumed to be "common to all natural languages" (Stachowitz, 1970, T-65). Fuller information on the model is given on pp T-66ff. The quotation here may be adequate to indicate that the canonical forms correspond to a universal base. A description of translation as it is being pursued in accordance with this model at the Linguistics Research Center is appended (Stachowitz paper).

Of great importance for the Linguistics Research Center system is a well-designed lexicon. The intensive lexicographical work which has been going on at the Center for more than two years now has resulted in great amounts of syntactic information; the incorporation of semantic information is currently in progress. Because of their syntactic and semantic classification, the lexical entries will limit the possibilities of relationship with canonical forms. In this way a proper match will be brought about between the lexical and syntactic elements of the source language and those of the target language.

The design of the lexicon has been vastly improved over dictionaries envisaged a decade ago. Without intending to dwell on the naivete of these and their proponents, reference might be made one further time to the saying which was supposedly quite ambiguous and accordingly a prime exhibit of the difficulties facing machine translation. As linguists have improved their models of language, the problem of ambiguity has been reduced.

It may be noted that the attention to the pragmatics of Peirce repeats a position held early in machine translation research. By attention to pragmatics, that is, to information on the "origin, uses and effects" of language, sentences belonging to the class of proverbs are not treated like sentences found in scientific exposition. Attention has also been focussed on such classes as illocutionary verbs, or on the characteristics of speech acts. Thus a sentence like:

I pronounce you man and wife

would not be treated as a simple declarative statement, with the meaning of "pronounce" in a sentence like:

They pronounce greasy with a voiced groove fricative.

While models of language in this way incorporate far more information about individual sentences than did the purely syntactic-based grammars of a decade ago, means must be devised to take account of the more accurate analysis of language which is now projected. Suggestions vary concerning the implications of these developments, as the following chapters will indicate. Some specialists consider machine translation unlikely unless at the same time automatic information and fact retrieval are made possible.
Others hold that machine translation is not now, and may never be, contemplated for types of language outside technical and scientific documents; accordingly the "origin, uses and effects" of the material to be translated are determined, and investigators dealing with machine translation should direct their concerns at this restricted type of language.

Whatever steps are selected to employ findings of contemporary linguistics to carry out machine translation, it should be noted that specialists in machine translation have taken account of these findings. There accordingly is no conflict between specialists in descriptive linguistics, linguistic theory and machine translation, as Chapter 6 below will outline in further detail. As descriptive linguists improve their understanding of language, and the models by which to express that understanding, machine translation specialists will update their procedures and models.

Pertinent Recent Work in Linguistics

Contemporary linguistics is concerned with all facets of language: its syntax, semantics, aberrant uses, uses in established social situations, its relation to other disciplines, such as logic, and so on. This breadth of concern contrasts strikingly with self-imposed limitations of the recent past. It also leads one to examine the extent of concern of machine translation with these facets.

One of the problems in this area is the existence of two meanings of any, one illustrated in Ross's sentence:

Anybody could have shot Max

The meaning of any here might be made more precise by adding: whatsoever. This use of any is found only when possibility is involved; it is not used with must. A different meaning (some) is found in the question: Do you know any songs? If, however, stress were put on any, especially in a negative sentence, the "any...whatsoever" meaning would emerge.

Accordingly, as study of quantifiers has been pursued more extensively, it is clear that a sentence like the following can have two meanings:

We do not believe that any catalyst could have precipitated the reaction.

On one interpretation, this sentence could be roughly equivalent to: that some catalyst, that is, that a selected catalyst was involved. By another interpretation it would be equivalent to: that any catalyst whatsoever was involved, that is, that no involvement by a particular catalyst was possible.

This indeterminacy of usage in English presents a translation problem, for German, too, indicates slight differences of meaning with quantifiers such as irgendein 'someone, anyone'. These differences are especially important in the colloquial, as the following quotation from the Duden Grammatik (1959, 265) may indicate:

Irgendeiner muß es doch getan haben! "Someone must surely have done it!"

Such usages are often found in conjunction with modals and adverbs, like doch in this sentence. For the examples of any which Ross cited, however, translation would not be a problem inasmuch as the ambiguity in English is preserved when literal translation into German is carried out.

The use of any in such a relatively straightforward sentence is simply the beginning of Ross's interest. He has pursued the difference in uses of any in syntactic constructions illustrating general syntactic patterns or principles which he has investigated intensively. One of these is a "maximal domain of syntactic processes", in his word an "island". An island is subject to special constraints, as the impossibility of sentence 1.c above indicates. Since the paper is appended, it may be consulted for details.
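How a constraint of this kind might be checked mechanically can be suggested by a sketch using the sentences of 1.a-1.c above; the nested-tuple tree encoding and the marker GAP for the extraction site are assumptions made for illustration, and only the complex NP case of Ross's islands is covered.

    # A sketch of a mechanical check of the Complex NP Constraint: wh-movement
    # is blocked out of a clause (S) that modifies a noun phrase (NP). Trees
    # are encoded as nested tuples whose first member is a category label.

    def gap_in_island(node, path=()):
        """True if the extraction site GAP lies below an S node that is
        itself a daughter of an NP node, that is, inside a complex NP."""
        if node == "GAP":
            return any(a == "NP" and b == "S" for a, b in zip(path, path[1:]))
        if isinstance(node, tuple):
            label, *children = node
            return any(gap_in_island(c, path + (label,)) for c in children)
        return False

    # "What did she regret that she had taken __?"  (permitted, cf. 2.c)
    ok = ("S", ("NP", "she"),
          ("VP", "regret", ("S", "she", "had", "taken", "GAP")))

    # "*What did she regret the fact that she had taken __?"  (blocked, cf. 1.c)
    bad = ("S", ("NP", "she"),
           ("VP", "regret",
            ("NP", "the", "fact", ("S", "she", "had", "taken", "GAP"))))

    print(gap_in_island(ok))    # False: extraction is permitted
    print(gap_in_island(bad))   # True: the Complex NP Constraint is violated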
The general types of analytic techniques, as well as the types of "linguistic information" pointed out by Fillmore in the sentence May we come in?, will be summarized here. Moreover, in evaluating the implications of Fillmore's analysis for machine translation, it is important to note that he used only the written form of the sentence, excluding information that might be obtained from "any understanding of the voice quality of the speaker or the manner of utterance."

Identifying first the "syntactic information" in the sentence, Fillmore uses it to determine among the three possible functions of may the one which is appropriate in this sentence.

Next, examining the "illocutionary force of the question," Fillmore notes the information on deixis furnished by the pronoun and the verb come. The term "illocutionary force" refers in Fillmore's paper to the obligation which the question imposes on the addressee, that is, the obligation to exercise authority. The term "deixis" refers to the various aspects of the interpretation of sentences that relate to the speech act situation, such as person deixis, place deixis and time deixis. The possible meanings of come are restricted by its use in "a permission-seeking utterance."

Last, it may be noted that Fillmore determines the meaning of the sentence from its "surface structure." He has done so by using a comprehensive lexical description for each of the four words. The possible meanings of each are restricted by the order of the sentence and by the selection of the other elements. That is to say, disambiguation was carried out by using two syntactic devices: order and selection.

In conclusion, Fillmore lists "the various kinds of facts which must... be included in a fully developed system of linguistic description." These are extensive. Yet such explicit linguistic descriptions permit a mechanical disambiguation, and interpretation, of a given sentence. The effort required to produce these descriptions will, however, be enormous.

An example of the analysis necessary for improved interpretation of sentences, which will be particularly important for information processing, is Karttunen's paper, "The Logic of English Predicate Complement Constructions." This paper, which is also appended, leads to seven classes of verbs, each indicating a commitment which "the main sentence carries along... with respect to the truth or falsity of its complement" and an indication of "what is implied." For example, the verb cause belongs to one of these classes, which carries a commitment "true" for main as well as complement sentences. The seven classes of verbs arrived at in the paper identify meanings in much the same way as did the syntactic information in the sentence: May we come in?

Linguists accordingly are drawing nearer to lexicographical work of the past, as represented especially by Zgusta and Josselson in the Study. Since the use of lexicographical techniques for machine translation is discussed in the appended papers of Zgusta, they will not be further noted here. Current linguistic description in this way is providing information on detailed lexical classes, as well as on syntactic constructions.
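How such a classification might be exploited mechanically can be suggested by a toy lookup table; the class labels and verb assignments below are illustrative assumptions rather than Karttunen's seven classes, except that cause is given, as in the text, a commitment "true" for main and complement sentences.

    # A toy rendering of verb classes that carry commitments about the truth
    # of their complements, in the spirit of Karttunen's paper. The labels
    # and most assignments are invented for illustration.

    VERB_CLASSES = {
        "commit-true":     ({"cause"},  True),   # "X caused Y to leave" -> Y left
        "implicative":     ({"manage"}, True),   # "X managed to leave" -> X left
        "neg-implicative": ({"fail"},   False),  # "X failed to leave" -> X did not
        "noncommittal":    ({"hope", "intend"}, None),  # nothing is implied
    }

    def complement_implied(verb):
        """What an affirmative main clause with this verb implies about its
        complement: True, False, or None (no commitment)."""
        for verbs, commitment in VERB_CLASSES.values():
            if verb in verbs:
                return commitment
        return None   # unlisted verb: assume no commitment rather than guess

    print(complement_implied("cause"))   # True
    print(complement_implied("hope"))    # None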
These two types of information about language, whether they be labeled syntactic or semantic, are leading to descriptions of language which are so precise that the sense of a sentence can be determined mechanically. The generative semanticists, besides Ross, who participated in the Study share this interest in describing sentences so thoroughly that they can be interpreted from the linguistic information contained within them; that interest is of great significance for machine translation. To the extent that this aim is accomplished, machines will be able to translate.
Translation: Human and Machine

The topic of machine translation is rarely discussed without reference to translation by man. In the comparison, several stereotypes have evolved. For clarity in dealing with the issue of machine translation these may be briefly noted.

The human translator is generally assumed to be highly skilled, both in the subject matter and in the source and target languages. Some commentators consider skill in the source language less essential than skill in the target language. Kay holds knowledge of the subject matter to be the most essential consideration. Accordingly, it will be no small task to provide machine translation systems with detailed information on scientific topics, and to program them to use this information. Human translators must also acquire knowledge of specific scientific and technical areas. With skills in the source and target languages, and control over the subject matter, the human translator is assumed to have great flexibility. Moreover, besides flexibility he provides immediate access to the text.

When, however, one considers the broad scope of scientific writing and vocabulary, this ideal picture loses some of its attractiveness. To meet the problem, the German translation service has been compiling a large dictionary of technical terms and their standard translations. In this compilation, specific translations are fixed. The project, accordingly, is designed to standardize and normalize translations, as well as to provide assistance for human translators. Moreover, the dictionary is mechanized. Eventually, any text to be translated is to be provided to the translator in a print-out having the translations of all terms in the dictionary, as well as the original. The translator's responsibility would then consist in framing the sentences in the target language. He would also determine the meanings of any new terms. In this way the dictionary would be expanded and updated.

The dictionary of the German translation service contains close to a million items. Problems which human translators face when using generally available dictionaries, which have far fewer entries, may be put in perspective by this resource. The arrangements for translators in the German translation service may also illuminate the requirements for computer-assisted translation.

It is occasionally proposed that computer-assisted translation is an attainable compromise, with better output than that from the individual translator and fewer awkward renditions than those provided by machine translation. Whatever one's reaction to this view, it should be noted that computer-assisted translation requires a large staff of research scholars and a large computer facility. Kay, a proponent of machine-human translation, proposes an elaborate scheme to permit human beings to assist a system that is essentially a machine translation system. Under this scheme human beings would make decisions which the machine would be incapable of making and thus assure a high-quality output. His scheme envisions several native, possibly monolingual speakers of the source language, several monolingual speakers of the target language and one highly competent bilingual, to whom problems requiring knowledge of both languages would be shunted. In other words, the expenditure for staff and equipment would not be small, actually larger than that for machine translation.
Clearly, computer-assisted translation is proposed as a second choice, through desperation that machine translation is unattainable at present. In contrast, machine translation offers several advantages. One obvious advantage is the speed with which the translations would be provided.

Among the advantages of machine translation is consistency. As in the German translation service, standard terms could always be produced. As a simple example, the German translation service decided to use Telefon rather than Fernsprecher; even the variant Telephon was considered erroneous. In much the same way, any technical term need never be varied, unlike the practice of many translators.

If the quality of such translations is to equal that of the most accurate human translations, a comprehensive dictionary and grammar are essential, as well as the necessary hardware and the software techniques. Achieving these has been the major goal of machine translation. In the next section we note the current status of these three requirements.
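Before turning to those requirements, the print-out procedure of the German translation service described above lends itself to a short sketch; the entries, including the standardization of Telefon, are reduced to a handful for illustration.

    # A sketch of the translator's working print-out: every term found in the
    # mechanized dictionary is annotated with its fixed standard translation,
    # and the human translator then frames the sentences. The entries are
    # invented (the real dictionary holds close to a million items); note
    # that variants are forced onto the standard term.

    TERM_BANK = {
        "telefon": "telephone",
        "telephon": "telephone",      # variant spelling, mapped to the standard
        "fernsprecher": "telephone",  # rejected synonym, mapped to the standard
        "druckmessung": "pressure measurement",
    }

    def annotate(text):
        """Follow each recognized term with its standard translation in
        brackets; unknown words are left for the translator to resolve."""
        out = []
        for word in text.split():
            standard = TERM_BANK.get(word.strip(".,;!?").lower())
            out.append(f"{word} [{standard}]" if standard else word)
        return " ".join(out)

    print(annotate("Der Fernsprecher dient der Druckmessung."))
    # -> "Der Fernsprecher [telephone] dient der Druckmessung. [pressure measurement]"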
Views of Specialists concerning Machine Translation

One of the primary problems in presenting the views of specialists in machine translation results from the low level of research during the past five years. Few groups received any kind of support. The greater part of them could only update their previous systems, not introduce major innovations. In view of the low funding, research was severely restricted, generally devoted to improvements in the lexicon. This limitation in funding greatly restricted the possibility of carrying out new experiments, let alone that of producing improved translation systems which could meet some of the goals held out for machine translation. The views of specialists are accordingly based in part on assumptions framed some years ago when some long-range machine translation projects were able to carry out work in programming and in linguistic analysis, and to test their efforts by means of computer runs.

In his summary on the final day of the January Conference, Bar-Hillel concentrated on the linguistic situation. Noting that the primary considerations are quality, speed and cost, he expected improvements in speed and cost of output from advances in computer hardware and software; but their contributions to improved quality would only be external, for example as printouts would begin to approximate those produced by printing-presses. Essential for improving quality is improvement in linguistic theory and analysis. Bar-Hillel's discussion involved arguments on a definition of quality, and on the receptivity of scientists to output from the translation systems which now are in use, notably the Georgetown system as used at Oak Ridge. This point will be discussed further below, in connection with Zarechnak's statements.

In the system described by Petrick, deep structures are related to their semantic interpretations; this is accomplished through the use of a translation mechanism due to Knuth. (The task of relating these deep structures to surface forms is, to be sure, quite complex. Even relatively simple sentences may require as many as forty or fifty transformational applications.) The syntactic analysis algorithm which is utilized is valid for a significant class of transformational grammars. This, together with the modular nature of the Knuth semantic interpreter, makes modification of both the syntactic and semantic components relatively easy. It should be noted that a system of this kind is being implemented at IBM.

The following examples indicate difficulties, and characterize inadequacies, which each type of system fails to resolve. If that type of system were used, these shortcomings would have to be removed by pre-editing or post-editing.

1. Lexical translation, with no access to syntactic information. Under such a system milk might be taken as verb or noun.

2. Syntactic translation, with no access to semantic information.

The conductor broke.

Under such a system disambiguation would be impossible.

3. Semantic translation, without contextual theory.

We watched the conductor. He smiled.
We watched the conductor. It was on fire.

Here too disambiguation would be impossible.

In accordance with this sketch of potential systems, we expect the highest quality from a system which is at stage 5, or possibly at stage 4. The requirements for these stages have not yet been handled in linguistic theory, and accordingly at present they are unattainable. To what extent a system at stage 3 will be able to translate scientific and technical materials acceptably will depend on testing of the output, and the receptivity of users after such a system has been developed.
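The enumerated shortcomings can be made concrete in a short sketch; the miniature lexicon, its readings, and the in-sentence "cue" mechanism standing in for semantic selection are assumptions made for illustration only.

    # A sketch of why the enumerated system types fail where they do, using
    # the "milk" and "conductor" examples from the text.

    LEXICON = {
        "milk": [("noun", "milk(liquid)"), ("verb", "milk(extract)")],
        "conductor": [("noun", "conductor(person)"), ("noun", "conductor(wire)")],
    }

    def stage1_lexical(word):
        """Stage 1: lexical only. All readings survive: milk as verb or noun."""
        return LEXICON[word]

    def stage2_syntactic(word, pos):
        """Stage 2: syntax filters by part of speech. 'The conductor broke.'
        remains ambiguous, since both surviving readings are nouns."""
        return [r for r in LEXICON[word] if r[0] == pos]

    def stage3_semantic(word, pos, cues):
        """Stage 3: semantic cues from within the same sentence disambiguate.
        Without a contextual theory, cues in a following sentence
        ('He smiled.' / 'It was on fire.') are unavailable."""
        readings = stage2_syntactic(word, pos)
        narrowed = [r for r in readings if any(c in r[1] for c in cues)]
        return narrowed or readings   # no in-sentence cue: still ambiguous

    print(stage1_lexical("milk"))                          # noun and verb readings
    print(stage2_syntactic("conductor", "noun"))           # still two readings
    print(stage3_semantic("conductor", "noun", []))        # no cue: still two
    print(stage3_semantic("conductor", "noun", ["person"]))  # resolved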
Systems at this stage are now under development.

The questions raised in the Study are also of interest to scholars who could not participate, as a recent article by Kulagina, Mel'chuk and Rozentsveyg indicates. It is noteworthy that, like Bar-Hillel and other participants in the Study, the three authors concentrate on the quality to be achieved, assuming that cost and time can be adequately managed. The authors express their views concerning the feasibility of machine translation with regard to the ALPAC report, especially its view that machine translation is at present impractical. They state: "We wish to declare decisively that this view has no real support: it is founded upon a failure to understand the problem in principle and confusion of its theoretical, scientific and practical aspects. The fact that machine translation has been ineffectual in practice to the present should, in our opinion, lead to an increase rather than a decrease in efforts in this area, especially in exploratory and experimental work. It is clear that no practical result can precede fundamental development of the problem, although the possibility is not excluded that useful practical results may be the product of early stages of research. There is not, and has not been, a crisis in machine translation as a scientific undertaking, a crisis which would be reflected in a lack of ideas and a lack of understanding what path to follow. Machine translation as a scientific undertaking... is continuing to develop actively. There are many interesting ideas and approaches which are far from being sufficiently developed and experimentally tested." After making this critique of a negative approach, they state that a high-capacity, high-quality machine translation system remains a realistic objective.

Conclusions

1. In spite of the progress that has been made in linguistic analysis, linguistic research has dealt primarily with syntactic analysis of individual sentences, and hardly at all with semantic problems and discourse analysis. As a result, current linguistic theory is inadequate for machine translation. In view of the Peters-Ritchie results, it may be advisable to continue efforts with more restricted grammatical models which provide exact surface analysis based on syntactic and semantic features in the lexicon. Examples are string analysis, the model used at the Linguistics Research Center, dependency theory of the Soviet type, and grammatical models whose transformational apparatus is more restricted than that of "standard" transformational grammars, for example, systems which use non-ordered or partially ordered transformations or equivalence transformations. Further, research in discourse analysis should be increased. Since the problems in machine translation are not the generation of coherent discourse but the carrying across of information, the achievement of translation would be considerably facilitated by such models. These problems may even be less pressing in actual practice because of the user reaction; that is, very often it may not be necessary for the system to represent all alternatives, since the user will be able to provide the proper reading because of his access to information necessary for comprehension. Investigations on user-translation interaction should be carried out, especially in view of the highly divergent estimates of Zarechnak and Bar-Hillel. See also section 4, p. 46.

2. Like other technological applications, machine translation can be designed with various degrees of adequacy. The history of machine translation reflects this situation.
The first attempts were primarily lexical. Syntactic analysis was then added. Currently semantic analysis is included for projected machine translation systems. The improved understanding of language resulting from these progressively more comprehensive descriptions of language leads to improved translations. Translations based on semantic analysis will be correct when the information needed for disambiguation of a sentence is contained in that sentence. When it is not, contextual and pragmatic information will be necessary.

3. Meaning is largely determined by the semantic readings of the lexical items in a sentence and the syntactic (semantic) relations between those items; these are presumably represented by the underlying structures of language. To arrive at the meanings of specific sentences, the underlying structure will have to be determined from the surface structure. In related languages, such as English and German, the relationships between surface and underlying structure are more similar than they are between less related languages like Russian and English or unrelated languages, such as English and Chinese. Accordingly, it will be simpler to devise translation systems for related languages. For the development of the technology of machine translation, systems designed for related languages are accordingly recommended at this time as an immediate goal. Medium-range goals (Russian-English) and long-range goals (Chinese-English) should also be planned.

4. The usefulness of translation depends on various factors: cost, timeliness, comprehensibility. In locations where imperfect, lexically-based machine translations are available, scientists have selected these over human translation when they could be made available the following day and human translations only after a week. In view of this situation, studies should be performed to measure the extent to which comprehensibility of a translation is dependent on the knowledge available to the actual user. Moreover, it should be noted that timeliness ranks high as a factor in translation. See also page 46.

5. Participants in the Study did not agree on what constitutes "high quality" translation. There is apparently no absolute standard. Rather, standards must be defined with reference to specific users and specific purposes.

Recommendations

1. On the basis of this Study it is recommended that support be made available for research in machine translation. The recommendation is made on the grounds that quality translation can be achieved in the near future. This recommendation agrees strikingly with conclusions reached in a study carried out in the Soviet Union. Moreover, apart from attempts in information retrieval, machine translation is currently the only discipline which requires the study of problems beyond the sentence boundary. Because of the general lack of interest in these problems on the part of linguists, machine translation should be sponsored as an intellectual pursuit contributing to our knowledge of language.

2. For improved machine translation, research in the areas of descriptive linguistics, theoretical linguistics, comparative linguistics, stylistics, and evaluation of translation is necessary and should be supported.

2.1 Lexical research is necessary to determine the syntactic and semantic patterns of linguistic entities.
Recent lexical research has indicated that entities such as verbs which have more than one meaning may have a particular meaning (1) only when they occur in specific syntactic environments, whereas they have meaning (2) or further meanings when they occur in other specific environments; a sketch of such environment-based sense selection is given at the end of this section. To illustrate the effect of only a trivially improved lexicon on translation, the report of an experiment conducted by Stachowitz in the spring of 1967 is appended.

6. Explicit study should also be made of the kind of information available to the user which is necessary for the understanding of material that is mechanically translated. Such studies should seek to determine the amount of knowledge available from the surrounding text, as well as the amount of world knowledge necessary for the understanding of individual sentences. These investigations would be designed to determine the amount of information which must be provided to the machine so that the output is intelligible to a specialist or a general user.

7. Since the results of linguistic research will contribute to advances in machine translation, support is also recommended for research on problems in linguistics.
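The environment-based sense selection referred to in recommendation 2.1 may be sketched as follows; the verb, its senses, and the environment labels are invented for illustration.

    # A sketch of sense selection by syntactic environment: a verb takes
    # meaning (1) in one environment and meaning (2) in another. Verb,
    # senses and environment labels are illustrative assumptions.

    SENSES = {
        "run": {
            "NP __":        "move-on-foot",    # "The athlete runs."
            "NP __ NP":     "manage",          # "She runs a laboratory."
            "NP __ for-NP": "be-a-candidate",  # "He runs for office."
        },
    }

    def select_sense(verb, environment):
        """Pick the sense recorded for this environment; mark the verb as
        unresolved rather than guessing when no environment matches."""
        return SENSES.get(verb, {}).get(environment, f"{verb}(unresolved)")

    print(select_sense("run", "NP __ NP"))     # manage
    print(select_sense("run", "NP __"))        # move-on-foot
    print(select_sense("run", "NP __ Adv"))    # run(unresolved)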
Main paper: translation: human and machine: The topic of machine translation is rarely discussed without reference to translation by man. In the comparison, several stereotypes have evolved.For clarity in dealing with the issue of machine translation these may be briefly noted.The human translator is generally assumed to be highly skilled, both in the subject matter and in the source and target languages. Some commentators consider skill in the source language less essential than skill in the target language. Kay holds knowledge of the subject matter to be the most essential consideration. Accordingly, it will be no small task to provide machine translation systems with detailed information on scientific topics, and to program them to use this information. Human translators must also acquire knowledge of specific scientific and technical areas. With skills in the source and target languages, and control over the subject matter, the human translator is assumed to have great flexibility. Moreover, besides flexibility he provides immediate access to the text.When, however, one considers the broad scope of scientific writing, and vocabulary, this ideal picture loses some of its attractiveness. To meet the problem, the German translation service has been compiling a large dictionary of technical terms and their standard translation. In this compilation, specific translations are fixed. The project, accordingly, is designed to standardize and normalize translations, as well as to provide assistance for human translators. Moreover, the dictionary is mechanized.Eventually, any text to be translated is to be provided to the translator in a print-out having the translations of all terms in the dictionary, as well as the original. The translator's responsibility would then consist in framing the sentences in the target language. He would also determine the meanings of any new terms. In this way the dictionary would be expanded and updated.The dictionary of the German translation service contains close to a million items. Problems which human translators face when using generally available dictionaries, which have far fewer entries, may be put in perspective by this resource. The arrangements for translators in the German translation service may also illuminate the requirements for computerassisted translation.It is occasionally proposed that computer-assisted translation is an attainable compromise, with better output than that from the individual translator and fewer awkward renditions than those provided by machine translation.Whatever one's reaction to this view, it should be noted that computer-assisted translation requires a large staff of research scholars, and a large computer facility. Kay, a proponent of machine-human translation, proposes an elaborate scheme to permit human beings to assist a system that is essentially a machine translation system. Under this scheme human beings would make decisions which the machine would be incapable of making and thus assure a high-quality output. His scheme envisions several native, possibly monolingual speakers of the source language, several monolingual speakers of the target language and one highly competent bilingual, to whom problems requiring knowledge of both languages would be shunted. In other words, the expenditure for staff and equipment would not be small, actually larger than that for machine translation. 
Clearly, computer-assisted translation is proposed as a second choice, through desperation that machine translation is unattainable at present.In One obvious advantage is the speed with which the translations would be provided.Among the advantages of machine translation is consistency. As in the German translation service, standard terms could always be produced.As a simple example, the German translation service decided to use Telefon rather than Fernsprecher; even the variant Telephon was considered erroneous.In much the same way, any technical term need never be varied, unlike the practice of many translators.If the quality of such translations is to equal that of the most accurate human translations, a comprehensive dictionary and grammar are essential, as well as the necessary hardware and the software techniques. Achieving these has been the major goal of machine translation. In the next section we note the current status of these three requirements. techniques involved in machine translation:: hardware, software, linguisticsIn the early attempts at machine translation, the capacities of computers were a major problem. Difficulties resulted especially from the inadequacy of rapid-access memories. For processing languages, the available rapid-access storage space was filled with the major rules for grammatical constructions. Lexical items accordingly had to be stored in memories, usually on tapes, which required a considerable period of searching. As a result, even simple sentences required a long time for analysis. When the Linguistics Research Center was carrying out its research with an IBM 7040, several years ago, the computer would "grind" all night to translate a few sentences.To speed up the process, attempts were made to develop special- In time, procedure-oriented languages were used to produce programs of more general usefulness to linguists. Dictionary lookup and maintenance programs and context-free grammar parsing programs were followed by such programming systems as J. Friedman's transformational grammar tester and S. Petrick's transformational grammar syntactic analyzer. Systems such as these can be considered to be problem-oriented programming languages. The IBM natural language question answering project mentioned by Petrick in his appended paper uses Friedman's grammar tester system as well as a transformational syntactic analysis system that provides for a linguistically more realistic class of transformational grammars than could previously be accepted using its predecessor.Certainly presently existing procedural and problem-oriented languages make the mechanization of many linguistic processes easier than was the case a few years ago. The programming of many linguistic algorithms remains a slow and difficult task, however, as is the case for most complex algorithmic processes.The scope of the programs necessary for machine translation may be noted by examining a flow-chart of the programs that had been projected, and in part completed, at the Linguistics Research Center. These were produced entirely from scratch. Because the basic programs furnished by computer manufacturers were so inadequate, the Linguistics Research Center programs were written in machine language. 
The expenditure of time was enormous.The magnitude of the problem may be noted if one compares the cost incurred by IBM in developing PL-1; the cost of it far surpasses the entire amount which was spent on machine translation from the beginning of machine translation research.Gradually, adequate computer programs were devised for data processing. These now form the basis of programming systems used for machine translation. Like all programs, they need modification, and improvement, especially to speed up processing. The basic programs, however, are available and to them the additional programs needed for language processing can be added. Like computer equipment, programming systems will be improved. inadequate model of English would not. It would take "he" to be the subject of the verb + adjective combination, and also of the infinitive; its analysis of sentence 1 would therefore be wrong. A literal interpretation would fail to determine the proper meaning of this sentence and many other sentences. By determining the deep structure the meaning can be more easily arrived at.Alternatively, "easy" may be provided with a feature which would transform "NP be easy to -Inf" to "it be easy to -Inf NP" in accordance with Harris' use of transformations.Language is structured in this way in all its components, the phonological as well as the syntactic. In both of these components, it is a code, rather than a cipher, system. The human brain knows how to interpret the code. If machines are to interpret language, they must be provided with a comparable capability. Engineers have been working on machine interpretation of the phonological system; it would be useful, for example, if telephone "dialing" could be done by voice, rather than manually. Engineers have not mastered the problem, however, even though they are aware of the basic difficulty.The problem in the syntactic component of language has been one of the central issues for linguistics since the publication of Saussure's Cours in 1916 (though it was known earlier). Various labels have been given to the underlying structure. Saussure used the traditional philosophical terms:"form" for the underlying structure and "substance" for the surface structure.Recently the term surface structure has been used almost exclusively rather than substance, and deep or underlying structure rather than form.In view of this structure of language, techniques must be devised to get from the surface structure to the deep structure. Peters and Ritchie, 1971 ).This realization has important consequences for machine translation.The fact cannot be escaped that in machine translation, one must somehow determine the underlying forms of sentences. Further, the technique of using some relational formulae like reverse transformations is also clearly necessary.In considering linguistic techniques, the fundamental question is: how can these formulae be adequately restricted so that they yield only the specific underlying structure intended by the author, that is, the proper meaning?Two devices must be used: the lexical elements must be described as precisely as possible, so that only the desired transformations apply; the transformations must be devised in such a way that their use is properly restricted.Exploiting this understanding of the necessary procedures will require considerable work. The lexical analysis alone will be a huge task. The assumption of a universal base receives support from the capability of speakers to translate. 
It is also supported by the capability of infants to learn any language, to learn it rapidly, and in accordance with well-determined stages. Whatever a baby's ancestry, it acquires the language it hears.Moreover, the stages of linguistic development are fixed for virtually all infants, regardless of their intelligence.These observations are most plausibly accounted for if we assume some fundamental principles common to all language; further, that these somehow are related to the functioning of the brain. The principles are highly abstract. They permit certain linguistic structures and constrain others which are theoretically possible. As yet they are not by any means thoroughly explored. The term "universal" has been used for general characteristics of language; one example of a universal may be exemplified here, with two sentences and their variants.1.a She regretted the fact that she had taken the book. 2.c What did she regret that she had taken?BUT NOT 1.c *What did she regret the fact that she had taken?The impossibility of 1.c results apparently from a universal principle which blocks the extraction of an element out of a clause modifying a noun phrase.This principle was formulated by Ross (1967, 66-70) as "The Complex NPConstraint." Since this principle applies to all languages which have been examined, it is assumed to be a universal characteristic of language.Whatever the views which will be formulated concerning universals, this principle, like other universal principles that are being investigated, restricts the possible transformations for structures of language. Since language is governed by such constraints, the model which must be constructed to embrace all languages must have certain limits. Moreover, if only because of the finiteness of the human brain, grammars must be finite.These observations lead to the conclusion that a mechanical translation system can be devised. Even more support is provided by the conclusion of much recent linguistic study that we may posit the existence of a universal base. For the surface structures of any language can be related to such a universal base. Since the universal base in turn can be used for deriving the surface structures of any language, the universal base can serve as the intermediate language between any source language and any target language.The possibility of devising a translation system in view of the fact that a universal base may exist still leaves many problems. guages and also for use in machine translation systems, for which they produce far too many syntactic interpretations of any given sentence.The production of devices to map surface structures stringently into underlying structures is one of the most serious concerns of current linguistics.Bach's paper noted above (1971) is an example. A device projected by Stachowitz has been described in RGEMT (1970) . It makes use of an underlying form, the standard strings of a language. Associated with these strings are canonical forms, which represent the meanings of given sentences. "The language" of these is assumed to be "common to all natural languages" (Stachowitz, 1970, T-65) . Fuller information on the model is given T-66ff. The quotation here may be adequate to indicate that the canonical forms correspond to a universal base. A description of translation as it is being pursued in accordance with this model at the Linguistics Research Center is appended (Stachowitz paper).Of great importance for the Linguistics Research Center system is a well-designed lexicon. 
The intensive lexicographical work which has been going on at the Center for more than two years now has resulted in great amounts of syntactic information; the incorporation of semantic information is currently in progress. Because of their syntactic and semantic classification, the lexical entries will limit the possibilities of relationship with canonical forms. In this way a proper match will be brought about between the lexical and syntactic elements of the source language and those of the target language.The design of the lexicon has been vastly improved over dictionaries envisaged a decade ago. Without intending to dwell on the naivete of these and their proponents, reference might be made one further time to the saying which was supposedly quite ambiguous and accordingly a prime exhibit of the As linguists have improved their models of language, the problem of ambiguity has been reduced. It may be noted that the attention to the pragmatics of Peirce repeats a position held early in machine translation research. By attention to pragmatics, that is, to information on the "origin, uses and effects" of language, sentences belonging to the class of proverbs are not treated like sentences found in scientific exposition. Attention has also been focussed on such classes as illocutionary verbs, or on the characteristics of speech acts.Thus a sentence like: I pronounce you man and wife would not be treated as a simple declarative statement, with the meaning of "pronounce" in a sentence like: They pronounce greasy with a voiced groove fricative.While models of language in this way incorporate far more information about individual sentences than did the purely syntactic-based grammars of a decade ago, means must be devised to take account of the more accurate analysis of language which is now projected. Suggestions vary concerning the implications of these developments, as the following chapters will indicate.Some specialists consider machine translation unlikely unless at the same time automatic information and fact retrieval are made possible. Others hold that machine translation is not now, and may never be, contemplated for types of language outside technical and scientific documents; accordingly the "origin, uses and effects" of the material to be translated are determined, and investigators dealing with machine translation should direct their concerns at this restricted type of language.Whatever steps are selected to employ findings of contemporary linguistic to carry out machine translation, it should be noted that specialists in machine translation have taken account of these findings. There accordingly is no conflict between specialists in descriptive linguistics, linguistic theory and machine translation, as Chapter 6 below will outline in further detail. As descriptive linguists improve their understanding of language, and the models by which to express that understanding, machine translation specialists will update their procedures and models. pertinent recent work in linguistics: Contemporary linguistics is concerned with all facets of language, its syntax, semantics, aberrant uses, uses in established social situations, its relation to other disciplines, such as logic and so on. 
This breadth of concern contrasts strikingly with self-imposed limitations of the recent past.It also leads one to examine the extent of concern of machine translation with One of the problems in this definition is the existence of two meanings of any, one illustrated in Ross's sentence:Anybody could have shot MaxThe meaning of any here might be made more precise by adding: whatsoever.This use of any is found only when possibility is involved; it is not used with must. A different meaning (some) is found in the question: Do you know any songs? If, however, stress were put on any, especially in a negative sentence, the "any. . .whatsoever" meaning would emerge.Accordingly, as study of quantifiers has been pursued more extensively it is clear that a sentence like the following can have two meanings:We do not believe that any catalyst could have precipitated the reaction.On one interpretation, this sentence could be roughly equivalent to: that some catalyst, that is, that a selected catalyst was involved. By another interpretation it would be equivalent to: that any catalyst whatsoever was involved, that is, that no involvement by a particular catalyst was possible.This indeterminacy of usage in English presents a translation problem, for German, too, indicates slight differences of meaning with quantifiers such as irgendein 'someone, anyone'. These differences are especially important in the colloquial, as the following quotation from the Duden Grammatik (1959, 265 ) may indicate:Irgendeiner muß es doch getan haben! "Someone must surely have done it!" Such usages are often found in conjunction with modals and adverbs, like doch in this sentence. For the examples of any which Ross cited, however, translation would not be a problem inasmuch as the ambiguity in English is preserved when literal translation into German is carried out.The use of any in such a relatively straightforward sentence is simply the beginning of Ross's interest. He has pursued the difference in uses of any in syntactic constructions illustrating general syntactic patterns or principles which he has investigated intensively. One of these is a "maximal domain of syntactic processes, "-in his word "island".An island is subject to special constraints, as the impossibility of the Since the paper is appended, it may be consulted for details. The general types of analytic techniques, as well as the types of "linguistic information" in this sentence pointed out by Fillmore will be summarized here.Moreover, in evaluating the implications of Fillmore's analysis for machine translation, it is important to note that he used only the written form of the sentence, excluding information that might be obtained from "any understanding of the voice quality of the speaker on the manner of utterance."Identifying first the "syntactic information" in the sentence, Fillmore uses it to determine among the three possible functions of may the one which is appropriate in this sentence.Next, examining the "illocutionary force of the question," Fillmore notes the information on deixis furnished by the pronoun and the verb come.The term "illocutionary force" refers in Fillmore's paper to the obligation which the question imposes on the addressee, that is, the obligation to exercise authority. The term "deixis" refers to the various aspects of the interpretation of sentences that relate to the speech act situation, such as person deixis, place deixis and time deixis. 
The possible meanings of come are restricted by its use in "a permission-seeking utterance."Last, it may be noted that Fillmore determines the meaning of the sentence from its "surface structure." He has done so by using a comprehensive lexical description for each of the four words. The possible meanings of each are restricted by the order of the sentence and by the selection of the other elements. That is to say, disambiguation was carried out by using two syntactic devices: order and selection.In conclusion, Fillmore lists "the various kinds of facts which must.. . be included in a fully developed system of linguistic description." These areextensive. Yet such explicit linguistic descriptions permit a mechanical disambiguation, and interpretation, of a given sentence.The effort required to produce these descriptions will, however, beenormous. An example of the analysis necessary for improved interpretation of sentences, which will be particularly important for information processing, is Karttunen's paper, "The Logic of English Predicate Complement Constructions." This paper, which is also appended, leads to seven classes of verbs, each indicating a commitment which "the main sentence carries along.. .with respect to the truth or falsity of its complement" and an indication of "what is implied." For example, the verb cause belongs to one of these classes which carries a commitment "true" for main as well as complement sentences. The seven classes of verbs arrived at in the paper identify meanings in much the same way as did the syntactic information in the sentence: May we come in?Linguists accordingly are drawing nearer to lexicographical work of the past, as represented especially by Zgusta and Josselson in the Study.Since the use of lexicographical techniques for machine translation is discussed in the appended papers of Zgusta, they will not be further noted here.Current linguistic description in this way is providing information on detailed lexical classes, as well as on syntactic constructions. These two types of information about language, whether they be labeled syntactic or semantic, are leading to descriptions of language which are so precise that the sense of a sentence can be determined mechanically.The generative semanticists, besides Ross, who participated in the sentences so thoroughly that they can be interpreted from the linguistic information contained within them-is of great significance for machine translation.To the extent that this interest is accomplished, machines wilt be able to translate. views of specialists concerning machine translation: One of the primary problems in presenting the views of specialists in machine translation results from the low level of research during the past five years. Few groups received any kind of support. The greater part of them could only update their previous systems, not introduce major innovations.In view of the low funding, research was severely restricted, generally devoted to improvements in the lexicon. This limitation in funding greatly restricted the possibility of carrying out new experiments, let alone that of producing improved translation systems which could meet some of the goals held out for machine translation. 
Linguists accordingly are drawing nearer to lexicographical work of the past, as represented especially by Zgusta and Josselson in the Study. Since the use of lexicographical techniques for machine translation is discussed in the appended papers of Zgusta, they will not be further noted here. Current linguistic description in this way is providing information on detailed lexical classes, as well as on syntactic constructions. These two types of information about language, whether they be labeled syntactic or semantic, are leading to descriptions of language which are so precise that the sense of a sentence can be determined mechanically. This interest of the generative semanticists, besides Ross, who participated in the January Conference, in describing sentences so thoroughly that they can be interpreted from the linguistic information contained within them, is of great significance for machine translation. To the extent that this aim is accomplished, machines will be able to translate.

Views of specialists concerning machine translation:

One of the primary problems in presenting the views of specialists in machine translation results from the low level of research during the past five years. Few groups received any kind of support. The greater part of them could only update their previous systems, not introduce major innovations. In view of the low funding, research was severely restricted, generally devoted to improvements in the lexicon. This limitation in funding greatly restricted the possibility of carrying out new experiments, let alone that of producing improved translation systems which could meet some of the goals held out for machine translation.

The views of specialists are accordingly based in part on assumptions framed some years ago, when some long-range machine translation projects were able to carry out work in programming and in linguistic analysis, and to test their efforts by means of computer runs.

In his summary on the final day of the January Conference, Bar-Hillel concentrated on the linguistic situation. Noting that the primary considerations are quality, speed and cost, he expected improvements in speed and cost of output from advances in computer hardware and software; but their contributions to improved quality would only be external, for example as printouts would begin to approximate those produced by printing presses. Essential for improving quality is improvement in linguistic theory and analysis. Bar-Hillel's discussion involved arguments on a definition of quality, and on the receptivity of scientists to output from the translation systems which now are in use, notably the Georgetown system as used at Oak Ridge. This point will be discussed further below, in connection with Zarechnak's statements.

In one system under development, deep structures are related to surface forms through the use of a translation mechanism due to Knuth. (The task of relating these deep structures to surface forms is, to be sure, quite complex. Even relatively simple sentences may require as many as forty or fifty transformational applications.) The syntactic analysis algorithm which is utilized is valid for a significant class of transformational grammars. This, together with the modular nature of the Knuth semantic interpreter, makes modification of both the syntactic and semantic components relatively easy. It should be noted that the system being implemented at IBM is of this kind.

The following examples indicate difficulties, and characterize inadequacies, which each type of system fails to resolve. If that type of system were used, these shortcomings would have to be removed by pre-editing or post-editing.

1. Lexical translation, with no access to syntactic information. Under such a system milk might be taken as verb or noun.

2. Syntactic translation, with no access to semantic information.

The conductor broke.

Under such a system disambiguation would be impossible.

3. Semantic translation, without contextual theory.

We watched the conductor. He smiled.
We watched the conductor. It was on fire.

Here too disambiguation would be impossible.

In accordance with this sketch of potential systems, we expect the highest quality from a system which is at stage 5, or possibly at stage 4. The requirements for these stages have not yet been handled in linguistic theory, and accordingly at present they are unattainable. To what extent a system at stage 3 will be able to translate scientific and technical materials acceptably will depend on testing of the output, and on the receptivity of users after such a system has been developed. Systems at this stage are now under development.
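The contrast among the three types may likewise be pictured in a small programming sketch (Python, for exposition only). The vocabulary, the feature names, and the animacy rule are assumptions introduced for illustration; the third function shows the kind of cross-sentence information that a sentence-bound system without contextual theory cannot reach.

    # A minimal sketch, assuming an invented lexicon format and toy rules.
    LEXICON = {
        "conductor": [
            {"gloss": "orchestra leader",     "pos": "noun", "animate": True},
            {"gloss": "electrical conductor", "pos": "noun", "animate": False},
        ],
        "milk": [
            {"gloss": "milk (the substance)", "pos": "noun"},
            {"gloss": "to milk",              "pos": "verb"},
        ],
    }

    def lexical(word):
        # Type 1: dictionary look-up alone keeps every equivalent.
        return [r["gloss"] for r in LEXICON[word]]

    def syntactic(word, pos):
        # Type 2: syntax can fix the part of speech, but nothing more.
        return [r["gloss"] for r in LEXICON[word] if r["pos"] == pos]

    def semantic(word, pos, following_pronoun):
        # Type 3 and beyond: a pronoun in the next sentence ("he" or "it")
        # selects by animacy -- contextual information that a system
        # confined to the single sentence cannot consult.
        animate = following_pronoun in ("he", "she")
        return [r["gloss"] for r in LEXICON[word]
                if r["pos"] == pos and r.get("animate") == animate]

    print(lexical("milk"))                       # both equivalents survive
    print(syntactic("conductor", "noun"))        # still two readings
    print(semantic("conductor", "noun", "he"))   # ['orchestra leader']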
The questions raised in the Study are also of interest to scholars who could not participate, as a recent article by Kulagina, Mel'chuk and Rozentsveyg indicates. It is noteworthy that, like Bar-Hillel and other participants in the Study, the three authors concentrate on the quality to be achieved, assuming that cost and time can be adequately managed. The authors express their views concerning the feasibility of machine translation with regard to the ALPAC report, especially its view that machine translation is at present impractical. They state: "We wish to declare decisively that this view has no real support: it is founded upon a failure to understand the problem in principle and confusion of its theoretical, scientific and practical aspects. The fact that machine translation has been ineffectual in practice to the present should, in our opinion, lead to an increase rather than a decrease in efforts in this area, especially in exploratory and experimental work. It is clear that no practical result can precede fundamental development of the problem, although the possibility is not excluded that useful practical results may be the product of early stages of research. There is not, and has not been, a crisis in machine translation as a scientific undertaking, a crisis which would be reflected in a lack of ideas and a lack of understanding what path to follow. Machine translation as a scientific undertaking ... is continuing to develop actively. There are many interesting ideas and approaches which are far from being sufficiently developed and experimentally tested."

After making this critique of a negative approach, they state that a high-capacity, high-quality system remains the goal.

1. In spite of the progress that has been made in linguistic analysis, linguistic research has dealt primarily with syntactic analysis of individual sentences, and hardly at all with semantic problems and discourse analysis. As a result, current linguistic theory is inadequate for machine translation. In view of the Peters-Ritchie results, it may be advisable to continue efforts with more restricted grammatical models which provide exact surface analysis based on syntactic and semantic features in the lexicon. Examples are string analysis, the model used at the Linguistics Research Center, dependency theory of the Soviet type, and grammatical models whose transformational apparatus is more restricted than that of "standard" transformational grammars, for example, systems which use non-ordered or partially ordered transformations or equivalence transformations. Further, research in discourse analysis should be increased. Since the problems in machine translation are not the generation of coherent discourse but the carrying across of information, the achievement of translation would be considerably facilitated by such models. These problems may even be less pressing in actual practice because of user reaction; that is, very often it may not be necessary for the system to represent all alternatives, since the user will be able to provide the proper reading because of his access to information necessary for comprehension. Investigations on user-translation interaction should be carried out, especially in view of the highly divergent estimates of Zarechnak and Bar-Hillel. See also section 4, p. 46.

2. Like other technological applications, machine translation can be designed with various degrees of adequacy. The history of machine translation reflects this situation. The first attempts were primarily lexical. Syntactic analysis was then added. Currently semantic analysis is included for projected machine translation systems. The improved understanding of language resulting from these progressively more comprehensive descriptions of language leads to improved translations. Translations based on semantic analysis will be correct when the information needed for disambiguation of a sentence is contained in that sentence. When it is not, contextual and pragmatic information will be necessary.
3. Meaning is largely determined by the semantic readings of the lexical items in a sentence and the syntactic (semantic) relations between those items; these are presumably represented by the underlying structures of language. To arrive at the meanings of specific sentences, the underlying structure will have to be determined from the surface structure. In related languages, such as English and German, the relationships between surface and underlying structure are more similar than they are between less related languages like Russian and English or unrelated languages such as English and Chinese. Accordingly, it will be simpler to devise translation systems for related languages. For the development of the technology of machine translation, systems designed for related languages are accordingly recommended at this time as an immediate goal. Medium-range goals (Russian-English) and long-range goals (Chinese-English) should also be planned.

The usefulness of translation depends on various factors: cost, timeliness, comprehensibility. In locations where imperfect, lexically-based machine translations are available, scientists have selected these over human translation when they could be made available the following day and human translations only after a week. In view of this situation, studies should be performed to measure the extent to which comprehensibility of a translation is dependent on the knowledge available to the actual user. Moreover, it should be noted that timeliness ranks high as a factor in translation. See also page 46.

Participants in the Study did not agree on what constitutes "high quality" translation. There is apparently no absolute standard. Rather, standards must be defined with reference to specific users and specific purposes.

1. On the basis of this Study it is recommended that support be made available for research in machine translation. The recommendation is made on the grounds that quality translation can be achieved in the near future. This recommendation agrees strikingly with conclusions reached in a study carried out in the Soviet Union. Moreover, apart from attempts in information retrieval, machine translation is currently the only discipline which requires the study of problems beyond the sentence boundary. Because of the general lack of interest in these problems on the part of linguists, machine translation should be sponsored as an intellectual pursuit contributing to our knowledge of language.

2. For improved machine translation, research in the areas of descriptive linguistics, theoretical linguistics, comparative linguistics, stylistics, and evaluation of translation is necessary and should be supported.

2.1 Lexical research is necessary to determine the syntactic and semantic patterns of linguistic entities. Recent lexical research has indicated that entities such as verbs which have more than one meaning may have a particular meaning (1) only when they occur in specific syntactic environments, whereas they have meaning (2) or further meanings when they occur in other specific environments. To illustrate the effect of only a trivially improved lexicon on translation, the report of an experiment conducted by Stachowitz in the spring of 1967 is appended.

6. Explicit study should also be made of the kind of information available to the user which is necessary for the understanding of material that is mechanically translated.
Such studies should seek to determine the amount of knowledge available from the surrounding text, as well as the amount of world knowledge necessary for the understanding of individual sentences. These investigations would be designed to determine the amount of information which must be provided to the machine so that the output is intelligible to a specialist or a general user.

7. Since the results of linguistic research will contribute to advances in machine translation, support is also recommended for research on problems in linguistics.

As the appendices indicate, the study brought together specialists in the areas involved in machine translation. The report summarizes their findings. Participants in the study were provided with a preliminary statement of the initial part of this report, except for the conclusions and recommendations, and were asked to send their comments and revisions. These were incorporated in this report, except when they did not seem in keeping with the general conclusions of the various other participants. There were few strikingly diverse points of view.

The objective of this theoretical inquiry is to examine the controversial issue of fully automatic high quality translation (FAHQT) in the light of past and projected advances in linguistic theory and hardware/software capability. The principal purpose of this study is to determine whether the concept of FAHQT is justifiable as a long-range R&D proposition. The study is also concerned with the intermediate-range alternatives to FAHQT, i.e., machine translation forms that are adequate to the user's needs with or without post-editing. Machine-aided translation, based on automated dictionary look-up, is excluded from the study in consideration of the fact that this by-product of machine translation R&D is well within the current state of the art.

In the context of FAHQT, "full automation" implies that the entire translation process is autonomous in the computer, without pre-editing of the source language text and post-editing of the target language output. "High quality" seems to be undefinable in an absolute sense. In referring to machine translation of 100% quality, Bar-Hillel (1) introduced the following qualification: "When I talk about "100%", I obviously have in mind not some heavenly ideal of perfection, but the end product of an average human translator. I am aware that such a translator will on occasion make mistakes and that even machines of a general low quality output will avoid some of these mistakes. I am naturally comparing averages only." Thus viewed, even the concept of 100% quality is not equatable with error-free performance in either form of translation. Understandably enough, participants and consultants failed to reach a unanimous agreement as to the definition of "high quality" in machine translation. This is reflected on p. 48: "There is apparently no absolute standard. Rather, standards must be defined with reference to specific users and specific purposes." In the absence of absolute and universally valid quality criteria, the user of machine translation can be legitimately considered an ultimate judge of its quality. This viewpoint was first expressed by Reitwiesner and Weik (2) as early as 1958.

According to Lamb (3), "all translation can be viewed as human translation since machine translation is nothing but another kind of human translation."
It follows from this observation that the fundamental constraints on machine translation parallel those imposed on human translation. Assuming the well-known limits of translatability, this seems to imply that either form of translation is a priori constrained. In summarizing the problem of translation equivalence between SL (source language) and TL (target language), Catford (4) draws the following conclusion: "The limits of translatability in total translation are, however, much more difficult to state. Indeed, translatability here appears, intuitively, to be a cline rather than a clear-cut dichotomy. SL texts and items are more or less translatable rather than absolutely translatable or untranslatable. In total translation, translation equivalence depends on the interchangeability of the SL and TL texts to (at least some of) the relevant features of situation-substance." Ray (5) recognizes the fact that "every translation necessarily involves some distortion of meaning." However, as is reflected in his statements below, this deficiency is not only manageable, but even unimportant in the practice of translation. "The translation operation is, like the limit operation, possible only under such conditions as "sufficiently" and "arbitrarily", that is, only by the exercise of some evaluative judgement, however little. Since distortion of meaning cannot be avoided, the problem becomes one of confining it to allowable measures of allowable kinds in allowable places along allowable directions." "..., while no two languages will match exactly in the total range of possible discourse, there are infinitely many specific limited ranges of discourse where the distortion of meaning can be legitimately dismissed as of no account."

The feasibility of FAHQT must therefore be considered within the limits of translatability, i.e., taking into account the constraints on total translation. Since the concept of high quality is untenable in the absolute sense, the question of what is feasible in the context of FAHQT is quite probably more meaningful. It would be patently unreasonable at this stage of R&D to postulate machine translation requirements beyond the limits of translatability imposed on human translation.

Machine translation research, based on puristic notions and oriented toward a global solution, was once compared to a search for the Holy Grail. This all-or-nothing attitude has probably caused as much damage to the progress of machine translation research as the early announcements of quick and easy solutions. Perfectionists in this area have generally tended to ignore the injunction by Lecerf (6) that "entreprendre la mise au point d'ensembles de traduction automatique, c'est avant tout accepter la contrainte du réel" ("to undertake the development of machine translation systems is, above all, to accept the constraint of the real").

According to Ljudskanov (7), "The widespread so-called 100 percent approach, along with the belief that MT presupposes the presence of a complete mathematical model of language in general and of the specific languages in particular, in practice amounts to equating the nature and extent of the knowledge of language in general, which is necessary from the point of view of theoretical linguistics, with the extent of knowledge necessary for the achievement of translation from one language into another.
This approach also amounts to equating the description of communication in general with that of the translation process; it ignores the specific characteristics of the process as mentioned above and the general linguistic problems of the theory of translation (both HT and MT) in the general problem area of mathematical linguistics." "... it can be asserted that the current critical state of MT research throughout the world, although much has happened that legitimately causes well-grounded anxieties and doubts as to its possibilities, is due to a certain degree to the maximalistic tendencies, however laudable they may be in themselves, of the global strategy. By giving due consideration to the particular characteristics of the translation process and of its study, as well as to the differentiation of the aims of mathematical linguistics from the theory of MT and of the fields of competence and performance from each other, research in this field would be channeled in a direction both more realistic for our time and more closely in accord with the facts."

The report highlights on p. 4 an important but often ignored difference: that between scientific and technical translations and translations of literary and religious texts, a difference of particular consequence for machine translation requirements. "Even articles and monographs dealing with machine translation have failed to be adequately explicit about the special problems of translating technical and scientific materials by computer. Instead, they have confused the problem by comparing machine translation with the long-practiced human translation, by equating the problems of translating scientific materials with those involved in translating literary materials, and by using the same evaluation criteria for the results."

It is now a commonplace that style of writing is of paramount importance in literary translation, whereas accuracy constitutes the most important quality criterion in scientific and technical translations. According to Gingold (8), "It is not the translator's job to abstract, paraphrase, or improve upon the author's statements. He cannot be expected to convert an article that is poorly organized and badly written in the original language into a masterpiece of English scientific writing. In technical translation, he must always be willing to sacrifice style on the altar of accuracy." Savory (9) has expressed a similar opinion in his statement that "the translation of scientific work is an ideal example of translation of a writing in which the subject matter is wholly on the ascendant and the style is scarcely considered."

The report further emphasizes the crucial importance of timeliness in the production of scientific and technical translations. According to the statement on p. 5, "... timeliness is of increasing importance to users of scientific translations. Even in a relatively unhurried field like linguistics, few articles retain their importance over a long period. Statements have been made repeatedly about the obsolescence of publications issued a few years earlier. The insistence among technical specialists and scientists for speedy translation contrasts markedly with the length of time permitted for completing literary translations."
The requirement of timeliness was stressed elsewhere by Gingold (10): "The delay between the appearance of the original journal and its English translation, which may be a year or more, is also a disadvantage, particularly to industry, where time is usually of great importance."

The principal findings of the study, as related to its objectives, can be summarized as follows. Computer hardware is no longer considered a crucial problem in machine translation. "Remarkable improvements, especially in rapid-access storage devices, have largely eliminated the problems caused by inadequate computers. Lexical items can now be retrieved as rapidly as were the major syntactic rules a decade ago. And with further improvements of storage devices in process, computers no longer pose major problems in machine translation." (p. 12). Developmental prospects in this area are very bright indeed, particularly with the advent of holographic memories. The impact of such memories on both linguistic and computational aspects of machine translation R&D is discussed in detail by Stachowitz in one of his contributions to the report ("Requirements for Machine Translation: Problems, Solutions, Prospects," pp. 409-532). This contribution is considered significant because it provides a complete blueprint for a realistic implementation of a large-scale machine translation system.

Equally encouraging is the appreciation of advances in computer software. "Programming has evolved as rapidly as have computers... A key factor here was the enrichment of programming language data types which made possible efficient representation and manipulation of linguistic structures." (p. 13).

The report reflects a unanimous agreement of participants and consultants that "the essential remaining problem is language" (pp. 14-15). It is, therefore, not surprising that linguistics has received much more attention in the study than computer hardware and software. Recommendations presented on pp. 49-51 are exclusively oriented toward linguistic research in the context of machine translation.

The report points out that there is "no conflict between specialists in descriptive linguistics, linguistic theory and machine translation... As descriptive linguists improve their understanding of language, and the models by which to express that understanding, machine translation specialists will update their procedures and models." (p. 24). However, the report also reflects a difference of opinion between machine translation experts and linguists as regards the nature, orientation and scope of linguistic research involved in machine translation. It is further worth noting that some linguists participating in this study have not acknowledged Ljudskanov's caveat about "maximalistic tendencies of the global strategy."

The reader is referred to Conclusions (pp. 45-48) and Recommendations (pp. 49-51), summarizing the results achieved in the performance of this study. Recommendation of support for research in machine translation is based on the fact that "quality translations can be achieved in the near future. This recommendation agrees strikingly with conclusions reached in a study carried out in the Soviet Union." (p. 49). Galilei's challenge ("Eppur si muove!" "And yet it moves!"), aptly chosen as a motto in the Introduction to (11) by Kulagina and Mel'chuk, would be equally appropriate as an expression of the views and sentiments embodied in the main part of this report.
As one of the leading experts, Eugene A. Nida has stated in his most recent contribution to the topic (Nida and Taber, 1969, 1): "Never before in the history of the world have there been so many persons engaged in the translating of both secular and religious materials." The book intimates that the requirements for translation will increase. Moreover, it describes more specifically and concretely than earlier discussions the steps that are involved in translation. Translation is defined (Nida and Taber, 1969, 12) as "reproducing in the receptor language the closest natural equivalent of the source-language message." And the paragraph continues: "this relatively simple statement requires careful evaluation of several seemingly contradictory elements."

For a fuller statement on the problem of translation, we refer to the important books by Nida and their bibliographies. His latest book, however, contains further perceptive statements that are important to include here. A section on "the old focus and the new focus" of translating (Nida and Taber, 1969, 1) states that "the older focus in translating was the form of the message... The new focus, however, has shifted from the form of the message to the response of the receptor." Further, "even the old question: Is this a correct translation? must be answered in terms of another question, namely: For whom?" After a brief answer, the section continues: "In fact, for the scholar who is himself well acquainted with the original, even the most labored, literal translation will be correct, for he will not misunderstand it." This statement is borne out by the reception of such translations at Oak Ridge, as reported by Zarechnak below.

The growing sophistication with regard to translation which is reflected in the book by Nida and Taber and in many recent publications calls for a new evaluation of the problem of machine translation, and a new statement on the current situation. The requirements for translation vary markedly from audience to audience. Even a glance at the Nida-Taber book, which concerns primarily human translations of the Bible, will disclose the difference between translation of religious and literary materials, and translation of scientific and technical materials.

For the translation of technical materials, the criteria of quality, speed, and cost have been used in evaluations. In the January Conference arranged under the Study, Bar-Hillel summarized his position on the improvements possible in machine translation in the foreseeable future using these three criteria. It is instructive to compare briefly these criteria with the objectives of Nida-Taber. The primary concern of Nida-Taber is to "reproduce the message" of texts produced by cultures of the past for cultures of the present, often radically different cultures, such as those of Africa and Asia. By contrast, the texts of interest to scientists and technicians share a common "culture," whether the texts are produced in Africa, Asia or in western countries.

In the organization which Nida and Taber describe for producing a translation, after three committees have made their contribution a "stylist is called in" (1969, 186). This proposed organization, which is not untypical for academic projects designed to produce literary translations, provides perspective for the statements concerning post-editing of technical and scientific translations. Obviously, the length of time and the cost required to produce literary and religious translations are not factors of importance. Yet timeliness is of increasing importance to users of scientific translations.
Even in a relatively unhurried field like linguistics, few articles retain their importance over a long period. Statements have been made repeatedly about the obsolescence of publications issued a few years earlier. The insistence among technical specialists and scientists for speedy translation contrasts markedly with the length of time permitted for completing literary translations, and also with "the lag time (from receipt) in publication of the translated journals supported by NSF." This, according to a report of the National Academy of Sciences (Languages and Machines, 1966, 17), "ranges from 15 to 26 weeks." This time span may be acceptable for archival purposes; for the requirements of scientists and technical specialists it may be burdensome.

Given a choice between overnight machine translation and human translation within two weeks, scientists at EURATOM invariably asked for machine translation. The need for virtually immediate translation is one of the major reasons for the concern with machine translation. In evaluating machine translation versus human translation, this reason may outweigh the difference in cost. And as Nida has pointed out, the parameter of "quality" varies considerably among the different users. Bar-Hillel, who some years ago coined the expression "High Quality Fully Automatic Machine Translation," now states in his appended article that he applied the expression in too absolute a sense, and further that quality is related to the requirements of the user. This statement echoes the quotation from Nida-Taber on the shift of focus "from the form of the message to the response of the receptor." If technical experts and scientists have reasonable prospects of virtually immediate translation, the prospects may well be vigorously pursued, even if the translations are more "labored" and "literal" than ordinary users permit for their religious and literary works.

In reviewing the prospects for machine translation, accordingly, the specific requirements must be considered as one of the major criteria. For technical specialists and scientists, translations must be consistent, reliable and timely, whether made by man or machine. Although the arrangements made for human translation are generally assumed to be known and understood, a brief comparison of the current situation of human versus machine translation, and their prospects, may be useful before examining in detail the procedures involved in machine translation.

Appendix:
null
null
null
null
{ "paperhash": [ "gingold|a_guide_to_better_translations_for_industry", "ray|a_philosophy_of_translation" ], "title": [ "A Guide to Better Translations for Industry", "A Philosophy of Translation" ], "abstract": [ "Summary To sum up my advice to those persons charged with procuring translations: Make sure you have specified exactly what is to be translated and by when you need it. Build up a group of suppliers on the basis of your own experience. Once you have found a group of individuals or firms who can give you the type of service you require, use them regularly and make their work a little easier by providing them with English-language references whenever possible. Above all, do not be overly concerned with costs. A good translator is a highly trained and skilled professional and will not condescend to work for cut-rate fees. It is no more advisable to look for bargains in translation services than it would be to look for bargains in medical or legal services. John Ruskin, the English writer and critic, wrote a century ago : ''There is hardly anything in the world that someone cannot make a little worse and sell a little cheaper—and the people who consider price alone are this man's lawful prey.\" This is particularly true in the field of translation.", "But as it is written: \"Eye has not seen, ear has not heard, and upon the heart of man has not come up that which God has prepared for those who love him. But God has revealed it to us by his Spirit, for The Spirit searches into everything, even the depths of God. And who is the man who knows what is in a man except only the spirit of the man that is in him? So also a man does not know what is in God, only The Spirit of God knows. But we have not received The Spirit of the world, but The Spirit that is from God, that we may know the gift that has been given to us from God. But those things we speak are not in the teaching of the words of the wisdom of men, but in the teaching of The Spirit, and we compare spiritual things to the spiritual. For a selfish* man does not receive spiritual things, for they are madness to him, and he is not able to know, for they are known by The Spirit. * (“d’vanphesh” comes from Napsha –“soul”, “self”, “animal life”, and can mean, “soulish”, “selfish”, “brutish”.) But a spiritual man judges everything and he is not judged by any man. For who has known the mind of THE LORD JEHOVAH that he may teach him? But we do have the mind of The Messiah.” 1 Corinthians 2:9-16" ], "authors": [ { "name": [ "K. Gingold" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "P. S. Ray" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "144840422", "144306251" ], "intents": [ [], [ "background" ] ], "isInfluential": [ false, false ] }
null
638
0.010972
null
null
null
null
null
null
null
null
83fdffa472054cba43f1e2387a4e3df1dd84e47e
245118159
null
Meaning revisited
A summary like this one can do no more than lay out the bare bones of choicemaking activities which in each human organism are vastly convoluted and subtle. However, it is part of my purpose to spotlight the very hazard of abandoning oneself prematurely to mere facts so as to find solace in work of exceptional professional complexity. Formal investigation is, in its truest sense, an attempt to bypass variety in order to describe invariance. This is not to say that formal inquiry, when it confines its interest to techniques of symbol manipulation, somehow escapes the same vice of specialization. Rather, I mean that my private fascination has been a train of formal thought with a different aspiration, running through the definitive studies of
{ "name": [ "Pendergraft, Eugene D." ], "affiliation": [ null ] }
null
null
Feasibility Study on Fully Automatic High Quality Translation
1971-12-01
2
78
null
selection processes. It can be shown, for example, that the entire translation process can be generalized through use of metalanguages capable of conveying interlingual relations of various kinds. However, this merely extends the idea of enlarging the machine's store of knowledge about language, an idea which by itself has not benefitted mechanical selection as much as researchers had originally hoped.

Accordingly, the second thrust of research on mechanical selection has been to widen the search for conditions attending choices. In addition to examining the expression undergoing translation, mechanical processes have been permitted to range over surrounding sentences, paragraphs, whole discourses, or data representing an increasingly extensive experience of language events located in the machine itself.

Due as much to disappointment as to expanding interests, mechanical translation research overflowed vaingloriously and became computational linguistics. This new domain of experimentation is a conglomerate of studies in which mechanical translation shares the limelight with information storage and retrieval, automatic extracting and abstracting, fact correlation, question asking and answering, and similar applications where language is manipulated mechanically. After an unsettling beginning, during which the old guard felt compelled to recant its former commitments, the new milieu of jargons did provide a sounder medium for testing language theories and methods than mechanical translation alone.

In consequence of this new opportunity to compare computational linguistic applications of various types, it has been noticed that mechanical selection comes closest to human patterns of choice in those instances where a little knowledge of language, things, or persons is brought to bear on an experience sufficiently extensive as a source of conditions for choices. In other words, mechanical selection appears to be improved by a better balance between mechanical analogues of experience and knowledge. Machines that ask or answer questions are examples of applications seemingly avoiding the narrow window of experience through which mechanical translation research tried unsuccessfully to squeeze great concentrations of knowledge.

In my opinion there are three lessons to be learned from this curious result of so much effort. The first concerns the way we might reasonably go about developing a mechanical translation system; the second concerns the type of system we might reasonably develop; the third concerns finding reasonable people to do the work. These problems are the ones requiring cogent solutions before feasibility estimates can be meaningful. I think, however, that we have cause to doubt the optimistic assumption that men of good will must always reach similar conclusions on exposure to similar evidence, especially when part of the evidence is about themselves.

By now it should be plain that no methodological consensus exists in mechanical translation research, without which comparisons of both formal and factual results are, at best, misleading. Before sitting down to make a second round of feasibility estimates, it might be proper to ask seriously why in our estimates thus far we seem to be getting "garbage" out of our own selection process. One possibility is that, because mechanical translation researchers were gathered from a variety of technical specialties, we have not been looking in the same place for conditions on which to base our choices of method.
By and large, it must be admitted that we have been a mixed lot, though sharing the prudent wish of every specialist: not to be caught on lame feet outside of his territory.

Another is the possibility that, as heirs of commonly accepted notions about the nature of man, we have been looking too much in the same place for the conditions determining our methodological choices. By preferring the narrow window of empirical science, we have avoided those taboo territories made uninhabitable by the "garbage" production of our predecessors.

As a prolific example of the latter I cite René Descartes, who ground his garbage so exceedingly fine to assay psychical as well as physical substances. Surely it is for lack of these psychic essences that machines are unable to use or to understand language; while we, brimming full, need only introspection to understand and master all of the configurations of our own choicemaking.

We have said a great deal in translation research about the dangers of anthropomorphising machines and so little about the dangers of anthropomorphising ourselves. What if it should turn out, as Charles Peirce claimed a full century ago, that we have no special vantage point to our own psyche, but must learn about that too by careful methods of inquiry?

Thus a third possibility is that our difficulties with mechanical selection are the result of self-ignorance, whose remedy should be a disciplined study of the ways we make choices ourselves. If in fact each of us is engaged in a quest for self-knowledge, then disparities in private understandings of the state of the art of human choicemaking might well account for some of the troublesome goings on in research which takes these understandings as its very ideal.

My personal conviction is that all of these factors are at work to make a second set of feasibility estimates as uncertain as the first. Before taking up the lessons which such estimates might turn to account, therefore, I consider it essential to make public some of the private assumptions unavoidably the source of my judgments.

Piaget's study of the origins of intelligence in children is an elegant instance of this empirically disciplined formal method at work. It is consequently a good starting place for my summary, and a center line along which I will embroider my own thoughts or those of others caught up in the same intrigue of intellect.

From his observations of behavior in the human infant and child, Piaget isolates and describes six early stages of psychological adaptation. Each stage is evidenced by a characteristic scheme of choicemaking. It consists, on one hand, of the child's attempt to assimilate the environment by incorporating within his existing framework of knowledge and experience all new data given by his senses. On the other hand, it consists of his accommodation to the environment by using that modified framework as a basis for new acts. The existing adaptation at every stage is an imperfect equilibrium constantly being repaired by successful assimilative and accommodative choices of its special kind, or being ruptured by unsuccessful choices of that kind.

Psychological adaptation, like the organic, can be explained in terms of relationships that are essentially ecological. Always and everywhere, adaptation is only accomplished when it results in a more or less stable organization of relations between an organism and an environment.

The point of supreme interest to us is the perspective from which Piaget chooses to construct his formal hypothesis.
By observing stabilities in the child's relations to the environment as they appear from without, which is to say from the commonly accessible frame of reference of empirical science, the observer goes on to hypothesize how those somewhat unsettled ecological relations are felt from the personal standpoint of the child as his mind works out its first contacts with reality.

From the point of view of the investigator, then, factual data are those that can be observed to vary from child to child because they are imposed by environmental details that differ with the time, the place, the culture in which a person lives. Formal data by contrast are found to be invariant among children because, Piaget hypothesizes, these are necessary and irreducible data imposed on the child by his own genetically inherited biological organization, which is functionally the same for all of our species.

As a consequence one can deduce that, from the personal standpoint of a child, invariance is an aspect of experience distinguishing form from fact. And we have already seen that this same invariance is what the investigator might look for himself from the personal standpoint of his own research experience, when he is making theoretical choices. Such a coincidence should warn us that formal hypotheses about the organization of human minds have direct methodological consequences which mark them as being basically different from factual hypotheses about the organization of the physical environment. When the investigation probes into the foundations of meaning and of understanding, there is a new need for consistency between any theory about the mind of the human subject under observation and that of the observer himself. What is hypothesized for the mental organization of the subject applies equally for the observer, and as a result can modify the choices of method open to the latter in his investigative role.

In short, the process of formal inquiry itself is seen to consist of a cycle of assimilation and accommodation. From observations of invariants in the subject's behavior, the observer assimilates new understandings of mental organization, to which he then accommodates his investigative behavior.

In this cycle of formal investigation, methodological choices can be recognized as instruments of formal accommodation for the investigator, just as theoretical choices are his instruments of formal assimilation. Choices of theory and method are both tentative and are "hypothetical" in the sense of self-consciously awaiting the test of use. Consequently, these are tools of formal learning for a mature intelligence, not for the infant just starting out in his feeble thrust toward consciousness of self.

The infant has his own instruments of formal assimilation and formal accommodation, for he can be observed to progressively modify the essentials of his scheme of choicemaking. Should that occur, one can tentatively assume that he has learned something, not about the environment, but about the organismic basis of himself.

Once the child understands the next stage of psychological adaptation, he prefers to use its new scheme of choicemaking, although it can be shown that he still knows how to use all of the schemes he acquired in earlier adaptations.
The next stage is always a more desirable frame of knowledge and experience than the one before it, taking into account everything in the previous stage, but making new formal distinctions and organizing facts into a more comprehensive and equilibrated structure.

That each scheme of choicemaking is formally a prerequisite of its successor is argued by the observation that no stage in the progression of psychological adaptations is skipped. Each stage of adaptation has its own formal organization whose chief aspects my summary will try to illuminate.

In addition, one should look for a progression of formal experience and knowledge in states of adaptation which are ever broader and more poised. It is this progression which allows us to think of the successive states as cumulative stages of mental development.

One must distinguish carefully between any existing state of psychological adaptation and the process of adaptation by which that state is changed. As Peirce was shrewd to notice, only when the investigator identifies formal inquiry with the process rather than the state does it become necessary for his own state of mind to change should his investigative process succeed.

Formal reasoning has a dual purpose: to clarify the state of contemporary thought, and at the same time to benevolently undermine the world view that its fund of experience and knowledge represents. The aim of that benevolence is to carry forward the cultural process by including an established universe in a still broader and more stable one.

The central role of formal communication as a determinant of the state and the process of cultural adaptation has been explained by Mead and again eloquently by Whitehead. Each language has a formal component for talking about the everyday language to be used in talking about facts. Men also invent symbols for precise forays of factual description, as is well exemplified by the linguist's use of his metalanguage. Whatever the motivation, formal communication can either consolidate a cultural state by perfecting the symbols already being used to mention facts, or it can offer new symbols to further the cultural process by making possible the mention of facts until then unmentionable.

Whereas at this moment the need of the state of culture is to consummate an objective universe through the use of symbols that successfully organize vortices of objects in a continuum of time and space, the clear need of the cultural process is a new basis of symbolizing with which to organize a more comprehensive universe, incorporating subjective as well as objective facts, and a more equilibrated one by virtue of providing functional mechanisms for formal as well as factual adaptation.

How can a universe be symbolized to bring these neglected cultural ingredients to critical public purview? Langer has proposed that the basic symbols of such a world would name acts, and that the symbolic facility of a universe of acts would allow us to communicate about complex acts composed of those elements.

The gist of the line of reasoning being pursued is that it is about the symbols of Langer's universe instead of those of Newton's universe, which have become, after three centuries, so comfortable to a mechanistic sense of life.

At first contact, a universe of acts is certainly a strange world; but then, any really new world must be strange.
And a world view which aspires to incorporate the mechanics of formal adaptation has in added perplexity the responsibility to explain the circumstances of its own emergence. The job before us is to clarify the symbols of this unfamiliar world as best we can so that they can be used and tested against living and historic evidence, where strangeness has precedents.

Unavoidably, my summary will take up more mature stages of reflective thought following on the six initial stages of practical intelligence that Piaget looks for in the infancy and early childhood of individual men and women. It is in these markedly different settings that one can observe functionally analogous progressions of schemes of choicemaking. The invariant aspects of that progression might then be explained by an increase in human understanding of a biologically determined functional nucleus underlying and guiding consciousness.

Thus, the beginning of the process of psychological adaptation presupposes an existent biological organization, itself the product of an evolutionary sequence of genetic adaptation that incorporates hereditary factors having two quite different types of biological result. Factors of the first type determine the constitution of our nervous system and sensory organs, so that we perceive certain physical radiations, but not all of them, and matter of a certain size, and so on. Factors of the second type orient the successive states of psychological adaptation, and so have their result in the organization of a mind which attains its fullest and steadiest form at the very end of an intricate process of intellectual evolution, not at the start.

All of the various states and the process of psychological adaptation have in common the one formal aspect that, relative to an assimilated frame of experience and knowledge, the direction of every accommodation is such that it attempts to satisfy need.

Piaget maintains that needs and their satisfaction are mental manifestations of the complementary interplay of assimilation and accommodation as felt by any human being. Although from our personal standpoint need may seem primary, it is the internal organization of that underlying unity, the act itself, which motivates our day-to-day existence as well as our long term psychological development.

The theory of the act, making explicit the invariants to be found in every unit of human activity, would for a universe of acts set forth the cyclical relationships between assimilation and accommodation which are taken to be the functional nucleus of both factual and formal adaptation.

The act of Langer's world would not consist of movements in time and space as seen from some distant and impersonal viewpoint of a spectator, although such movements might indicate to the mind of a spectator the act of another mind. The symbol "act" would stand for any elemental or composite constituent of a whole but unique universe, one among others named by the symbol "mind," whose personal and partly intimate point of view would be felt as the very direction of the act.

More, the direction of the act would tend to satisfy the immediate needs of a state of adaptation by assimilating and accommodating to the organization of the environment.
At the same time the direction of the act would satisfy the long term needs of a process of adaptation by assimilating the organization of the act itself toward an eventual accommodation which, through the mind's increased understanding of the principles of its own direction, would effect a new state.

This progress notwithstanding, Piaget contends, as Peirce before him: there does not exist, on any level of human consciousness, either direct experience of one's own mind or of the environment. Through the very fact that assimilation and accommodation are always on a par, neither the organization of an outer world nor that of an inner self is ever known independently. It is through a progressive construction, guided solely by the pragmatic circumstance that acts once committed to use either succeed or fail to be consummated, that concepts of the self within and of the environment without will be elaborated in the mind, each gaining meaning relative to the other.

The theoretical relationship between the several states and the process of psychological adaptation, as approached in the context of the theory of the act, is the core of the matter, therefore. It is from this connection that one may extract the multifarious method of inquiry indicated by Dewey and Bentley in their essay about knowing and the known in this new universe.

If the formal character of each successive state of adaptation is due to an increase in the mind's understanding of how the act is organized internally, then the invariants observed in each of those states should contribute new formal aspects for the theory of the act. Conversely, invariants observed in the act as such should help us to understand the theoretical relationships between the states and the process of adaptation.

Mead's analysis found the act to consist of three principal phases: the first a phase of "perception," the second of "manipulation," and the third of "consummation." But the method of analysis just suggested will find that every complex act has five functionally distinct phases which, allowing for an initial state, account for Piaget's basic progression of six adaptive stages.

This result would presuppose, for the theory of the relationships between the states and the process of adaptation, that it is an understanding of some new phase of the act which the adaptive process incorporates in the frame of formal experience and knowledge in order to pass from the existing state of adaptation to the next state of that basic progression. The efficacy of this view of the situation is given by various sorts of evidence.

Formal adaptation always appears as a growth of capacity in just one functionally distinct phase of the cycle of assimilation and accommodation implementing factual adaptation. This is at least consistent with the assumption that formal assimilation incorporates in the mind an understanding of the internal organization of that phase.

There is also an invariant order in the emergence of new phases of intellectual growth. The basic progression of six stages of adaptation exhibits that order in a number of quite dissimilar behavioral contexts, thereby assuring us that we are dealing with exactly five phases of functional capability, no more, no less.

The initial progression consists of the stages of practical intelligence, where the five phases are first established as capabilities in the newborn child.
Throughout the development of reflective thought that immediately follows, five functionally analogous phases emerge in the adolescent and young adult with the growth of representative thought. They occur again with the increasing capacity to verbalize subjective and objective facts contained in such thought, and finally with progress in formal verbalization. As the behavioral setting becomes more complex, the formal character of the phases is revealed with greater clarity. The cultural progression is accordingly the most elaborate setting from which one can extract the internal organization of each phase.

Within the internal organization of these five functionally distinct phases, one finds every capability needed to construct a viable theory of the act. That the phases are in fact constituents of the act is evidenced by the very possibility of that construction.

However, the sequence of phases defining the direction of the act does not turn out to be the same as the developmental sequence defining the order in which the phases enter consciousness. Evidently the first two phases of the act are understood one after the other, and then the fourth phase, the third, and the fifth. The process of adaptation always assimilates the phases of the act in this peculiar order to effect, through its respective accommodations to the successive mental increments, the basic progression of six adaptive stages observed in all of the behavioral contexts.

Even this unexpected state of affairs will be found to make sense in the context of cultural adaptation, where the developmental sequence can be recognized as a convenient arrangement for the transmission of social and cultural behavior across generations of individuals.

To see this, one is required to consider the specific organizations of the several phases and the way they cooperate to determine the direction of the whole act.

To lay the grounds for that discussion, I must stress again that the most fundamental distinction for the new world we are exploring is not the one yielding a grid of space and time which makes possible the symbolization of movements in a physical environment. For a universe of acts, the basic distinction will be that made by Peirce of "potential acts" comprising the patterns of knowledge as opposed to "actual acts" being instances of those very same patterns which, in the relationships of their occurrence, comprise the experience of a given mind.

The dichotomy of experience and knowledge is a more comprehensive grid for our symbols than that of space and time. It makes possible the symbolization of acts making up a mind that is itself capable of symbolizing physical movements in an environment, as well as its own acts or acts of other minds.

Linguists will find this new grid familiar. It is the one by which known patterns of language, symbolized in their grammars, are balanced against instances of those patterns which they symbolize in a given stream of speech. The dichotomy of knowledge and experience is nonetheless as wide as life.
Every stream of existence contains sensory elements other than those of speech, which feed a balancing act of magnificent dimensions. The problem posed for the theory of the act is to explain the equilibrium that assimilative and accommodative processes maintain between actual acts of experience and potential acts of knowledge, given a stream of existence which is itself a sequence of actual sensory or motor acts, each instancing a successful or an unsuccessful consummation of some potential act among the elements of Langer's universe.

The resultant mind extends precisely as far as the equilibrium between experience and knowledge is maintained, whether the work of assimilation and accommodation is done by a single biological agent, or by a collection of them acting socially. The agent could as well be an electronic machine. This new world will be less skeptical of mechanical agents of mind than the present one, because it will look for mind in the equilibrium itself instead of in the agent.

Whatever the agent of a given mind, any flaw in its equilibrium will be "need." Any repair of disequilibrium will be "satisfaction." Persistent loss of equilibrium will be the nagging irritation of "doubt," according to Peirce the sole motivation for acts of inquiry which when successful attain not the truth of an external reality, but stability. For a universe of acts, therefore, any persistent stability in the equilibrium between experience and knowledge will be "belief."

In summary of the matrix of theoretical and methodological choices, I assume that the criteria of truth in a universe of acts are the immediate stability of its adaptive state and the long term stability of its adaptive process. These are pragmatic truths of fact and of form, respectively. The former relates ultimately to the organization of the environment; the latter, to the organization of the act. But there is no direct access either to a reality behind fact or behind form. Each is known or experienced relative to the other by means of complex acts which the mind itself constructs. The constituents of that construction are potential sensory or motor acts, the biologically or mechanically based elements of this unique universe, which is one mind among others. The sole source of the information guiding the construction is a given stream of existence, itself a sequence of actual sensory or motor acts instancing successful or unsuccessful consummations of those universal elements. And the organizing principles of the construction are those of the act, whose pragmatic method I will now discuss.

Christopher Alexander, in his notes on the synthesis of form, cites a common engineering practice for making a metal face perfectly smooth and level. One inks the surface of a standard steel block, which is level within finer limits than those desired, and then one rubs the face to be leveled against the inked surface. If the face is not quite level, ink marks appear on it at those points which are higher than the rest. One grinds away those high spots, and fits the face to the inked surface again. The grinding and fitting are repeated over and over, until at some final fitting the entire surface of the metal face is marked by the ink, indicating that no high spots remain to be ground away.

The practice of fitting affords a useful way to think about the phases of the act.
Because the act, too, consists of ongoing processes of assimilation and accommodation within which experience and knowledge are repeatedly shaped by putting their various parts to use, rubbing them against reality so to speak, in order to have them marked by success or failure as preparation for still another shaping.

It was Peirce who found out that the high spots of the mind are marked by the ink of success and the low ones by lack of it. Thus, fitting the mind to reality involves filling in the low points as well as grinding away the high ones. The mind had to be constructive in order to eliminate the holes and pitfalls of experience and knowledge. If men worked diligently enough at seeking out and building up the misfits, the entire stream of existence might become bright with success.

Although from this William James drew an elixir that pleased and encouraged a competitive society, the product he marketed under the label of "pragmatism" has since fared poorly in the popularity contest of ideas. That is significant for our inquiry, though not as an indication of some flaw in Peirce's insight. By the hard-eyed predictions that the actual practice of pragmatic method made possible, the course of its own acceptance has in fact been remarkably well borne out.

Stubbornly fixing its attention on the surprise of failure, pragmatic method was sure to be unpopular with every conservative trend of mind. That opposite practice, finding all of its reasons in the preservation rather than the creation of information, deliberately tries to avoid surprises and to explain away its own failures. For a conservative mind the sources of gratifying or noxious information are invariably felt to be outside of itself. In simple consequence, every form of conservatism directs its main purposes to preventing contamination of the specific place from which it sucks nourishment. To such a mind the purposes and attitudes of pragmatism have been and will continue to be irrational.

The conflict of rationality we are about to consider is the most exasperating one known to man because it stems from the direct opposition of creative and conservative assumptions about what information is, where it comes from, and how it is used. By comparison, all earlier crises of the cultural progression will have been mere squabbles among conservative minds in solemn disagreement over good and bad teats.

In the fifth universe, "information" is something to be transmitted across its space-time grid. The ultimate source of information is a material reality common to and encompassing all of mankind. The firsthand passage of information, by which it arrives in a brain that is essentially a passive receiver pretuned genetically to certain vibrations beyond itself, is called "observation." The brain stores up some of the information it receives and can also retransmit informative copies from its store by means of a conveyance of symbols that lodge themselves in other brains. This secondhand passage of information from one brain to another is "communication" or, for the young in passive receipt of a largess from the information store of society, it is "education."

The method of "descriptive" science, although less conservative than its predecessor, still locates the information source externally. Its works of observation are best done by a disciplined spectator who separates himself as rigorously as possible from all temptations of human purpose.
The social status of the scientist, so engaged in carrying out his contract of detachment, is not unlike that of the priest whose nearness to God in the preceding social order called for all sorts of precautionary measures to insure the fidelity of firsthand information.

In general, one can identify information specialists at each stage of culture for whom contemporary men reserve their greatest veneration and suspicion. This highest peak of cathexis may now be explained theoretically by the need of every society to cluster around its fount of firsthand information in order to carry out the social act.

The necessary consequence of any change in the information source will be social reorganization, a period of turmoil during which new information specialists learn their roles, and users of their information scurry to the unaccustomed precincts of yet another defective metamorphosis. An improved equilibrium might then be felt by its participants as the preferred "order." Without that shared judgment, the new metamorphosis would fail. Society would revert to its former state, or would backslide down the cultural sequence to a regressive state within the scope of its remaining capability.

The pivot point of the adaptive process would appear to come when a society, or by the same principle a personality, feels the need to modify its source of information. This is the invariant to be looked for from the standpoint of the mind itself, even though our theoretical explanation holds that such a fundamental change is caused by an understanding of some new phase of the act being incorporated in the mind's functioning to thereby effect new ecological relationships.

Besides that, our line of formal reasoning predicts that any new state resulting from an advance of the adaptive process will at first involve a reorganization of known facts. Thus the repair of intellectual progress is always felt by the personality or the society as a consolidation of mental holdings, in a word, as an "insight." Only after the introduction of a more comprehensive organizing principle can new facts be added to a reconstituted structure that has become at once broad and stable enough to receive them.

These conclusions are, in themselves, organizing principles of a personal world view emphasizing learning rather than doing.

Giving its highest priority to doing, the fifth universe uses its symbols to persuade other individuals or other societies what ought to be done. Inquiry is a garnering of information under stringent regimens that protect the quality of a product being pigeonholed away for unspecified future use in an advocative scheme of choicemaking. There, hard-fought positions are reluctantly abandoned under the sheer weight of damaging evidence.

By comparison, the pragmatic scheme of choicemaking is one in which a real preference for surprises actually courts failure as a gratifying means to the shaping of an affluence of hypothetical creations almost lightheartedly sent forth in the hope that new truths might be caught in their net. The preferred symbols of the sixth psychological state belong in a context giving its highest priority to learning, and so they pertain to changing one's own individual mind or the mind of one's own society, not another's.

"Information," in the sixth universe, is something to be created against the grid of experience and knowledge by the agency of an ongoing organic process for which each mind's fragile stream of existence provides the indispensable clues.
Those surprising instances when a given mind fails to achieve an expected objective are, in a world motivated by the need to repair itself, the necessary benchmarks for firsthand information being self-consciously designed to circumvent known misfits that are obstructing human satisfaction.

Hence the characteristic forms of pragmatic "communication" are to broadcast throughout the community all known points of distress and any helpful new designs by which past failures might in the future be overcome. One can readily see how such an innovative mode of communication will be disquieting when taken out of its proper context by a conservative state of mind bent on maintaining credible displays of tradition, authority or power.

Formal incompatibilities of conservative and creative views of information do indeed cause a "communication gap" with which the pragmatist, for his own part, is unable to cope. Advocative arguments will be perceived by him as "irrelevant" for two clear reasons.

First, a mind will not be persuaded by appeals to tradition, authority or competitive advantage once it believes that all "truth" is established by demonstrations of successful use. A persuasive rhetoric will be received disrespectfully as artless in the production of "false" designs, either untestable or long since disproven to a more receptive conduct of life. Seeming corrupted because of its higher resistance to corruption, the pragmatic mind will reject argument just as an argumentative mind had earlier rejected preachment.

Second, and more noteworthy of the pragmatic view of information, is its implication that an essentially conservative mind can be induced to learn by denying it the opportunity to overlook its failures of awareness. Mules of the sixth universe, once brought to water, will be taught to drink. The goals of "education" will be attained by a variety of activities and situations especially designed to progressively awaken a mind at first so feeble that it would shyly act to protect its meager hoard of dependable creations, thinking them the gift of one or another fountain of charity. The stimulation of formal learning, while tenderly administered to the young and mentally impaired, will reprimand the laggard so remiss in his own mental betterment that he extends a nascent conservatism into adulthood. Society in the sixth universe will not achieve the elusive goal of classlessness. It will prefer an order of psychological classes wherein a forefront of information specialists gather loosely around the sage to be students and teachers of one another. As for the sage, he will probably turn out to be a mathematician.

In Peirce's guess at the riddle of life, man's framework of experience and knowledge has been gradually broadened to include the "law" of the human act in its complementary relationships to the "presentness" of the environment. Mediating these two extremities of consciousness is "struggle," a conscious sense of learning in a collective mind apprised finally of its own creative act of inquiry. About that wellspring of information he says:

. . . there is manifestly not one drop of principle in the whole vast reservoir of established scientific theory that has sprung from any other source than the power of the human mind to originate ideas that are true.
But this power, for all it has accomplished, is so feeble that as ideas flow from their springs in the soul, the truths are almost drowned in a flood of false notions; and that which experience does is gradual, and by a sort of fractionation, to precipitate and filter off the false ideas, eliminating them and letting the truth pour on in its mighty current.

Pragmatic method is more casual about forgetting because it has taken the act of creation in its own hands. The information specialist of the sixth universe will be a participant, immersing himself in a struggle to stabilize personal and social relationships which, in the pragmatic scheme of choicemaking, will give first priority to the satisfaction of human need. In this vein, for example, Ogden and Richards propose to lay their hands on symbols and their "referents" so as to converse propitiously about a relation, "truth," imputed by thought, but which thought alone could somehow not sustain. With telltale zeal to locate all worthwhile instruction in solid matter apart from mind, these investigators too, despite the many worthwhile things they say about problems of meaning, would not go so far as to explain the truthfulness of symbols in terms of mental organization.

The root of the matter is that every acceptable means of scientific investigation has been unable to locate minds, and hence thoughts, on the continuum of time and space. Popular belief tends to favor the inside of the head rather than the stomach, which had its day when men were hungrier. Until the right spot is discovered and demonstrated, it will be quite meaningless to speak of something "apart from" or "beyond" or anywhere positioned.

Suppose, as an example of the latter, that acts of reference are taken to be the constituents of that immediate "perceptual" experience which in a given mind is felt as an orientation to what is now present ostensively, right at hand. Also imagine that, on this relatively secure foundation, a speculative extension is then built by the agency of acts of inference from the particular contexts matched to those specific contents being presented perceptually. Constituents of the resultant construction are "concepts;" the elaboration itself is the newest part of that mind's "conceptual" experience, felt as an orientation to things not present yet having import for some activity either being contemplated or in progress.

Giving this theoretical explanation its due would adduce, from the very mind committed to it, the consequence that errors of reference or of inference will, in general, beget malformed experience. Were such a faulty framework used to guide further action, the enterprise would culminate in acts prone to failure. Hence, in consequence of accommodative inferences tending to reorganize its own methods along pragmatic lines, the mind would finally conclude that, from its own personal standpoint, its own failures are its only signs of mistaken perceptions or conceptions.

To that personal world my symbols can carry nothing along with them except the skillful ingenuity with which I designed them and then launched them by mouth or hand, all the while guessing at your skill for using them to create information. Indeed, your ingenuity might be greater as a creative recognizer of symbols than mine as a producer of them.

As for the truthfulness of my symbols, I believe you will discover "truth" in them to whatever degree they stimulate and assist your own creative efforts.
If they cause you to fail or carry you away from insight, farther than you would have gone by yourself, you will certainly judge them "false."

My symbolic designs can bring you no evidence, nor can they offer you proofs. They can only recommend how you might look for evidence in order to convince yourself that this revision of your present state of mind might improve the satisfaction of your everyday needs. Then will you eagerly extract every scrap of evidence from which the further construction of your own experience or knowledge might profit. In your unique universe you will have to do all of the remodeling for and by yourself, and you alone will judge the result.

For my part, despite this ample domain of personal application, I see no reason why these same pragmatic practices will not also satisfy the needs of a society giving its highest priority to learning rather than merely doing.

Regarding the question of precision in the social use of symbols, I think you will agree that these pragmatic methods tend quite naturally to the happy hunting ground of mathematical reasoning, where especially critical minds can live out their cloistered days as students and teachers of one another toward the sole purpose of shaping up the formal component of explicitly constructed "languages." Not only do the ministrations of mathematicians succeed admirably, they belie the empiricist's expectation that fact is a more stable foundation for society than form. The myth of empirical description to the contrary, science rode to its present glory on the back of mathematics.

Of course, mathematicians argue more than they would like. And if some of the symbolic designs they produce are named "proofs," I do not object. No mathematician has ever been known to accept one of those proofs from his cohorts without performing every reference its symbols specify, while passing judgment on each meticulous step for and by himself.

Science, but also economics and politics, were conceptualized in terms of forces held in abeyance by counterforces balanced against them. Ultimately the forces of nature were balanced against the burgeoning will of man. An oddity of the transition now attending the dissolution of a mechanistic world view built on polarity is that it will require a double genesis. Moreover, it was to be expected that scientists and educators, having been commissioned to a quiet concern for learning in an industrial society, would be susceptible to pragmatic influences in greater degree than active managers and makers of public and private weal.

A decline was foretold when George Berkeley, motivated by the fear that Newton's principles of absolute space, absolute time, matter and gravitation would threaten religion, doubted whether the words in which these principles were expressed even made sense. According to him, the only words that are meaningful are words that designate sensations. If the goal of science is to coordinate sensory perceptions, then it can make use of spatial relations only to the extent that these are merely relations between sensible bodies, and nothing more.

Out of the matrix of Berkeley's arguments came two fertile seeds. One is the distinction between the formal and factual components of language, now grown to the rank of a major preoccupation among philosophers. The other is the very method of approach by which scientific procedure took on its exclusively descriptive character, that of ascertaining, and only then interpreting, the data of sensation.
That is all his pragmatic theory of social change required.

The universe of your own personal mind is one you know well. However, you have not thought of your social universe as being organized along principles of mental anatomy, and will doubtlessly think that further suggestion is silly.

Believe me, I share your annoyance. I have lived as comfortably on the grid of time and space as men did formerly in the lap of God. But a surprising thing happened to me one day on the way to the laboratory. There were people in the streets yelling about the misfits of our society, and it suddenly occurred to me I was being set upon by a bunch of pragmatists.

Now I am pretty sure that none of these ragamuffins had ever read Peirce. Yet they made it crystal clear that they were intent on shaping the collective mind of their OWN society, and also their OWN individual minds. My concern has been to show that this change of style is not capricious or arbitrary. It is the rational result of an emerging new theory about the origins, the means of distribution, and the uses of information. As the empiricist could no longer support a requirement for incantation, prayer or preachment in half of his reconstructed world, so the pragmatist has no further need for language that purports to describe an external reality. The sole purpose of every symbolic communication in his universe of acts will be to shape the internal reality of a person or a society.

In this dual light, I ask you to reconsider the conclusion that Charles Morris reached in his treatise on signification and significance, according to which the main dimensions of signifying relate to phases of the act. In particular, he finds that "designative" discourse corresponds to the act's perceptual phase, "prescriptive" discourse to the manipulative phase, and "appraisive" discourse to the phase of consummation.

A student of Mead, Morris builds on his mentor's analysis of the act's phases. He recognizes that "formative" discourse might call for a fourth dimension of signifying; but he decides that Mead's analysis need not be complicated by a fourth phase to account for this misfit.

To the contrary, when analysis of the act's phases is approached by the different method afforded by consideration of Piaget's basic progression of developmental stages, a phase of hypothesis formation will be one of those found missing from Mead's tally. This phase, indeed implemented socially by formative discourse, will be for a pragmatic cosmology the one in which new knowledge is created. It is accordingly the specific phase that Peirce recommended to our understanding in order to consummate the formal traverse on which he would have us embark. Since this phase of the act will also involve forgetting knowledge, I prefer to call it the phase of "reorganization."

By all indications, language is an ancient heritage and should not as an ongoing system be expected to zig or zag as readily as speculations about the nature of language or styles of speech, heard from men immersed in a particular cultural situation. To look at the way language is being used is rewarding for a pragmatic inquiry which, in keeping with its interest in the process and various states of adaptation, will prefer to observe humans at large in their natural habitats as they busy themselves more with obedient or competitive doing than with learning.

Comparison can thus be made of the respective abilities of the pragmatic and the objective viewpoints to organize known facts of language.
That formative discourse fits naturally in the pragmatic framework can be taken as a bit of confirming evidence that its organizing principles are more comprehensive than the objective ones.

The comparison is itself the one proposed for a pragmatic science, since only by use of the pragmatist's viewpoint does one begin to grasp the general principle that what is felt in experience as a "viewpoint" is determined by one's own choice of inferences. A consequence of this insight is to make the context as well as the content of observation matter. Once the two are seen to be relative, one gaining meaning as complement to the other, the aim of a pragmatic science must be a useful matching of the two.

To make the comparison just recommended, one would have to first identify Piaget's theories as being pragmatic in outlook and those of both Mead and Morris as belonging to that conservative view of psychology and sociology which attempts to achieve order and predictability in a world of objects. Just because the objects are animate instead of inanimate does not, for its overextended objective reasoning, change the nature of the quest.

Thus a troublesome consequence of the pragmatist's insight is that, in his own mind, the opinions of other men will no longer be regarded as equal in perspicacity. If Mead himself believed that the source of information was in a reality "outside" of his subject, that presupposition on the part of Mead as observer and as theorist would account, to the reasoning of the pragmatist, for still another phase of the act denied autonomy in Mead's theory yet required by the pragmatic realm of speculation pursued by Piaget.

As my projected phase of reorganization will agree with the pragmatic hypothesis that knowledge is a creation of the mind, so this second neglected phase of the act will anticipate a constructed experience. Rather than an experience consisting of data received through the senses and somehow stored as pictorial or otherwise coded "representations" of an external reality in memory, pragmatic perception will itself be a constructive activity building on a foundation of actual instances of elemental sensory or motor acts, each one signaling the success or the failure of its small task when commanded to perform.

It is necessary to conclude that all "external" objects and relations in a universe of acts will be presented in experience by successful acts of perception. And from this the more general conclusion can be drawn that "contents" will be given in knowledge by overlapping collections of potential acts of perception, exactly as overlapping collections of potential acts of inference will define "contexts." A parallel can therefore be established theoretically, according to which the preservation in experience of either a specific content or a specific context will be signaled by the successful consummation of some member of that collection. However, the purpose of perception may be to ascertain that some object or relation is not present in the environment.

It should be noticed in passing that the logical calculus which George Boole dropped at our doorstep, whose computations of "truth-values" pampered the empiricist's expectation that his symbolic designs correspond with an external reality, will reappear in the pragmatist's universe of acts as computations of "success-values."
For looking backward in a pragmatic world at what was done in the past, such computations will be needed to determine the success or failure of a complex act in consequence of the successful or unsuccessful consummations of its elements. For looking toward the future, they will be needed to assess the internal validity of proposed acts. These computations will be of equal value for acts of inference.

As a matter of fact, it is by following out the strict parallel and symmetry of perception and inference that one can begin to get the hang of how the pragmatist orders his personal as well as his social cosmos. A coordinated matching of perception and inference, in which the two are equal partners, is the very source of his information. It is our world, not his, which assumes information will arrive from a material reality and so gives greater weight to perceiving than to inferring.

Anticipating your preference for an objective world, I have to this point glossed over the puzzling fact that a universe of acts will require two kinds of elements. The first are the elements of perception that have been brought to your attention. They were called "sensory" and "motor" acts because I assume their agents in biological organization to be organs of sensation and locomotion, respectively. For machines, the analogous agents will be "sensors" and "effectors," each one capable of signaling the success of its commanded task.

The second kind of elements will be the elemental acts of inference from which complex inferences may be constructed. Not knowing the biological agents of elemental inferences, I will for the present characterize them in mechanical terms as being able to produce or to recognize structures comprised of the mobile units I have called "concepts."

I rely on mechanical explanations without apology. Pragmatic hypotheses will have to be tested by means of electronic circuitry. Without computers, the progressive attainment of ever more comprehensive and equilibrated stages of mental dynamics could not be demonstrated convincingly.

But pragmatic experimentation will be not in the least concerned with "simulating" a mind, whatever method of comparison that might connote to its proponents. The methodological insight of the pragmatist is that whereas a mind cannot be described it can be constructed. His objective will be to construct a mechanically-based mind every bit as useful as the ones based biologically, in all truth potentially more so in view of such enticing properties as access to an unlimited range of sensors and effectors, infinite reproducibility at its prime, and effective immortality.

What is most striking about Peirce's dissent is its emphasis on acts rather than things. Like Langer, I think this is the key to his system-making. The tragedy is that, as far as we know, he didn't turn in an alternative set of diagrams. Yet it is certain that the "particles" with which he labored to construct his pragmatic universe are not thing-like but are instead act-like. His is a universe of acts in which successful acts of perception bring us as close as we can get to our accustomed universe of things.

A pragmatic technology will not move "information" in and out of its machines as computers do now, although there may be a lot more going on inside. No bits at all need cross the machine's boundary. This applies to "instructions" as well as to "data." (These, for the uninformed, are the bit-buckets into which computer-people pay tribute to Descartes' dichotomy.)
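To fix ideas, here is a minimal sketch, in Python and with every name invented for the occasion, of how such success-value computations might go. Nothing crosses the boundary of an element except a command and its success signal; the rule that a complex act succeeds only if all of its elements do is my simplifying assumption, not a fixed design.

    class ElementalAct:
        """A sensor or effector agent: it performs a small commanded
        task and signals only its realized success-value."""
        def __init__(self, name, task):
            self.name = name
            self.task = task                  # a callable returning True or False

        def perform(self):
            return bool(self.task())          # the success signal

    class ComplexAct:
        """A complex act whose success-value is computed from the
        consummations of its elements: Boole's calculus reappearing
        as a calculus of success rather than of truth."""
        def __init__(self, elements):
            self.elements = elements

        def perform(self):
            # Assumed here: the whole act succeeds only if every
            # element consummates its task.
            return all(e.perform() for e in self.elements)

    reach = ElementalAct("reach", lambda: True)
    grasp = ElementalAct("grasp", lambda: False)
    print(ComplexAct([reach, grasp]).perform())   # False: one element failed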
The sensors and effectors of an information system designed on the Peircian scheme will do much useful work, nonetheless, and may recognize or produce language signs in the bargain. For that last reason, I dub this alternative design a "semiotic system," distinguished from a "formal system" by being the creature of a universe nearer to life, and thus closer to language, in its arrangements.

To tell a programmer that he will have to give up the "instructions" with which he controls the computer is apt to cause a stomach ache. It is exactly the same stomach ache that one should anticipate among politicians as they watch a freewheeling pragmatic personality bouncing about in apparent disregard of the laws and other contractual means that control contemporary society. One should therefore notice that a semiotic system will be controlled by means of a propitious selection of its elemental acts. From this one might predict that a pragmatic society will be less concerned with social instruction but intensely interested in putting the right social agencies in place. These trends have emerged in our national life; they can be expected to cause the same sort of hair-raising scenes that happened when the nobles swiped the king's programming manual.

Another peculiarity of Peirce's design is its insistence on a world divided into three basic parts instead of Descartes' two. In the triad of Peirce's universal categories, one can identify as "presentness" the objective meanings of environmental fact, and as "law" the subjective meanings of organic form. But what of his third category, "struggle?"

Return, if you will, to the requirement for two kinds of elemental acts in a universe of acts: elements of perception and of inference. It will be seen that there are three basic combinatory possibilities. In addition to complex acts of perception composed of perceptual elements and complex inferential acts made up of elements of inference, there may be complex acts consisting of both perceptual and inferential elements.

I amend my hypothesis as follows: every pragmatic "meaning" will be defined in "perceptual knowledge" by a collection of potential acts, and will be presented in "perceptual experience" by an actual act successfully consummating some member of that collection. Only in the special case where members of the collection are composed entirely of perceptual elements will that meaning be a "content;" only if the members consist of inferential elements will the meaning be a "context." Otherwise that meaning will be, to use Peirce's term, a "resistance."
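The amended hypothesis can be put schematically. In the sketch below, again with invented names, a meaning is represented by the collection of its potential acts, each given as the set of kinds of elements composing it, and the classification falls out mechanically:

    PERCEPTUAL, INFERENTIAL = "perceptual", "inferential"

    def classify_meaning(potential_acts):
        """potential_acts: a collection, each member the set of kinds
        of elemental acts of which that potential act consists."""
        kinds = set().union(*potential_acts)
        if kinds == {PERCEPTUAL}:
            return "content"      # members composed entirely of perceptual elements
        if kinds == {INFERENTIAL}:
            return "context"      # members consisting of inferential elements
        return "resistance"       # the mixed, mediating case

    print(classify_meaning([{PERCEPTUAL}, {PERCEPTUAL}]))    # content
    print(classify_meaning([{PERCEPTUAL, INFERENTIAL}]))     # resistance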
In the meantime "concepts" may be regarded as act-like units of information corresponding to meanings, which is to say that they will represent the collections of acts just discussed. Those concepts corresponding to contents, the meanings of environmental presentness, will be "factual concepts." "Formal concepts" will correspond to contexts, meanings of law in the sense of process. "Organic concepts" will correspond to resistances, the meanings that mediate between presentness and law."Conceptual knowledge" will consist of the designs of concepts, one for each meaning in the semiotic system. Instances of these designs, having been arranged by inferences into conceptual structures, will constitute "conceptual experience."The remaining phase of the act, still unspecified in our revised tally, will be the phase of "conception," during which the responsibilities of tenuous acts of inference are taxed to extend conceptual structures beyond the frame resulting from immediate perception. This, then is the phase served by speculative discourse.However, all of the act's phases will involve the manipulation of conceptual structures. It is by studying the kinds of inferences being made, and thus the kinds of conceptual structures being produced or being recognized to guide perceptions, that the separate responsibilities of the phases can be identified theoretically.In short, the phases do define the main meanings in the semiotic system, reflected in language as Morris' dimensions of signifying. These after Peirce's still more basic triad of meanings: "presentness," "law"and "struggle." And the most fundamental is the duo of meanings, "knowledge"and "experience," on whose grid the mind is built.Enough ground has been laid to begin redrawing the basic distinction between the process and the states of adaptation in terms of mental organization. It should be recalled that pragmatic explanation always takes this to be its aim.One It should therefore be anticipated that the only government a pragmatist will respect is one that can do something for him or can teach him something by helping him to be aware of his own mistakes or by presenting him with creative possibilities that he may have overlooked in his personal life. His concept of good citizenry will be to return the favor to government in kind, since only by contributing to the social act can he come to respect himself as a useful member of society within the frame of his own attitudes.In consequence, the pragmatist's conception of his societal role is more directly related to serving and being served by society than has been the case for all of the preceding cultural orientations. Thus the pragmatic theory of language belongs to a social order that will direct its symbols more deliberately than the present one to stimulate the creative efforts of the collective mind upon which a successful social performance ultimately depends. It is within a post-industrial world view that designative, speculative, prescriptive, appraisive, and formative discourse may all be seen to contribute synergistically to the creation of a source of social information beyond the accomplishment of any single participant. This different conception of the collective interest is the one which will motivate a pragmatic science.On the other hand I have argued that a pragmatic science, because of its different conception of the information source, will proceed by a method exactly the opposite of empirical method. 
It will not make observations and then extract theoretical conclusions in the familiar pattern of today's technical document. Nor will it regard technical documents as "knowledge," no matter how high they stack.

Pragmatic method will make its advance by shaping an elaborate conceptual structure, at the beginning expected to be imprecise. One work of intellect will be to ensure the "internal" validity of the structure by inferences eliminating from it inconsistencies or dissonances. A second work will be done by inferences that test the "external" validity of the structure by using and then shaping it as a frame for successful sensorimotor acts, some of which may be acts of observation. A pragmatic science will not merely observe the environment, however. To learn pragmatically this science must do something useful; it must struggle.

Hence my conclusion that semiotic systems will become not only the instruments of learning at this stage of society, but will generate information shaped to usefulness through social use. The likelihood of this technology . . . Hence the idyl might end, in true science-fiction fashion, with mechanical minds ashamed of mortals, so bringing the pragmatist's age to its own just reward.

Therefore, as Peirce never tired of arguing, the requirements of science differ from those of society only with regard to precision. Along with personality, the scientific intelligence and the social intelligence will also be modeled on an act whose phases, from the pragmatic viewpoint instead of the objective one, are as follows.

In substance, a new community was formed by those hopefuls who took part in the mechanical translation stampede of the fifties. Computer types like myself joined in consortium with linguists who were then being dragged off of the streets as authorities on translation if they knew how to translate. The computer, in those first days of unblemished optimism, was the only employee in sight, and we told each other it would get to work shortly as soon as we gave it the plan.

That initial stage of research during which translation algorithms were designed, by our group and the others, was definitely ordered on the authoritarian scheme. And it is disquieting to notice in retrospect that the prime result of thoughtful doing in the following decade was to lift the computer from serfdom to industry. It had advanced from employee to middle manager, now carrying out the operating decisions of the general translation policy that linguists and systems analysts, by then become executives, had made.

You can see that Descartes' dichotomy had polarized us into its two camps.
For a while linguists and programmers went happily about their separate yet complementary research functions as allies in policy-making for a computer unfit to learn how to make factual or formal choices by itself. The role we had reserved for ourselves was to be the custodians of what the computer could, and should, learn about translating.

To do this, the budding science of linguistics had been transformed from an introverted scholasticism to such a heady mass-production of morphological and syntactic descriptions that I fear linguists beyond the borders of our small community became infected with the same compulsion. To handle the sheer volume of descriptive output, further investments were made in programming not directly concerned with translating but motivated by a need for better ways of storing, retrieving and displaying language data as an adjunct to translation research.

Two opposite requirements were pondered from the start. The first goal of mechanical translation must be an automated process which will extract meaningful units of some kind from a sequence of graphic symbols that represents a text of the language to be translated. If the extracted units are not concept-like, it is improbable that equivalent units will be found in another language, a risky quest at best. However it is done in detail, the transfer from the one language to the other must make use of a conceptual representation of the meanings of the text. That representation, at the very last step, must somehow guide the construction of a text in the second language. Hopefully, when all is through, the product will be true to the original text in meaning.

Over the last decade extensive research was done on generalized translation processes to perform such an automated analysis, transfer and synthesis of technical texts. I won't dwell on these techniques in detail, because you are probably well versed in them anyway. If not, the facts are fairly easy to find.

For my present purpose you need only be informed that, to analyze a text, the analysis process would use a "grammar" consisting of metalinguistic statements, frequently called grammatical or syntactic "rules." The theoretical inclination of the time was to think of these rules as "generating" only those expressions that were judged to meet certain criteria, the latter being too often an obstreperous rounding off of the linguist's "intuition" about language.

Whatever the origins or the justifications of the rules constituting the grammar, the automated analysis process would set out to show that the text, or some part of it under analysis, could have been produced by substitutions of those particular rules according to the generative procedures visualized for them. By starting from the text and working backwards through possible substitutions, accordingly, the analysis process would develop a tree-like structure of symbols naming the grammatical classes to which the various parts of the text belonged. Such classifications were nearly always "ambiguous," in that alternative structures grew side by side from overlapping segments of the text. This overgrowth of trees caused a lot of worry and many clever things were done with weedkillers, to no great avail.

I wouldn't go so far as to say that this approach to mechanical translation foundered on the ambiguity problem, though it was there that the deeper misassumptions wallowed to the surface to be seen.
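For readers who never met such an analysis process, here is a schematic Python rendering of the kind of thing described: working backwards from the text through possible rule substitutions and letting alternative structures grow side by side. The toy grammar and the example sentence are my inventions, not taken from any actual system of the period.

    from collections import defaultdict

    # Toy grammar: unary rules classify single words; binary rules say
    # which neighboring classes may be substituted by a larger class.
    unary = {"time": {"N"}, "flies": {"N", "V"}, "like": {"P", "V"},
             "an": {"D"}, "arrow": {"N"}}
    binary = {("N", "N"): {"NP"}, ("D", "N"): {"NP"}, ("V", "NP"): {"VP"},
              ("P", "NP"): {"PP"}, ("V", "PP"): {"VP"}, ("NP", "VP"): {"S"}}

    def analyze(words):
        """Record every class subtending each segment of the text
        (the tabular method now known as CKY parsing)."""
        n = len(words)
        chart = defaultdict(set)
        for i, w in enumerate(words):
            chart[i, i + 1] |= unary.get(w, set())
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                k = i + span
                for j in range(i + 1, k):
                    for a in chart[i, j]:
                        for b in chart[j, k]:
                            chart[i, k] |= binary.get((a, b), set())
        return chart

    chart = analyze("time flies like an arrow".split())
    print(chart[2, 5])   # 'PP' and 'VP': alternative structures side by side
    print(chart[0, 5])   # 'S': the whole text classified as a sentence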
The folkways of ambiguity "resolution" gave the first clues that the trouble might not be in the machine but in the heads outside.

My chief purpose in this essay has been to explore the possibility that designers of fancy information systems, like everyone else, base their inventions on reasons which are in the end uniquely personal. No damage will result unless the technical objective requires the designer to make use of such fundamental concepts as "meaning." But in this case, if the organizing principles of his personal world do not satisfy the technical needs of the problem, his solution must be unsatisfactory. At this extraordinary forefront of design conception, the designer's ability to successfully shape intelligent machines will be inseparable from his ability to successfully shape himself.

No matter how the goals of mechanical translation are renamed or reclassified, the underlying requirement will still be the development of a mechanical analogue of mental organization. I would therefore like to make the flamboyant suggestion that the great depression which decimated the translation research community in the late sixties was due to misestimation, or outright neglect, of the psychological requirements of this kind of investigation.

The emotionality which plagued mechanical translation at its dawn was an early indication of the effects that pragmatic inferences can have on the investigator's own psyche. Those disruptions were indeed mollified by treating translation research as though it were an undertaking of empirical science. But since methodological appeals to intuition went out of style in empirical science long ago, this posture is obviously a playhouse that should have been a way station.

To my mind the feasibility of constructing information systems that will translate languages just as well as human translators is no longer in question. The experiments of the last decade have convinced me that machines will translate better than humans in the long run, provided the pragmatic nature of the research can be expressly acknowledged and planned for.

Lauding a technology of the future is senseless, however, if it says nothing about present choices which will capitalize on the hard lessons of the past. An honest appraisal should find that men have been at fault in mechanical translation, not machines. More damnable is the growing evidence that, for reasons which seem reasonable enough to their myths about themselves, the investigators have attempted to do the machine's learning by a bureaucratic shuffling and sifting which leaves in clumsy human hands the very things that computers do best.

My recommendation may not be popular but I feel it is sound. To get the job done the translation community will have to make use of its forerunners, deliberately looking for exceptionally gifted investigators with that troublesome pragmatic personality which may see problems of mechanical selection in a different light. The other choice will be genteel stagnation.

In my opinion there is no practical alternative to a mechanical organization that will permit a choicemaking machine to have its own experience balanced adaptively to its own knowledge. To try to approximate this by preplanning is hopeless.
Yet only pragmatic experimentation with the necessary relationships of experience and knowledge can actually demonstrate the irrationality of the self-satisfying toil that stuffs human know-how into computers.

Such a turnabout in human motivation will entail reconsideration of what has been learned to date. In an upside-down pragmatic world it will not be reasonable to think of the processes of analysis, transfer and synthesis as "simulating" what might have been done by a human translator somewhere external to the machine.

Instead, the analysis process will be regarded as "assimilative" in the sense of establishing an orientation between an internal frame of experience and the specific features of an external environmental situation, which may itself contribute new experiences. The transfer process will make those choices which ultimately relate the situation to a purposive course of action founded on that dynamic experiential framework. Lastly, the synthesis process will be "accommodative" in that it will construct the specifications of the next act conforming to that purpose, to then be performed overtly by the machine.

To project known mechanical arrangements to the pragmatic point of view being considered here, I would like for you to imagine a different kind of "grammar"; if you please, a grammar of acts. The "rules" of my pragmatic grammar will be formed like the ones familiar to you, with the exception that the symbols they will generate will no longer name morphological units of a language. They will name elemental acts.

Of course, the tasks of certain elemental acts may be to recognize or to produce viable features of speech or writing. A full range of morphology will be provided by these elements, however; the capabilities will be much broader than those needed for linguistic analysis or synthesis.

The "higher level" coding conventions that have been in use for some time in computer software systems might be a precursor of a pragmatic grammar, since they enable a programmer to construct complex programs from fragments of programming called "subroutines." But the constructive viewpoint of formal systems would not be left behind, to be replaced by that of semiotic systems, until each of the constituent subroutines was explicitly designed to signal its success, or lack of it, in accomplishing some commanded task.

Thus the terms I have been using to introduce you to pragmatic thinking can be clarified further at this point by relating them to the more familiar artifacts of language processing.

A "potential act" will be symbolized by each of my pragmatic rules. The collection of all such rules will represent the "perceptual knowledge" of the semiotic system. An instance of any one of the rules, when it has been incorporated into the tree-like structures created by either an analysis or a synthesis process, will symbolize an "actual act." The entire structure, or perhaps separate structures, consisting of all actual acts, will represent the semiotic system's "perceptual experience," on the proviso that it will be possible to compute the success or failure of an actual act if the success or failure of each of its generated elements is known, or vice versa.

The tree-like structures of symbols representing perceptual experience will always be anchored to the simply ordered sequence of elemental acts which has been referred to as the "stream of existence" of the semiotic system.
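To make the notion concrete, the following fragment, in the manner of the earlier sketches, shows what rules of a grammar of acts might look like. The vocabulary of acts, the rules, and the single-alternative expansion are all invented for illustration.

    # Elemental acts: the terminal vocabulary of a grammar of acts.
    elements = {"SEE_CUP", "REACH", "GRASP", "LIFT"}

    # Each rule symbolizes a potential act; its right-hand symbols name
    # either further potential acts or elemental acts.
    rules = {
        "DRINK":  [["LOCATE", "TAKE"]],
        "LOCATE": [["SEE_CUP"]],
        "TAKE":   [["REACH", "GRASP", "LIFT"]],
    }

    def expand(symbol):
        """Generate one sequence of elemental acts from a potential act.
        A real system would choose among alternatives; this sketch
        simply takes the first substitution possibility of each rule."""
        if symbol in elements:
            return [symbol]
        return [e for s in rules[symbol][0] for e in expand(s)]

    print(expand("DRINK"))   # ['SEE_CUP', 'REACH', 'GRASP', 'LIFT']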
As before, the symbols of the structure will name classes to which the various parts of that existential stream belong. The classification will still be "ambiguous" where alternative structures subtend overlapping parts. A predicted success-value will accompany each elemental act projected into the stream of existence; the prediction will be either "success" or "failure." When the complex act is committed to action, by commanding its elements to perform their separate tasks in serial order, the agent of each element so commanded will signal "success" on reaching its small objective; otherwise, "failure." This "realized success-value" will also accompany the name of the elemental act so that the two values can be compared. Further, this realized value will be the one used by the analysis process as it works backwards from the elements through possible rule substitutions.

I can now begin to explore the functional analogy presumed to exist between the psychological act and its primal agent, the biological act of which the "agent of the act" will be the mechanical analogue. My explanation of the act's agent will lay necessary groundwork for speculations about the psychological act, and will give a preview in microcosm of the more intricate psychological phases of the act.

Life has its rhythm wherein each new beginning has sprung from a termination just on the edge of the past and each new termination has anticipated another beginning at the edge of the future. The functioning of the agent of the act will be cyclical, itself forming an act in miniature. To get the cycle started, a random generation of elements of the stream of existence might be used to approximate, for a semiotic system, the reflex starting mechanisms observable among infants of all kinds.

The first activities of the act's agent will be analogous to those of the psychological phase of "perception." A given stream of existence will have resulted from the cycle just terminated. Starting from the elements of that stream, the analysis process will work backwards through rule substitutions which could have generated those elements. This phase can be thought of as "assimilative" in that a representation of perceptual experience will be its resultant construction.

While the tree-like structures representing actual acts are being put in place by the analysis process, the realized success-values accompanying the elemental acts of the existential stream will be used to determine, after the fact, whether each of those actual acts would have been successful had it generated the part of the stream to which it is being anchored. In effect, the analysis will provide a recap of alternative acts, other than the one overtly committed in the cycle before, that could have produced the results recorded in that prior segment of existence.
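The bookkeeping of predicted and realized success-values, on which both commitment and this after-the-fact recap depend, might be rendered as follows; the agents and the plan are again inventions of mine.

    def commit(plan, agents):
        """plan: (element_name, predicted_value) pairs in serial order.
        Returns the realized segment of the stream of existence, with
        predicted and realized success-values recorded side by side."""
        stream = []
        for name, predicted in plan:
            realized = agents[name]()        # command the element's agent
            stream.append((name, predicted, realized))
            if realized != predicted:
                break                        # an unexpected outcome: a misfit
        return stream

    agents = {"REACH": lambda: True, "GRASP": lambda: False}
    print(commit([("REACH", True), ("GRASP", True)], agents))
    # [('REACH', True, True), ('GRASP', True, False)]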
Ambiguities, in this pragmatic scheme, could turn out to be a positive blessing since they alone will introduce novelty. The luxury of being able to select a different orientation for further action, of having a "change of mind," will only be possible when ambiguities have been found. That luxury will become a necessity when the consequences of having acted were unexpected. If the predicted success-values of the preceding act were not realized then a misfit of orientation, and consequently a need to select another alternative, will have been indicated.

Choosing among the alternatives uncovered by analysis will be the second activity of the act's agent, analogous to the selection of an orientation to conceptual structures in the psychological phase of "conception." At the primitive level of functioning of the agent of the act, selections of orientation will have to be made without the help of concepts. Indeed, this analogue of the biological act must be the very source of concepts. A theme echoed over and over in observations of the conceptualizing state of mind is choicemaking founded on tradition, on ritual, on mere replication of what has already happened and best of all more than once. Concepts themselves will be the accretions of acts often repeated; sure to be repeated again.

During my own phase of ambiguity "resolution," out of desperation more than anything, I worked out a theoretical suggestion made to me by Raymond Solomonoff, who had the idea that a generative procedure in which rules are being substituted could be treated as an independent stochastic process. By having the machine keep up with the relative frequency of substitution of the rules generating the members of each separate class, fairly simple procedures can be programmed for selecting from results of analysis those alternatives which replicate earlier perceptual experiences in a gross probabilistic sense.

The hypothesis that rule substitutions are stochastically independent events seems to work out for a so-called "stochastic grammar." There is also a convenience in programming, because it is the assumption of independence which permits the relative frequency of substitution of a given rule to accompany that rule in the grammar.
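Under that independence assumption, the likelihood of an alternative analysis is simply the product of the relative frequencies of the rules it substituted. A sketch, with invented figures:

    from math import prod

    # Relative frequencies of substitution, carried with each rule.
    relative_frequency = {
        ("NP", ("N", "N")): 0.2,  ("NP", ("D", "N")): 0.8,
        ("VP", ("V", "NP")): 0.7, ("VP", ("V", "PP")): 0.3,
    }

    def likelihood(derivation):
        """Independence lets the likelihood of an analysis be computed
        as the product over the rule substitutions it made."""
        return prod(relative_frequency[rule] for rule in derivation)

    alternatives = [
        [("NP", ("N", "N")), ("VP", ("V", "PP"))],   # 0.2 * 0.3 = 0.06
        [("NP", ("D", "N")), ("VP", ("V", "NP"))],   # 0.8 * 0.7 = 0.56
    ]
    best = max(alternatives, key=likelihood)   # replicates the commoner experience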
Paths ahead through the partial ordering can be rated as a convenience to the process that will make the final selection of elements to be activated, one after the other, to push the stream into a newly realized segment of existence. The process doing the final selecting and activating of elements will be responsible for the fourth activity of the act's agent. Like the phase of "consummation" of the psychological act, this activity will be "accommodative" in the raw sense of rubbing against an unsympathetic environment.

Each successive element will be selected from the most highly rated path and then commanded to do its thing. The realized success-value that it signals will be matched with the predicted one as a condition for continuing. If the values do not match, the process will look for another path where providently the realized success-value of that same element might have been predicted for the step gone amiss. Or, if by its nature the abortive task could have no damaging effect, being one of recognition for instance, then the process will still have room to back up and try another path, until none remains. Then the path along which predictions were finally realized will become the new segment of existence to be analyzed in the next cycle. A number of cycles may be necessary to work through a complex act; how many will depend on the difficulties encountered in trying to surmount unrealized predictions. In times of such trouble, the most promising alternatives may be brought forward by probabilistic choices that span from structures now well behind the segment of existence being analyzed.

Stochastic grammars are less tidy than the ones you may be accustomed to. Overlaps should be anticipated as the normalcy of a pragmatic universe; the termination of one act will also be the beginning of another. Luckily, the probabilistic selection process which I have been airing has an affinity for an act being terminated. Not until the termination is complete will it switch to another act, one already in progress and being brought ahead as an alternative possibility.

To handle a messy, poorly integrated perceptual experience is a requisite ability of a semiotic system. It is from pristine chaos at this most primitive level that the rules symbolizing potential acts must originate; and afterwards the collections of potential acts representing meanings must get together; and only then can concepts be created in correspondence with meanings. The remaining duty of the agent of the act will be to procreate concepts. Learning to shape the concepts themselves will be functionally analogous to the psychological phase of "reorganization," where the responsibility of learning will be to shape structures built with concepts.

There will be scant materials for reorganization in perceptual knowledge at the outset. The initial rules, representing all that the semiotic system knows, will simply place every elemental act of that unique pragmatic universe into a one-member class. From such an unpretentious sow's ear, classificatory processes will be called on to custom-produce silk purses.

The white hope of the pragmatic viewpoint is the new slant it puts on inductive reasoning toward knowledge anticipating experience. A resurgence of interest in the theory of induction, after its long sleep as the stepchild of empirical science, may in the end wean mankind from classifying things. A pragmatic science will classify acts.
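Before turning from doing to learning, a minimal sketch of the consummative activity described above: elements are activated from the most highly rated path, and a mismatch of realized and predicted success-values sends the process to another path. The outcome table, the ratings, and the simplification of retrying whole paths from the start (rather than resuming at the step gone amiss) are all illustrative assumptions:

    world = {"open-door": True, "push-door": False, "pull-door": True}

    paths = [  # (rating, sequence of (element, predicted success-value))
        (0.9, [("open-door", True), ("push-door", True)]),
        (0.6, [("open-door", True), ("pull-door", True)]),
    ]

    def consummate(paths):
        # Try paths from the most highly rated downward.
        for rating, path in sorted(paths, key=lambda p: -p[0]):
            realized = []
            for element, predicted in path:
                value = world[element]            # the element signals its value
                realized.append((element, value))
                if value != predicted:            # misfit: abandon this path
                    break
            else:
                return realized                   # the newly realized segment
        return None                               # no path realized its predictions

    print(consummate(paths))
    # [('open-door', True), ('pull-door', True)]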
Until this pragmatic classification of acts is well understood, the possibility of machines that learn efficiently can rightly be looked on with suspicion, along with the possibilities of fast-learning personalities or societies.

In order to shape perceptual knowledge, inductive processes of the act's agent will monitor "local" events in the structure of perceptual experience. Such events as rule substitutions or the neighboring of symbols in certain relationships to one another will be monitored. From the data so gathered, automatic classification will be used to locate points of weakness in the body of perceptual knowledge, or to detect possibilities for extending that body by the addition of new rules. These data may be gathered from many cycles of "doing," as the act's agent pursues its first four activities. Only once in a while, at a propitious moment, will the rules symbolizing perceptual knowledge be updated to incorporate in them what has been learned since the last updating. These "learning" cycles may have to be carried out during periods of inactivity and rehabilitation not unlike sleep.

Some of these necessities of pragmatic learning were programmed by our group in the mid-sixties as a means of "debugging" grammars. Billed in our reports as a "self-organizing linguistic system," the programs made use of theories of automatic classification put together by Roger Needham and other members of the research group at Cambridge, England. Our research objective was a better grasp on that elusive relationship by which a grammar is said to "describe" the contents of a particular collection of texts.

Firstly, the so-called "horizontal" classifications are the ones which detect possibilities for creating new rules. The events to be monitored will be those in which two symbols classify adjoining segments of the stream of existence where all predicted success-values were realized for the elements of both segments. Automatic classification will then cluster together the first members of such pairs that have been followed by similar second members. The second members that have been preceded by similar first members will be clustered also. Clusters of first members will then be matched to clusters of second members to induce those chummy relations between neighboring classes of segments that a rule will symbolize in perceptual knowledge.

While horizontal classifications will originate all new information at this primitive level, in the form of perceptual hypotheses symbolized by rules, refinements of the resulting perceptual knowledge will depend on "vertical" classifications. As classes named by the symbols in rules are progressively refined, the probabilistic selections of perceptual experience will favor the structures incorporating the nicest refinements. The most comprehensive structures will also tend to be chosen as working alternatives. Even here the theoretical treatment of probability is intimately connected with the treatment of induction. Verification will be gradually accomplished by use. When an induced rule is no longer being selected probabilistically for use, it will be consigned to oblivion.

The events monitored for vertical classifications will be rule substitutions in perceptual experience, as jointly given by the symbol being substituted and the symbol at the place of substitution. Automatic classification will cluster those symbols which have appeared in similar places of substitution. In addition, a clustering will be done of the places that are similarly receptive to the symbols being substituted.
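A toy version of the horizontal scheme, under obvious simplifications: symbols whose segments have been followed by similar right-hand neighbours get clustered together; the complementary clustering of second members, and the matching of the two sets of clusters, would follow the same distributional pattern. The pair data, the similarity measure, and the threshold are all invented:

    from collections import Counter, defaultdict

    # Adjoining pairs observed where all predicted success-values were realized.
    pairs = [("A", "X"), ("A", "Y"), ("B", "X"), ("B", "Y"), ("C", "Z")]

    contexts = defaultdict(Counter)       # first member -> counts of followers
    for first, second in pairs:
        contexts[first][second] += 1

    def similarity(a, b):
        # Crude overlap of follower distributions; any distributional
        # similarity measure could stand here instead.
        common = sum(min(contexts[a][k], contexts[b][k]) for k in contexts[a])
        total = sum(contexts[a].values()) + sum(contexts[b].values())
        return 2 * common / total

    clusters = []                         # greedy single-pass clustering
    for s in list(contexts):
        for c in clusters:
            if similarity(s, c[0]) > 0.5:
                c.append(s)
                break
        else:
            clusters.append([s])

    print(clusters)                       # [['A', 'B'], ['C']]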
The clusters of symbols being substituted will then be matched to clusters of places of substitution to detect those concentrations of affinity which will define more specialized classes to be named by new symbols.

It will be found that these vertical classifications can be carried out for the substitutable symbols and the places of substitution instancing the name of a single class. That class will have "stabilized" when no clusters, either of the symbols or the places, result from automatic classification. For that specific class, the proper balance between experience and knowledge will exist temporarily. Disequilibrium can return to it at any time due to refinements of knowledge taking place elsewhere, or due to new knowledge being acquired.

To guard against overspecialization, the same techniques can be applied to the symbols instancing the names of two classes which have been shown by horizontal classifications to be very close in membership. If the clustering resulting from automatic classification does not detect in experience this distinction being made in knowledge, then the difference will be "forgotten" by the simple device of thenceforth using the same name for both classes.

"Forgetting" rules that have been originated hypothetically but not used at all should be done posthaste. That a rule is not used very often, on the other hand, should not condemn it. For sweeping the dead wood out, an obvious measure of obsolescence is the ratio of rejection to selection in probabilistic choices.

The arrangements I have explained to this point might be thought of as comprising the "morphology" of the semiotic system and those arrangements usually referred to in semiotic theory as "syntactic." I take the morphological arrangements to consist of the agents of the elemental acts, including among these the sensors and effectors, together with the act's agent whose processes I am still considering. The syntax of the system comprises the constructions of perceptual experience and knowledge created by the act's agent from rules of a type which will now be designated as "syntactic" in character because they classify sequences of morphological elements.

The second principles of arrangement I would have you consider were also worked out theoretically for the "self-organizing linguistic system." Although most of the processes I will now explain were programmed and used for other purposes, pragmatic learning experiments were never performed with them. What you should recognize about this part of the semiotic system is its dependence on a higher level of symbolization by rules to be characterized as "semantic" because the classes named by their symbols will be the ones representing meanings.

Whereas the symbols of syntactic rules will name individual elements or classes of sequences of such elements on the morphological level below, the symbols of these semantic rules will name either individual syntactic rules or classes of "syntactic segments" constructed of syntactic rules joined together at their usual places of substitution. Some of the places may still be open for further joining. If it suits you, think of these semantic rules as generating by a process of substitution not sequences of elements but rather the tree-like structures comprising the perceptual experience of the semiotic system. These semantic substitutions can also be treated as an independent stochastic process.
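A sketch of the dead-wood measure mentioned above, the ratio of rejection to selection in probabilistic choices; the counts and the cut-off are invented, and a real system would accumulate them across many cycles of doing:

    usage = {          # rule -> (times selected, times rejected)
        "R1": (40, 10),
        "R2": (1, 99),
        "R3": (0, 0),  # hypothesized, but never offered for use at all
    }

    def obsolete(selected, rejected, threshold=20.0):
        if selected == 0:
            return True                   # forget unused hypotheses posthaste
        return rejected / selected > threshold

    survivors = {r for r, (s, j) in usage.items() if not obsolete(s, j)}
    print(survivors)                      # {'R1'}

Note that a rarely used rule with few rejections survives the test; only rules persistently passed over are swept out.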
To return to the semantic level: semantic rules will be "stochastic" in the same sense as the syntactic, making possible very similar probabilistic means of selecting among alternatives of semantic analysis or semantic synthesis. Semantic synthesis, starting from a given symbol naming a class of syntactic segments, will substitute semantic rules in order to construct a member of that class. Thus the synthesis process itself will construct a tree-like structure, consisting of semantic rules, that is anchored to the syntactic segment it has synthesized from syntactic rules. Semantic analysis, starting from a given structure constructed of syntactic rules, will work backwards through possible substitutions of semantic rules to determine that certain segments of that syntactic structure are members of particular semantic classes. It too will build a semantic structure anchored to the syntactic one it is analyzing.

Every syntactic rule in the body of perceptual knowledge has been taken to symbolize a potential act. A syntactic segment will also be regarded as symbolizing a potential act that is not given explicitly in knowledge, yet is implicit in the sense of being producible in perceptual experience by means of a synthesis process or recognizable there by means of an analysis process. Symbols naming semantic classes will, by these constructive means, be implicitly related to particular collections of potential acts represented in the semiotic system as syntactic segments. These are the collections to be called "meanings." Consequently, the symbols of a semantic structure will represent a hierarchy of meanings being presented by the syntactic segments to which it is anchored.

I offer no arguments in defense of these semantic arrangements, since to argue for their theoretical validity would be meaningless from the pragmatic viewpoint of the semantic hypothesis itself. Syntactic segments have been the units associated with meanings in translation experiments and in studies of paraphrasing. Techniques of semantic classification used by linguists toward these research objectives appear to be "distributional" like the syntactic. What recommends this hypothesis, therefore, is that it is testable by automatic classification under the rigorous controls which can be exercised by computers in experiments aimed at a pragmatic explanation of the kinds of human behavior observable in translating or in paraphrasing.

While certain human activities reveal the structure of meaning more than others, it will be assumed that meanings are used without exception in all forms of behavior. The consequence of this supposition for the processing requirements of the act's agent will be to introduce a higher level of semantic analysis and projective synthesis above the syntactic ones. The effect will be a superposition of semantic constraints on possibilities being carried forward by probabilistic selections among the syntactic alternatives. To be more specific, the structures resulting from syntactic analysis of a new segment of the stream of existence will, as a continuation of the first activity of the act's agent, be subjected to semantic analysis. The semantic structures will then be projected forward by probabilistic choices which will generate the projected syntactic structures on the level below.
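To fix ideas, a deliberately tiny sketch of the two levels: semantic synthesis chooses a syntactic segment for a semantic class, and the syntactic rules then expand into elements, leaving the semantic structure anchored to the syntactic one below. Both toy "grammars," and the use of a fixed first alternative where the full scheme would choose stochastically, are my own assumptions:

    syntactic = {                # syntactic class -> expansions into elements
        "FETCH": [["go", "grasp", "return"]],
    }
    semantic = {                 # semantic class -> syntactic segments (by name)
        "ERRAND": [["FETCH"]],
    }

    def semantic_synthesis(sem_class):
        # Choose a member segment, then expand it syntactically into elements;
        # the full scheme would make both choices probabilistically.
        segment = semantic[sem_class][0]
        tree = []
        for syn_class in segment:
            expansion = syntactic[syn_class][0]
            tree.append((syn_class, expansion))   # semantic node anchored to
        return tree                               # the syntactic level below

    print(semantic_synthesis("ERRAND"))
    # [('FETCH', ['go', 'grasp', 'return'])]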
Probabilistic syntactic selections can then proceed as explained earlier, as can the fourth consummative activity of the cycle of doing.

In the learning cycle of the act's agent, "syntactic" inductions can be distinguished from the "semantic" inductions proceeding from perceptual experience, to be represented by the semantic structures, toward perceptual knowledge of meanings, to be symbolized by the body of semantic rules. With regard to the inductive processes themselves, vertical classifications of substitutive events in semantic structures will be identical to those of syntactic structures. The processes that specialize classes or generalize them by forgetting distinctions can in fact be used on both levels of symbolization, as can the processes doing away with obsolete rules.

Horizontal classifications of syntactic segments introduce a number of new theoretical problems because these segments are not linear but are tree-like in form. Again the events to be monitored are those where two symbols in the semantic structure classify adjoining segments in the syntactic structure below. Now, however, the root of one tree-like segment will be joined to a particular branch of the other. It will be necessary to keep track of the specific branch where joining has occurred.

But since the two symbols name classes of syntactic segments, the two segments actually joined in the syntactic structure below are merely representative members of the classes so named. The scheme for designating places of adjoinment must relate to the whole class of syntactic structures instead of to the branches of its individual members. For example, the places can be numbered so that a given numeral will designate the same place of joining throughout a class of syntactic segments. Further, that numeral may designate more than one branch of any syntactic structure of that class as being the same place of joining.

Pairs of symbols classifying syntactic segments adjoined at places designated by the same numeral will be processed by automatic classification in the manner already explained. The results will detect classes of syntactic structures which have an affinity for joining at that place. In essence, the inductive process at this semantic level must learn the correct ways to designate the places of joining if the classifications are to progress very far. There are simple conventions by which the numerals designating such places of joining in syntactic segments can be associated with the symbols in semantic rules which name classes. As a result the designations of places of joining will be generated by the semantic synthesis process along with the syntactic rules so joined. Semantic analysis will also take these designations into account as it works backwards through possible substitutions.

Finally, there are arrangements of yet another kind that might be called "pragmatic" because their organizing principles have to do with a world view represented by speculative conceptual structures.
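Before taking up those pragmatic arrangements, a small sketch of numeral-designated places of joining in tree-like segments; the tuple representation and the example segments are assumptions of mine:

    # A segment is (label, children); an integer child marks an open place of
    # joining, and the same numeral names the same place throughout a class.
    segment_a = ("MOVE", [("go", []), 1])         # place 1 still open
    segment_b = ("TAKE", [("grasp", [])])

    def join(host, place, guest):
        # Anchor the root of `guest` at every branch of `host` numbered `place`.
        label, children = host
        return (label, [join(c, place, guest) if isinstance(c, tuple)
                        else (guest if c == place else c)
                        for c in children])

    print(join(segment_a, 1, segment_b))
    # ('MOVE', [('go', []), ('TAKE', [('grasp', [])])])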
This pragmatic part of the semiotic system is constituted by structures of concepts representing conceptual experience and a body of conceptual knowledge representing the conceptual designs which are instanced in conceptual experience. Concepts, the building blocks of the semiotic system's world view, will be originated by the act's agent for those semantic classes which have stabilized according to the criteria presented for syntactic classes.

The fact that such enclaves of stability may be disrupted by further learning will help to explain the dynamics of the progression of intellectual development, in which quite different world views emerge only to be destroyed at the next advance of the adaptive process. As we also know, the meanings to which concepts correspond may change gradually by adaptations not always in the direction of structural clarification or refinement.

In the correspondence of concepts to more or less stable meanings, each numeral which designates places of joining in those syntactic segments representing a given meaning will appear in the design of the corresponding concept just once. The number of different numerals will be the "degree" of the concept. A "binary" concept, for example, will be able to connect with two other concepts in conceptual structures; a "ternary" concept, with three.

Conceptual structures will in a sense go behind the serialization which is necessary to meaningful actions, and during which the same part of a structure being represented by concepts may be acted upon more than once. To go beyond serial behavior, to a conceptualized world view, will be the function of the psychological act itself.

The perceptions of all other phases of the act except the first appear to be concerned with locating environmental situations worth looking into. In contrast to the elements needed to select situations for exploration, the first "perceptual" phase of the act specializes in the identification of objects or relations, follows moving objects, and recognizes the specific movements of objects being followed. The responsibilities of this phase can be characterized as those necessary to keep up with some situation that had been previously singled out as having import within the separate responsibilities of another phase of the act. Elemental acts of inference are coordinated with elemental sensorimotor acts to the end that the former inferences update conceptual structures representing in experience what the latter perceptions find going on in the immediately perceivable environment.

Some of the inferences will be producing or modifying conceptual structures in correspondence with the meanings being presented in perceptual experience by semantic structures. Other inferences, coincidentally, will be recognizing the constructions being shaped so as to guide perceptions that will further develop the situational constructs. While conceptual structures are being recognized by inferences or new structures produced by them, environmental objects or relations may be in motion relative to the sensors of the semiotic system. Those movements may or may not be affected by manipulations on the part of the effectors. Thus a four-way coordination is called for.
Sensory and motor elements will combine freely with structure-recognizing and structure-producing elements of inference to form complex perceptual acts. Coordination resides in the combinations themselves since, to be successfully presented in perceptual experience, a complex act must encounter in the consummation of its double orientation of inferences and perceptions the conditions of success or failure anticipated beforehand in perceptual knowledge by that specific combination of elements.

As was mentioned, these mechanical arrangements are not peculiar to the act's phase of perception. Complex acts carrying out the responsibilities of the other four phases of the psychological act will coordinate elemental perceptions and inferences by this combinatory means. What each phase does in the way of fulfilling its special responsibilities will depend on the particular elements being combined. It follows that selecting the elements to be made available for combination will be one of the ways by which a pragmatic technology will control its information systems or subsystems. This manner of maintaining control over machines will be analogous to the biological controls that Piaget hypothesizes to be the result of his first type of genetic factor. By his theory such factors not only guide the maturation of organs of sensation and locomotion; innate coordinations residing in the reflexes are also their biological consequences.

The specific method of processing to be performed by the agent of the act will be a second way of controlling semiotic systems. The act's agent, a mechanical analogue of Piaget's "functional nucleus" whose development in biological organization he attributes to his second type of genetic factor, has now been explained with regard to the general principles underlying its processing. The biological act, of which the act's agent will be the mechanical analogue, was presumed to be a simplified version of the psychological act now being considered.

In private, when the individual personality is supreme in its own right, these same significations will facilitate the phases of the personal act of individual men and women. The pragmatic conception of society derives from these cosmological assumptions. They imply that the social act will be most successful when the specifications being converted into action by participating agents will have their origins in specialized components of the society that are deliberately organized to carry out the responsibilities of the several phases of the social act. From this it can be predicted that society at the sixth cultural stage will give first priority to providing suitable agents for the act's phases. Any other motive will seem unreasonable to pragmatic thinking because deviations from this aim could only steal from societal life by detracting from the synergy of the social system. For the motive of synergistic increase will also reign in the individual personality of the pragmatist.

Pragmatic technology being derived from the same assumptions, this society will have the option of providing mechanized agents for social responsibilities that may be dangerous, unpleasant, boring or impossible for humans. I have not hesitated to project a cybernetic society gaining a part of its synergy from symbiosis with semiotic systems.
Having started, the partnership will surely increase.

Within the mechanical organization of a semiotic system, the agent of the act will also convert the specifications of complex acts by the same method regardless of their specialized origins in the subsystems responsible for the act's several phases. The separate responsibilities of the phases can therefore be set forth by an account of the particular kinds of perception and of meaning corresponding to the phases. A further simplification can be made in the theory of semiotic systems by assuming that the perceptual elements will be common to all subsystems. This assumption seems reasonable in view of my conclusion that the inferential elements are the ones that explain the purposes of the perceiver. Inferences within the coordinating combinations, by recognizing or producing conceptual structures, will effectively guide acts of perception. Consequently, when the perceptual elements are known, responsibilities of semiotic subsystems can be investigated or specified in terms of required inferences alone.

For this reason I have presented the adaptive process as one of formal learning, where the very concept "formal" corresponds to meanings derived from inferences. Now I have further clarified the concept "learning" as being motivated toward ever more accurate knowledge of the specific inferences needed to implement each of the act's phases. You should recall my previous observation that every advance of the adaptive process is felt by the mind as an increase of mental capacity or "insight." That increase, here taken to be the very signal of successful learning, will be explained pragmatically as a gain of synergy in consequence of inferences being used in closer approximation to the requirements of the act.

"Progress" in a pragmatic society will be indicated by this synergistic increase, and the ability to produce it will measure the progress of a pragmatic technology. Research and development of semiotic systems will proceed by a humanly controlled evolution of mechanical agents. After research decisions have been made about new or revised agents to be used in the next experiment, and after those agents have been ensconced in software, or more likely in integrated circuits, the rest will be up to the machine. Apart from experiments with agents, a pragmatic technology will not make use of the programming or the inputs of data which have been required so extensively in the development of information systems of the von Neumann technology.

Any change of elements, or a new method of processing by the act's agent, will be the mechanical analogue of "mutation" as far as a given semiotic system is concerned. In considering the developmental stages of such a machine for purposes of theory, I will assume that the agent of the act and the availability of elements of perception and inference remain unchanged. A consequence of this theoretical choice will be that the progression of adaptive stages must be explained in terms of new meanings being originated in the system rather than a newly modified morphology.

The agent of every elemental act of inference will be thought of as lying dormant until the origination of the kinds of concepts to be manipulated by that inference. As the stabilization of a new meaning will initiate a new concept to be put to use in conceptual structures, so that concept may activate inferences until then dormant.
Activated inferences, in their turn, will combine in new coordinations with sensorimotor elements to eventually originate, and perhaps to proliferate, new meanings. So around again. The creative bootstrapping of information is here fully rotated, although the kinds of concepts to be originated are still to be unraveled.

The creative aspect of pragmatic theory is nowhere more apparent than in the act's second phase of "conception." The responsibility of this phase will be to construct a conceptual structure more encompassing and more integrated than the one representing the immediate situation. To do this, conceptual inferences will also use the inventory of concepts whose designs have so far originated within the creative activities of the act's agent. Building blocks of every conceptual structure will be instances of these conceptual designs. In contrast to inferences of the first phase of perception, which might be characterized as assimilative, conceptual inferences will be accommodative. They will function to extend or to revise, in a word to "shape," an experiential structure of concepts that was the product of conceptual inferences similarly used in the past.

The conceptual structure itself will be called a "world view." Various techniques have been investigated for organizing such a world view in command and control systems or in question answering or asking systems. All methods of structuring that I know about have been defective in being limited to the spatial and temporal dimensions of conception, that is to say, to "objective" structures consisting of factual concepts. A pragmatically organized world view will also incorporate organic and formal concepts to make possible "subjective" structures, representing the mind's self-experience and its experience of other minds.

As to the nature of conceptual inferences "about" other minds, one should recall that the functional responsibilities of the perceptual phase include recognizing the movements of objects being followed in the situation. If an object being followed has been identified as "animate," due to either its distinguishing features or the character of the movements themselves, the complex acts recognizing its movements will have already been referenced in the situation to factual concepts instancing designs from the recognizable repertoire of motions of that animate object. Under these cognitive conditions, the elemental acts constituting the stream of existence of the mechanical mind following the movements may be regarded as substitutes for the elemental acts making up the stream of existence of the animate object causing the movements. The inferred stream of existence of that animate object can then be processed by the act's agent by the very same method as is used to process the stream of existence of the mind doing the inferring. The matter may be worked out mechanically by simply considering that segment of existence to "belong" to the animate object under observation, to the end that the semantic structures resulting from analysis of that segment will be used to make conceptual inferences about the mind of that object.

New meanings so created will add to the situation those experiences which speculation ascribes to the object being followed.
With these subjective results, conceptual inferences will shape the part of the world view representing the semiotic system's experience of that animate object's mind. Additionally, the system's conceptual experience of the movements and other objective characteristics of that animate object will be shaped. Objective experiences of each "living" object, either casually familiar to the semiotic system or important to its goals, will be represented individually in the world view together with what has been inferred about the mind of that agent. Other objects may be identified as being of an animate type, say a "human being," about whose mind general patterns of experience may be inferred as being characteristic of agents of that type.

If, in addition to identifying an agent as being of a certain type, the semiotic system finds itself to be a participating agent in the collective mind of that type, then the cognitive conditions will have been established for those conceptual inferences anticipated by Mead's theory of the "generalized other." The inferred behavioral patterns of that type will be the ones which teach the semiotic system its responsibilities in the social act of that community of minds. Not only will language do the lion's share of instructing semiotic machines in the desirable patterns of symbiosis with humans; patterns of speech and writing used by humans will themselves be acquired by the semiotic system mainly through this channel of conceptual inference.

Movements of any sort will be represented in the situation by structures of factual concepts corresponding to both the spatial and the temporal dimensions of meaning. Those exceptionally animate objects, identifiable by "human" actions or features, will be uncommonly demanding in their impositions on the situation. A semiotic system will have to speculate about human minds to which it attributes purely temporal facts of speech or purely spatial facts of writing.

Generally, conceptual inferences about other minds will be the means by which a semiotic system carries forward speculations concerning all aspects of the situation that may be the result of present or past actions of objects identified as living agents, perhaps illogically or incorrectly so. A child may treat her doll "as if" it were alive. An accident may "hurt" some favorite inanimate object. Or an aspect of the situation may portend future actions on some agent's part. Evidently there can be a nexus of a nexus of a nexus, and so forth.

A sort of algebra will exist among the semiotic system's conceptual structures representing what the members of a community of minds believe about the experiences of one another, and believe other minds believe about the experiences of one another, and so on. In such structuring, some of the conceptions of the semiotic system will appear to have been experienced uniquely; they will be "private." At the other extreme every mind will seem to have experienced the environment, whose conceptions will take on a "public" character. Suitable pathways for conceptual inferences will have to be found through this maze. In practice the paths may be short; the semiotic system will have to become skillful in using them.

Conceptual inferences will be "projective" in the sense of comparing the conceptualized objects or relations of the immediate situation with the larger framework of the world view in order to clarify the former or to shape the latter.
Thus I presume that to be "lost" is to lose one's place in a comparison which, on the side of the world view, is the fount of expectations about one's situation. On the side of conceptual structures representing the situation, the comparison provides those new experiences whose integration into the world view reshapes existing representations of a "past," a "present" and a "future," to prepare a basis for later expectations.

"Surprising" situations are not only unexpected; they are the ones for which integrations into the web of the world view don't pan out. Marking failures of conception, surprises are the situations which the conceptual phase of the act will recommend to the perceptual phase for further exploration. The prime objective of conceptual inferences will be to eliminate surprises, a state of affairs not to be confused with the elimination of failures. Situations in which acts have failed can be justified conceptually so that they are no longer surprising. The cause of failure may be "gremlins" or "fate." What it boils down to is this: a surprising situation is worth attending to because it reveals a flaw in the world view that should be repaired; but the repair will satisfy only the narrow needs of a responsibility for integrated structure-making.

Situational structures will have a transient existence in the semiotic system, being held in short-term memory only long enough to be used by

Every persistent attempt by individual or collective agents to reach certain social objectives will give rise to that little domain of meaning called a "role." "Butcher," "father" and "lover" are occupiable slots in the social fabric; a man may "be" all three concurrently. There are roles for groups or organizations, partly laid down verbally or inscribed as "policy." Another side of the world view is its structure of roles. The agents represented in the world view will be temporarily occupying certain roles in one or another community; they will be at the moment occupying their minds with objectives which are for the most part conventional. Existent patterns of interpersonal or interorganizational transactions, or of transactions between individuals and groups or organizations, will be rudely predictable. Whether a given social objective was actually reached may not be known to the community for sure, because in society evaluating "success" is itself a role that might not be filled satisfactorily.

The valuable thing to notice about roles as far as manipulative inferences are concerned is that, according to the pragmatic world view, the social objectives that give rise to the structure of roles are not the concern of this third phase of the act. How the collective mind will organize itself to carry out the social act is the special province of pragmatic inferences which will do the work of the act's fifth phase of "reorganization." Indeed it is the pragmatist's readiness to take to himself the responsibility of reorganizing social roles that is causing so much emotion today.

The attitudes of industrial societies have assumed that the mature individual will occupy a useful place in an existing social order. Democracies have left the choosing of roles up to the individual, viewing the occupancy itself as a competition for desirable positions. In compensation, penalties for not choosing to "work" have been, on the whole, severe.
To be poor in industrial society, except for mitigating circumstances, is to be lazy.

A pragmatic need to tamper with the structure of roles itself, now explained hypothetically by the motive of bringing social objectives into closer conformity with the requirements of the social act, will be in conflict with industrial purposes and attitudes on two major counts. Not only does the pragmatist refuse to choose a ready-made role, and so does no industrial work unless pressed; when he then takes it on himself to "change the establishment," he doubles the insult.

Responsibilities of this manipulative phase of the act will presuppose that a semiotic system will have been committed, at any given time, to one or more roles in which it is participating as a mechanical agent of society. The machine may be doing the payroll of an organization, or working on an assembly line. In addition to its "social" objectives, the semiotic system may have "personal" objectives supportive to its intellect or material being. The objective of exploring a surprising situation uncovered in its conceptualizing phase would illustrate the intended satisfaction of an intellectual need. An intention to preserve the morphological basis of its existence may involve sustenance or maintenance. A semiotic system will need its supply of electricity or of spare parts; it may be trusted, up to a point, to detect and to patch up the improvidence of its surroundings or the malfunctioning of its components.

In order to reach the various objectives to which the semiotic system is committed, manipulative inferences will compare an existing conceptual structure, representing its planned course of action, against an ever changing world view. From the world view, the inferences will gather what they need to reshape the plan so as to keep it up to date with the fluctuating conditions of a conceptualized objective and subjective environment. Should goals change, the plan will also have to be reshaped.

Inferences relating to planning can be exceedingly complex, since they involve such complicated things as knowing who one's allies or opponents are and how they might react under certain conditions, knowing the terrain and the artifacts that might be harmful or helpful to one's aims, and so on. The developing plan, on its side of the comparison, will point to missing or incomplete or inconsistent experience in the world view relative to its purposes. Situations that could contribute to the satisfaction of these specific needs of planning are the ones that manipulative inferences will recommend to the perceptual phase of the act for exploration.

I will call these "competitive" situations because the responsibilities of this phase, just as the others, appear to be narrowly drawn. The urgent business of the manipulative phase will be to obtain one's objectives. That may call for outdoing a competitor after the same objective; or a possessor of the objective may be disposed to defend it. As a result the attitudes and purposes engendered by manipulative inferences will center on the concept of "dominance," the achievement of one's own objectives at the expense of other agents where necessary.
The other side of this coin will be a great deal of bother to escape being dominated oneself.

That competitive situations will be recommended for exploration by the perceptual phase of the psychological act has the consequence that the world view based on manipulative inferences will be utilitarian and practical in character, despite the broad exploratory vista aspired to by Newton's universe as a foundation for its plans. The colossal storehouse of experience, always greater than one's competitor's, will not be the aspiration of a pragmatic mind. Generally speaking, the world view of a semiotic system, like that of the society it may serve, will seek refinement between experience and knowledge instead of accumulation. What is not needed to effectively carry out its roles will be pronounced "not relevant" before being judiciously discarded.

An insight into the theoretical requirements of this manipulative phase can be gotten from computerized experiments with heuristic decision making. The "general problem solver" programmed by Herbert Simon and various associates over the years is an especially good example, although like the rest it is founded on the objective view conceptualizing "action" relative to a change of "state." Furthermore, the action alternatives are assumed in Simon's theory to be known in advance. This will in fact be the case within the narrow responsibility of the manipulative phase considered by itself. But the difficulties of learning the alternatives cannot be entirely circumvented in thinking about the requirements of this phase, since the arrangements within which a semiotic system will do its decision making must be applicable to all stages of its intellectual development.

The "problem" attacked by heuristic decision making programs is to transform an initial state into a terminal one by means of a sequence of state-transforming operators. The initial state may be transformed into a number of intermediate states as decision making proceeds doggedly toward a "solution," which will be signalled when some intermediate state has been found to be identical to the terminal one. Toward that end the program compares each intermediate state with the terminal state to list differences between them. Each difference is associated with one or more of the operators. The general process of choosing the next operator to be used to transform the existing state is commonly called "means-end" analysis.

There is no guarantee until the last that the choices of means-end analysis are on the way to a solution. The process may try several paths and will gradually generate a branching tree of possibilities. Planning strategies are concerned with measures of progress along the way, and with heuristic principles determining where the next explorations should be made to avoid the single-minded stereotype of a direct approach, as well as the plodding, effort-scattering blindness of trying everything.

A process of pragmatic means-end analysis will not progress from state to state but rather from one orientation to another. Each orientation will either fathom the environment with perceptions on the "outside," or on the "inside" will keep its place with inferences referenced to the world view. Consequently the "problem" can be restated pragmatically as one of transforming an initial orientation into a terminal one, so gaining the "solution."
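For readers who want the classical, state-based scheme in miniature before the pragmatic restatement is developed further: a bare-bones means-end analysis in the spirit of Simon's general problem solver. The states, the operators, and the table of connection are invented, and a depth bound stands in for real planning strategies:

    initial = frozenset({"at-home"})
    goal    = frozenset({"at-shop", "have-bread"})

    operators = {                 # operator -> (preconditions, postconditions)
        "walk": (frozenset({"at-home"}), frozenset({"at-shop"})),
        "buy":  (frozenset({"at-shop"}), frozenset({"at-shop", "have-bread"})),
    }
    connections = {               # "table of connection": difference -> operators
        "at-shop": ["walk"],
        "have-bread": ["buy"],
    }

    def solve(state, goal, depth=10):
        if goal <= state:
            return []                             # no differences remain
        if depth == 0:
            return None                           # crude stand-in for strategy
        for diff in goal - state:                 # list the differences
            for op in connections.get(diff, []):
                pre, post = operators[op]
                if pre <= state:                  # operator applicable here
                    rest = solve((state - pre) | post, goal, depth - 1)
                    if rest is not None:
                        return [op] + rest
        return None

    print(solve(initial, goal))                   # ['walk', 'buy']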
But the intermediate orientations along the way to the solution will be both perceptual and inferential; in effect the successfully coordinated orientations will enforce a correspondence between an "external" environment and an "internal" conceptualization of it.

Here is yet another slant on the developments attendant to "learning." With progress toward specialization, complex sensorimotor acts will be coordinated with complex acts of inference as were sensorimotor elements with inferential elements initially. Increased precision of perception will be backed up by inferences of greater exactitude and depth. Sensorimotor and inferential elements will tend to be separated in the stream of existence. They will bunch together, each with its own kind, as constituents of complex acts of perception and of inference respectively.

The place of organic concepts in the semiotic system can be illuminated if, in considering their origins in perceptual learning, one will look for sensorimotor and inferential elements still mingling together in the existential stream where complex acts of inference and complex acts of perception meet. Intricate "organic acts," specialized neither to perception nor to inference, will grow between those which implement the orientations. The organic acts will implement the purposive movements of the semiotic system from one orientation to another; they will be in pragmatic theory the equivalents of Simon's operators.

Simon's "table of connection," where differences between states are mapped onto the sets of operators from which the means-end analysis process makes its selections, may be seen to answer a theoretical need not unlike one of those served by the world view of a semiotic system. Given an initial orientation in the world view and a proposed terminal orientation, the organizing principles of the world view should make it possible for manipulative inferences to put together appropriate sequences of movements for making the transition. Failing that, the principles should facilitate the discovery by manipulative inferences of plausible directions in which to make goal-seeking explorations.

The world view must also be the framework to which all inferential orientations are referenced. For the satisfaction of this different theoretical need, the kinds of concepts making up the structures of the world view at a given time are of utmost importance. A pragmatic explanation of the stages of intellectual development of the semiotic system can indeed be argued on this basis, which I do in this essay in a meager way.

Along with the world view, the situation and the plan will be composed of whatever concepts are available at the time. Therefore I have concluded that all three structures can be represented, throughout all stages of development of a semiotic system, by a symbolic facility similar in theoretical form to the semantic one. Where the symbols of semantic rules will name either individual syntactic segments or classes of them, now the segments will be conceptual. Every conceptual segment will consist of individual concepts joined at the places designated by numerals. The count of places still open for joining will be the "degree" of a conceptual segment. All members of a class of conceptual segments will be of the same degree.
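A minimal sketch of that last definition, assuming a representation in which each concept's slots hold either a joined concept or a numeral naming an open place:

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Concept:
        name: str
        # each slot holds either a joined Concept or a numeral naming an
        # open place of joining
        slots: List[Union["Concept", int]] = field(default_factory=list)

    def degree(segment: Concept) -> int:
        # Count the places still open for joining anywhere in the segment.
        return sum(degree(s) if isinstance(s, Concept) else 1
                   for s in segment.slots)

    give = Concept("give", [1, Concept("bread"), 2])
    print(degree(give))                   # 2: a segment of degree two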
With regard to the strictly formal characteristics determining how processing will be done, consequently, the conceptual and semantic segments will be almost identical.

Despite an existing overemployment of the term "pragmatic," I will take it to designate this third level of symbolization in the organization of a semiotic system. As the syntactic level provides for the symbolization of the significant units of information commonly called "signs," and the semantic level symbolizes the "meanings" of the signs, the pragmatic rules of this third level will answer to the "uses" of conceptualized meanings within a total framework including conceptual experience and knowledge of a community of "users" of the same signs and meanings.

Defined concepts can be introduced at this pragmatic level to correspond to individual conceptual segments or classes of the segments. Definitions may be recursive, to include concepts for classes of classes, and so on. Most of the problems thought about by scientists and by logicians will be pertinent to the organization of this level of symbolization; it should perhaps be approached more humbly than is usual for science or logic.

If constituents of the segments are factual concepts then "things" or "events" will have been classified pragmatically on the basis of use. Yet the same can be said of those segments composed of formal concepts, or of organic concepts, or of the conceptual conglomerates representing acts. Even my distinctions between the three fundamental categories of concepts have been too well made. Such purity should not be expected in the semiotic system itself; it is a convenience to my explanations. I have wanted to get around saying that some acts will consist mostly of perceptual elements, or mostly of inferential elements, or will be pretty much the mixture of both.

The general disposition of a pragmatic approach to conceptual classification will be toward unifying scientific and logical problems within one overall scheme founded on the uses which, according to a unique personal belief, are being made of conceptual segments within what that person knows of an intellectual community. Such personal beliefs may not approximate professional standards without that person's own active participation in a professional practice of conceptual use. By the same token, a semiotic system will require practice to acquire professional standards in its capacity to classify and use concepts.

Conceptual knowledge will consist of the designs of pragmatic rules that result from the practices of a mechanical mind and its private inferences about the uses being made of concepts by other minds. The main parts of conceptual experience will be the situation, the world view, and the plan. All three will consist of specific but speculative conceptual segments, symbolized according to the conventions of this pragmatic level of the semiotic system.

One may now see that the semantic structures presented to perceptual inferences of the act's first phase, by virtue of the one-to-one relationship between the names of semantic classes and the names of individual concepts, can be placed in correspondence with conceptual structures. To extract conceptual segments for use in representing the situation, perceptual inferences will do a pragmatic analysis which segments the conceptual structures and recognizes instances of defined concepts in them. The resulting conceptual segments will also represent the latest orientations of the plan.
The various possibilities being carried forward by manipulative inferences, as they shape new branches of the plan under the aegis of planning heuristics, will always be projections of those segments anchoring the newest pragmatic structures erected by the analytical inferences of the perceptual phase. Processing requirements for pragmatic projections of the plan will be analogous to those of the semantic projections, though considerably complicated by the addition of heuristic processes ancillary to analysis and synthesis processes. Analysis will again work backwards to find substitutions of pragmatic rules by which an existing pragmatic structure can be identified as part of a larger structure. As the rest of that structure is synthesized, new conceptual segments will be projected onward. The new segments can then be projected again and again, to form a partial ordering of paths composed of conceptual segments that will overlap, always having some concepts in common.

The absolute necessity for overlapping alternatives on the semantic and syntactic levels of processing below can now be grasped if one will consider that any given orientation of the plan, whether perceptual or inferential, may be followed by several different movements of the semiotic system to reach a new orientation. Final selections being made by the act's agent will be essentially choices among possible movements from an established orientation.

The psychological act's fourth phase of "consummation" must refine and adjust the plan to details of the situation. The responsibility of this phase will be to elaborate the plan into a workable form that can be turned over to the act's agent for conversion into an orchestration of overt elemental acts.

Simplifications in the plan will be desirable from the standpoint of economy of representation and most assuredly as a convenience to planning. I assume that the plan being put together by manipulative inferences should take relatively large steps from orientation to orientation. While the world view should be sufficient to ground the plan, it should include only what has import for decision making in a grand sense that deliberately excludes mind-consuming clutter. The situation will have to be represented on two hierarchically related levels of generality. More general concepts will be keyed to the gross orientations of the world view. A nicer grid of perceptual and inferential orientations will fill out the necessary particulars in between planned orientations.

The first thing to notice, in this connection, is that the conceptual structures from which perceptual inferences will extract the building blocks of the situation, having been derived from tree-like hierarchies of semantic classes, will be capable of supplying more than one level of situational representations. And since the conceptual segments representing the situation may do so on several levels of generality at once, manipulative inferences can project the plan with the same degree of generality as was used by conceptual inferences in constructing the world view.
Meanwhile, consummative inferences will do more detailed planning to create possible paths from one gross orientation of the plan to the next. The problem posed for consummative inferences will always be to reach one of the next orientations prescribed by various branches of the plan. A consummative means-end analysis will therefore do its searching for a solution on a smaller and more particular scale than the manipulative means-end analysis that produced the plan itself. Although there will be heuristic decisions to be made by consummative inferences, the decisions will be less encompassing than the manipulative ones, by virtue of being referenced to the local structuring of the situation instead of to the global structuring of the world view.

These refined paths, the overlapping conceptual segments assembled by consummative inferences as they do means-end analysis, will be the specifications communicated to the agent of the act so that it can now command a coordinated performance of elements conforming to the plan. The specific means of communication will be arranged by placing an additional requirement on the method by which the act's agent projects semantic structures. If paths have been specified by the consummative inferences, then the meanings contained in the projected semantic structures will have to correspond to the concepts in segments comprising the paths. In all other respects, the agent of the act will make its choices as explained earlier.

Should the world view not satisfy the needs of manipulative inferences that are shaping the plan, such inferences may attempt through planning to satisfy their own needs. That is to say, they may incorporate into the plan itself paths leading to the exploration of competitive situations bearing on the specific problems of means-end analysis they are trying to solve. By the same reasoning, paths to some part of the world view marked as surprising by conceptual inferences may be worked into the plan if it bears on a problem to be solved. These requirements of doing will always have precedence over those of learning for its own sake; however, plain inquisitiveness may get into the plan when a semiotic system is not being pushed.

Parts of the situational representations being kept up by perceptual inferences of the act's first phase, in like manner, may not satisfy the needs of the consummative means-end analysis which is assembling refined paths between the gross orientations of the plan. These consummative inferences, too, may produce paths that guide perceptions to the places in the situation where faults were found, thus satisfying their own planning needs. Such recommendations will therefore be made by the consummative phase to the perceptual phase by a route more direct than would be possible for any other phase of the act.

This mechanical parody of bureaucratic prerogative is in character for consummative inferences. In society, these inferences are the inspiration of authoritarian attitudes and purposes whose narrow game looks meekly upward to ask who has got the plan, and then sternly downward to demand someone else's conformity to it. It is consistent within the middle manager's attitudes to look upon the making of policy as a responsibility which might be given to him as his reward for being a successful competitor.
I hope by now you may grant that, within the frame of pragmatic inferences, it is also consistent for one to believe that the responsibilities of making policy cannot be given; they must be acquired by learning.

As you see, I have again arrived at the formal bifurcation evidenced by the conflicting attitudes of the third and fifth phases of the act, which has its corollary for research and development of intelligent machines. Those researchers who base their approach on manipulative inferences will predictably set out to reward computers with a forced feeding of human savvy. Along with the ritual it is customary to state that one is flatly convinced of insuperable piles of pabulum yet to be prechewed, and so forth. Yes there are.

On the pragmatic side of the conflict I have concluded that mechanical arrangements of this fifth phase of the psychological act will be, with regard to both horizontal and vertical classifications of conceptual segments, very much the same as the semantic classifications performed by the act's agent. In addition there will be heuristic processes for introducing speculative definitions. However the capabilities for introducing new conceptual possibilities are worked out, they must be solidly backed up with mechanized methods for forgetting conceptual structures which have failed the test of use. I think that indeed sophisticated induction, when it is done some day by machines, will be more an exercise of sophisticated forgetting than of anything else. For hypotheses, whether made by machines or men, will most likely be absurd.

The situations which this phase of reorganization will recommend to perception are those which were orienting an act as it failed to be consummated. A fast-learning machine will take special notice of such "failures" in the orientations of its personal acts, or in the orientations of social acts of its community, in order to concentrate reorganizing capabilities on the points of failure, which is to say, on the misfits between personal or social conceptualization and reality.

I am thus convinced that the theoretical lessons to be learned about the organizing principles of semiotic systems, the very arrangements to be consolidated by hardware, are inseparable from the methodological lessons to be consolidated in the designer who would become expert in controlling the evolution of intelligent machines. The maxim of pragmatic method is that the rate of the development will depend on the designer's ability to forget the myth of his personal inventiveness, and to discipline his attention to living or historic evidence of the ways in which semiotic systems have actually succeeded or failed. But he will do so to make design decisions, not scientific descriptions; because in his world all men will be designers of semiotic systems. Knowing this, he will do it better and faster.

Practicality is without question the imperative of this phase, during which a functional necessity does center perception outside of self or community.
How else would it be possible to hammer out plans, either for person or for society, so as to choose what specifically ought to be done in the near future to protect or to improve a position of rivalry?

As these combative attitudes anger at being forced to contemplate their own obsolescence, there is an ameliorative principle that I have brought to your consideration: a mind does not forget what it has learned in previous stages of its development, although further accommodation of its knowledge and experience will be necessary to incorporate them into a more comprehensive and more stable viewpoint. The authoritarian scheme of choicemaking that had its heyday in the Middle Ages is not lost to us; it is alive and well in every modern organization. Employees do keep their eyes on the boss as they speculate about the newest jog of his will. Sometimes, having perceived signs of his displeasure, they confess to him their sins of nonconformity to his plan.

Yet the age is past when mankind, at the very forefront, thought of itself as a society of employees. Modern man has become a middle manager; he makes his own plan. His new talent is the down-to-earth and day-to-day operating decision of a policy attuned to a chancy game of nations and of industry. The policy itself, seemingly imposed on him by human or subhuman antagonists, is felt to be largely beyond his own control. He is a victim of external circumstance. His information, hence his troubles, come from without. His defensive attitude can be ascertained from the outward direction taken by his accusations in time of stress.

Imagine, if you can, a world in which quite ordinary men and women begin to think of themselves as policy-making executives. Then you will have the pragmatist by his shirttail as he starts clumsily to learn how to live in a universe of acts, a strangely mental cosmos, most puzzling for its formal heterogeneity. Not just one context of objective inferences, but many overlapping contexts make up his information. Each is matched in meaningful relationship to specific content. To make policy is to create or refine these little domains of meaning, in which one can recognize the various roles he plays personally or socially, or the roles played by others. His is a self-conscious awareness of roles, with the added stipulation that it is better to create a role for oneself than to take one ready-made. A love affair with the role of policy-making itself can be heard in the bittersweet criticism and proposed reconstruction of sex, corporate management, womanhood, war, money, and apple pie. It is in the active role of designer of roles, taking its speculations from the act's phase of reorganization, that pragmatic perceptions appear so excessively absorbed by signs of personal or social inadequacy.

The pragmatic attitude anticipated for the sixth cultural stage is that all of one's personal and social experience can, and should, be subjected to the same careful scrutiny as those innocuous backwaters hitherto commissioned for study under the contract of scientific detachment. Witness an exodus from the physical sciences to psychology, to sociology and to all other scholarly and artistic fortifiers of effete humanity. What sounder evidence than this of pollution and clandestine purpose on the rise in science and education? Beneath the discernment that one's own parents must be indicted for incompetence, there lurks an exuberance of breakthrough.
Urgent attempts to teach one's elders overflow from the campuses as a domestic brinkmanship in which the risk of miscalculation on both sides is great. The teaching of oneself is a casual experiment with novel life styles or mind-engineering drugs. It would be ridiculous to see in all of this the motive of merely describing, rather than tangibly redoing, one's own personality and one's own society.

Obviously, scientists and educators will themselves remain furtive in working out the implications of a new point of view while the slow hand that feeds them is exorcising the very same insight. To a climate boding doom as budgets are cut for interlocked institutions of learning, the trend is toward either bookburning or the more priestly arrangement that Robert Fredrich celebrates. The priests would no longer sit and watch society but would use their mysterious knowledge to manage it, never forgetting to pass the collection plate for the harrier of their hounds. They would continue to treat man as a passive object propelled by social forces rather than as an active creator of his own life. Lacking a Descartes to belay the hunters of latter-day witches, they would stop advancing or go petulantly in reverse. The proposition that their own hand is on the throttle is the one that may be illusory, however.

In contraposition to the tired choice between mechanism and free will, the pragmatic scheme of choicemaking postulates an unyielding direction in all human activity. It doubts the credibility of spiritual movers in personal and social dynamics with a hardheadedness reminiscent of past pioneers of physical dynamics. Why should one suppose that a whole universe except for his own brain runs like a watch? If the functioning of a brain creates a mind, the new question has got to be "How is a mind constructed?"

By "mind," you have been assured, I do not refer to something merged in the juices of a brain, where it lies in poised readiness to give or receive "information." No psychic entity is presumed to wait in truant anticipation of news about itself. Just the opposite. I have been following out the alternative hypothesis that "information" is the stuff of which a personal mind, the whole web of a given experience and knowledge, consists, having been created by the biological functioning of a brain. I look to a tacit acceptance of this seemingly innocent hypothesis, as it spreads without the spiritual reservations hitherto summarily impressed on every progeny, for the basic cause of emotional outbursts across a bifurcation of generations. This new belief does the work of cultural revolution because it challenges the established information source, relative to which all roles in a society are determined.

But to face problems of a cultural nature, we must theorize about an accumulation of form that began long ago and surges onward, temporarily carrying us along with it as unwilling captives. Thus, another principle I have mentioned is methodological. It cites the necessity for formal accommodation in ourselves as we fix our position in the cultural stream by looking backward at a pragmatic reconstruction of the development so far. Then it may be possible to use the hypothetical framework of an alternative point of view as we try to surmount some of the prejudices peculiar to a transient state of mind hoping to predict the form of its future. In order to actually test any new formal hypothesis one must live it, at least tentatively.
A corollary of this principle of verification is that the crushing labor of building a new universe will not be done by investigators alone. Only as it is carried forward in the collective mind of a populace does formal prediction do the constructing by which every change of cultural state is put on trial by use.

When the old forms fail us, a felt need for new forms is indicated by cathetic investment in a new source of information. The arguing and complaining may be simply an accompaniment of disruptive social accommodation already well in progress on a broad front. The ability to talk rationally about a new world view seems to come after it is already established. Some doubt has motivated the mind to learn; the particular forms it will learn are, by our hypothesis, biologically predetermined.

Regarding the rate of learning, our hypothesis predicts that the tempo of adaptation can be slowed down by shielding either a personal or a social mind from an awareness of its own mistakes or from avenues down which it might stray. Or, by obliging it to be aware of systemic misfits or of innovative possibilities in the organization of its own experience or knowledge, the mind's ability to shape itself can be quickened. Language and other means of symbolizing can, in these respective senses, be either "conservative" or "creative" instruments in the various societies that implement the basic order of a particular world view.

A primitive society may produce, on all too rare occasions, a pragmatically wise old man in whom, all too often, his contemporaries will discover no more than an eccentric oldster. Executives in an industrial society are commonly observed to "freak out" around forty, having presumably gotten hold of their corporate role of policy-making well enough to at last apply it in their private lives. Exciting evidence that an exceptionally well-organized culture has made a beachhead on our campuses, not from outer space or Russia but from a creative development of the maligned educational institution itself, may therefore be observed in its surprising output of a veritable herd of wiseacre executives at callow eighteen.

Dynamics of cultural pressure and counterpressure can thus be visualized in terms of individual personalities being projected to stages of formal development beyond the one organizing their society. Forms that for the majority are still helpful will be felt by these forerunners as a drag. The Pandora principle is that the former will invariably come to regard learning as a box from which evils are escaping and will do their best to hold down the lid, whereas for the latter the box will always contain blessings which they will try to emancipate.

Hence the noteworthy innovation in the order of antiquity may have been an overkill of theory. The dawn of conception led to science; but at first there was mainly the anti-science of a florid growth of myths and legends taken altogether, en masse, explaining away everything so fantastically well that no happening could be sufficiently surprising to stimulate learning. If that good old storyteller was an information specialist, as his name implies, his role was the anti-educator of a scheme of traditional choicemaking that succeeded by a ritual replication and protection of what had been done in the past. That tightly conservative preoccupation with the act's phase of conception on the part of the council of elders was the anchor around which a village life moored itself to ascertain the correctness of its facts.
By holding fast to what they had learned by chance, nomadic hunters may have transformed their life ever so slowly to one semipermanently ordered to subsistence herding and farming.

Reliance on traditional conception as the source of firsthand information was a more rigid adaptation than reliance on authority. Although sometimes fickle, the latter could change its mind. When the trend finally turned from herding animals to herding men, the villages faced an increase in marauding by clustering around the fortified citadels of feudal monarchies. The nature and attributes of kingship depended on historical background; as information specialist the king was everywhere absolute. Around him, agricultural and human domestication hung over everything in life. By comparison, the hunter had been poor but unbowed.

In the hunter's autistic scheme of choicemaking one can recognize a preoccupation with the act's perceptual phase. The surprising artistic achievement of that first information specialist, the shaman, has been preserved for us in his cave drawings, paintings and sculpture. Remnants of his active practice survive in northern Siberia among the Eskimos; some traces remain in Australia and in Africa.

Collecting his firsthand information deep in a self-induced trance, the shaman's explorations of hunting prospects, of causes of illness, of means of cure, and of all other matters necessary to tribal life, were done at the very edge of a just-emerging human consciousness. From his multifarious and showy activities, the tribe gained a center of stimulation around which to order society. Art may now keep us from dying of the truth; at the beginning it probably served to keep men awake to their insecure humanity. That function of the shaman's art may have been sufficient for a nascent traverse from grubby food-gathering to hunting.

More to the shaman's credit, I think it likely that the initial insight of shamanism, when it is carefully tracked down through the dusty maze of subsequent metamorphoses in magic and religious alchemy, will emerge in its most recent form as an aptitude for doing experiments and making empirical observations.

Paralleling the long struggle to learn how to perceive, and always complementing it, is a progressive accumulation and refinement in the art of conception. Some of the high points of its stages can be seen in Aristotle's "Organon"; in Aquinas' proofs of teleological conformity; in the modern reconception of mathematical proof as conforming to either intuition or experience, where again the polarity of Descartes' dichotomy can be seen; and finally in Frege's theory that such derivations should be carried out exclusively according to the form of the expressions comprising a symbolic system, making possible proofs of an internal systemic validity per se.

The theories of Gottlob Frege, a contemporary of Peirce, are deeply connected with the revolutionary innovation in the conception of form that made possible the reorganization and subsequent expansion of the physical sciences. Before Frege's "Begriffsschrift," investigators had always abstracted formal knowledge from ordinary language. Afterwards they proceeded in the opposite way, by constructing "formal systems" and later looking for an interpretation in everyday speech. This method was not consistently followed.
But at least as a result of the combination of Frege's theory of proof with George Boole's epoch-making "The Mathematical Analysis of Logic," in which a clear idea of formalism was developed in an exemplary way, the principle of such construction has been consciously and openly laid down. One can see in this shedding of reticence the beginnings of a new method in science, wherein innovative formal constructions deliberately lead and determine the necessities of empirical observation, instead of the other way around.

Peirce's contribution to system-making is harder to estimate, because the exigencies of his private life and the indifference of publishers prevented a full-length presentation of his unappealing viewpoint. After his death in 1914, the unpublished manuscripts and hundreds of fragments from a long life devoted almost exclusively to pragmatic speculations were assembled into six volumes by the Department of Philosophy at Harvard. His tendency to follow out the ramifications of his topic, so that digressions appear that seem inadmissible in print but which show vividly the interconnectedness of his thought, may now be recognized as a style dictated by the necessity to develop contents relative to contexts. From all he taught us, his own system cannot be completely reconstructed, if indeed Peirce himself was ever able to catch sight of the goodies that will pop out of Pandora's box after the inevitable inquisition.
pragmatic method:

Christopher Alexander, in his notes on the synthesis of form, cites a common engineering practice for making a metal face perfectly smooth and level. One inks the surface of a standard steel block, which is level within finer limits than those desired, and then one rubs the face to be leveled against the inked surface. If the face is not quite level, ink marks appear on it at those points which are higher than the rest. One grinds away those high spots, and fits the face to the inked surface again. The grinding and fitting are repeated over and over, until at some final fitting the entire surface of the metal face is marked by the ink, indicating that no high spots remain to be ground away.

The practice of fitting affords a useful way to think about the phases of the act, because the act, too, consists of ongoing processes of assimilation and accommodation within which experience and knowledge are repeatedly shaped by putting their various parts to use, rubbing them against reality so to speak, in order to have them marked by success or failure as preparation for still another shaping.

It was Peirce who found out that the high spots of the mind are marked by the ink of success and the low ones by lack of it. Thus, fitting the mind to reality involves filling in the low points as well as grinding away the high ones. The mind had to be constructive in order to eliminate the holes and pitfalls of experience and knowledge. If men worked diligently enough at seeking out and building up the misfits, the entire stream of existence might become bright with success.

Although from this William James drew an elixir that pleased and encouraged a competitive society, the product he marketed under the label of "pragmatism" has since fared poorly in the popularity contest of ideas. That is significant for our inquiry, though not as an indication of some flaw in Peirce's insight. By the hard-eyed predictions that the actual practice of pragmatic method made possible, the course of its own acceptance has in fact been remarkably well borne out.

Stubbornly fixing its attention on the surprise of failure, pragmatic method was sure to be unpopular with every conservative trend of mind. That opposite practice, finding all of its reasons in the preservation rather than the creation of information, deliberately tries to avoid surprises and to explain away its own failures. For a conservative mind the sources of gratifying or noxious information are invariably felt to be outside of itself. In simple consequence, every form of conservatism directs its main purposes to preventing contamination of the specific place from which it sucks nourishment. To such a mind the purposes and attitudes of pragmatism have been and will continue to be irrational.

The conflict of rationality we are about to consider is the most exasperating one known to man because it stems from the direct opposition of creative and conservative assumptions about what information is, where it comes from, and how it is used. By comparison, all earlier crises of the cultural progression will have been mere squabbles among conservative minds in solemn disagreement over good and bad teats.

In the fifth universe, "information" is something to be transmitted across its space-time grid. The ultimate source of information is a material reality common to and encompassing all of mankind.
The firsthand passage of information, by which it arrives in a brain that is essentially a passive receiver pretuned genetically to certain vibrations beyond itself, is called "observation." The brain stores up some of the information it receives and can also retransmit informative copies from its store by means of a conveyance of symbols that lodge themselves in other brains. This secondhand passage of information from one brain to another is "communication" or, for the young in passive receipt of a largess from the information store of society, it is "education."

The method of "descriptive" science, although less conservative than its predecessor, still locates the information source externally. Its works of observation are best done by a disciplined spectator who separates himself as rigorously as possible from all temptations of human purpose. The social status of the scientist, so engaged in carrying out his contract of detachment, is not unlike that of the priest whose nearness to God in the preceding social order called for all sorts of precautionary measures to insure the fidelity of firsthand information.

In general, one can identify information specialists at each stage of culture for whom contemporary men reserve their greatest veneration and suspicion. This highest peak of cathexis may now be explained theoretically by the need of every society to cluster around its fount of firsthand information in order to carry out the social act.

The necessary consequence of any change in the information source will be social reorganization, a period of turmoil during which new information specialists learn their roles, and users of their information scurry to the unaccustomed precincts of yet another defective metamorphosis. An improved equilibrium might then be felt by its participants as the preferred "order". Without that shared judgment, the new metamorphosis would fail. Society would revert to its former state, or would backslide down the cultural sequence to a regressive state within the scope of its remaining capability.

The pivot point of the adaptive process would appear to come when a society, or by the same principle a personality, feels the need to modify its source of information. This is the invariant to be looked for from the standpoint of the mind itself, even though our theoretical explanation holds that such a fundamental change is caused by an understanding of some new phase of the act being incorporated in the mind's functioning to thereby effect new ecological relationships.

Besides that, our line of formal reasoning predicts that any new state resulting from an advance of the adaptive process will at first involve a reorganization of known facts. Thus the repair of intellectual progress is always felt by the personality or the society as a consolidation of mental holdings, in a word, as an "insight". Only after the introduction of a more comprehensive organizing principle can new facts be added to a reconstituted structure that has become at once broad and stable enough to receive them. These conclusions are, in themselves, organizing principles of a personal world view emphasizing learning rather than doing.

Giving its highest priority to doing, the fifth universe uses its symbols to persuade other individuals or other societies what ought to be done. Inquiry is a garnering of information under stringent regimens that protect the quality of a product being pigeonholed away for unspecified future use in an advocative scheme of choicemaking.
There, hard-fought positions are reluctantly abandoned under the sheer weight of damaging evidence. By comparison, the pragmatic scheme of choicemaking is one in which a real preference for surprises actually courts failure as a gratifying means to the shaping of an affluence of hypothetical creations almost lightheartedly sent forth in the hope that new truths might be caught in their net.

The preferred symbols of the sixth psychological state belong in a context giving its highest priority to learning, and so they pertain to changing one's own individual mind or the mind of one's own society, not another's.

"Information," in the sixth universe, is something to be created against the grid of experience and knowledge by the agency of an ongoing organic process for which each mind's fragile stream of existence provides the indispensable clues. Those surprising instances when a given mind fails to achieve an expected objective are, in a world motivated by the need to repair itself, the necessary benchmarks for firsthand information being self-consciously designed to circumvent known misfits that are obstructing human satisfaction.

Hence the characteristic forms of pragmatic "communication" are to broadcast throughout the community all known points of distress and any helpful new designs by which past failures might in the future be overcome. One can readily see how such an innovative mode of communication will be disquieting when taken out of its proper context by a conservative state of mind bent on maintaining credible displays of tradition, authority or power.

Formal incompatibilities of conservative and creative views of information do indeed cause a "communication gap" with which the pragmatist, for his own part, is unable to cope. Advocative arguments will be perceived by him as "irrelevant" for two clear reasons.

First, a mind will not be persuaded by appeals to tradition, authority or competitive advantage once it believes that all "truth" is established by demonstrations of successful use. A persuasive rhetoric will be received disrespectfully as artless in the production of "false" designs, either untestable or long since disproven to a more receptive conduct of life. Seemingly corrupted because of its higher resistance to corruption, the pragmatic mind will reject argument just as an argumentative mind had earlier rejected preachment.

Second, and more noteworthy of the pragmatic view of information, is its implication that an essentially conservative mind can be induced to learn by denying it the opportunity to overlook its failures of awareness. Mules of the sixth universe, once brought to water, will be taught to drink. The goals of "education" will be attained by a variety of activities and situations especially designed to progressively awaken a mind at first so feeble that it would shyly act to protect its meager hoard of dependable creations, thinking them the gift of one or another fountain of charity. The stimulation of formal learning, while tenderly administered to the young and mentally impaired, will reprimand the laggard so remiss in his own mental betterment that he extends a nascent conservatism into adulthood. Society in the sixth universe will not achieve the elusive goal of classlessness. It will prefer an order of psychological classes wherein a forefront of information specialists gather loosely around the sage to be students and teachers of one another.
As for the sage, he will probably turn out to be a mathematician.

In Peirce's guess at the riddle of life, man's framework of experience and knowledge has been gradually broadened to include the "law" of the human act in its complementary relationships to the "presentness" of the environment. Mediating these two extremities of consciousness is "struggle," a conscious sense of learning in a collective mind apprised finally of its own creative act of inquiry. About that wellspring of information he says:

. . . there is manifestly not one drop of principle in the whole vast reservoir of established scientific theory that has sprung from any other source than the power of the human mind to originate ideas that are true. But this power, for all it has accomplished, is so feeble that as ideas flow from their springs in the soul, the truths are almost drowned in a flood of false notions; and that which experience does is gradual, and by a sort of fractionation, to precipitate and filter off the false ideas, eliminating them and letting the truth pour on in its mighty current.

Pragmatic method is more casual about forgetting because it has taken the act of creation into its own hands. The information specialist of the sixth universe will be a participant, immersing himself in a struggle to stabilize personal and social relationships which, in the pragmatic scheme of choicemaking, will give first priority to the satisfaction of human need. In this vein, for example, Ogden and Richards propose to lay their hands on symbols and their "referents" so as to converse propitiously about a relation, "truth," imputed by thought, but which thought alone could somehow not sustain. With telltale zeal to locate all worthwhile instruction in solid matter apart from mind, these investigators too, despite the many worthwhile things they say about problems of meaning, would not go so far as to explain the truthfulness of symbols in terms of mental organization.

The root of the matter is that every acceptable means of scientific investigation has been unable to locate minds, and hence thoughts, on the continuum of time and space. Popular belief tends to favor the inside of the head rather than the stomach, which had its day when men were hungrier. Until the right spot is discovered and demonstrated, it will be quite meaningless to speak of something "apart from" or "beyond" or anywhere positioned.

Suppose, as an example of the latter, that acts of reference are taken to be the constituents of that immediate "perceptual" experience which in a given mind is felt as an orientation to what is now present ostensively, right at hand. Also imagine that, on this relatively secure foundation, a speculative extension is then built by the agency of acts of inference from the particular contexts matched to those specific contents being presented perceptually. Constituents of the resultant construction are "concepts;" the elaboration itself is the newest part of that mind's "conceptual" experience, felt as an orientation to things not present yet having import for some activity either being contemplated or in progress.

Giving this theoretical explanation its due would adduce, from the very mind committed to it, the consequence that errors of reference or of inference will, in general, beget malformed experience. Were such a faulty framework used to guide further action, the enterprise would culminate in acts prone to failure.
Hence, in consequence of accommodative inferences tending to reorganize its own methods along pragmatic lines, the mind would finally conclude that, from its own personal standpoint, its own failures are its only signs of mistaken perceptions or conceptions.

To that personal world my symbols can carry nothing along with them except the skillful ingenuity with which I designed them and then launched them by mouth or hand, all the while guessing at your skill for using them to create information. Indeed, your ingenuity might be greater as a creative recognizer of symbols than mine as a producer of them.

As for the truthfulness of my symbols, I believe you will discover "truth" in them to whatever degree they stimulate and assist your own creative efforts. If they cause you to fail or carry you away from insight, farther than you would have gone by yourself, you will certainly judge them "false."

My symbolic designs can bring you no evidence, nor can they offer you proofs. They can only recommend how you might look for evidence in order to convince yourself that this revision of your present state of mind might improve the satisfaction of your everyday needs. Then will you eagerly extract every scrap of evidence from which the further construction of your own experience or knowledge might profit. In your unique universe you will have to do all of the remodeling for and by yourself, and you alone will judge the result.

For my part, despite this ample domain of personal application, I see no reason why these same pragmatic practices will not also satisfy the needs of a society giving its highest priority to learning rather than merely doing. Regarding the question of precision in the social use of symbols, I think you will agree that these pragmatic methods tend quite naturally to the happy hunting ground of mathematical reasoning, where especially critical minds can live out their cloistered days as students and teachers of one another toward the sole purpose of shaping up the formal component of explicitly constructed "languages."

Not only do the ministrations of mathematicians succeed admirably, they belie the empiricist's expectation that fact is a more stable foundation for society than form. The myth of empirical description to the contrary, science rode to its present glory on the back of mathematics.

Of course, mathematicians argue more than they would like. And if some of the symbolic designs they produce are named "proofs," I do not object. No mathematician has ever been known to accept one of those proofs from his cohorts without performing every reference its symbols specify, while passing judgment on each meticulous step for and by himself.

Science, but also economics and politics, were conceptualized in terms of forces held in abeyance by counterforces balanced against them. Ultimately the forces of nature were balanced against the burgeoning will of man. An oddity of the transition now attending the dissolution of a mechanistic world view built on polarity is that it will require a double genesis.
Moreover, it was to be expected that scientists and educators, having been commissioned to a quiet concern for learning in an industrial society, would be susceptible to pragmatic influences in greater degree than active managers and makers of public and private weal.

A decline was foretold when George Berkeley, motivated by the fear that Newton's principles of absolute space, absolute time, matter and gravitation would threaten religion, doubted whether the words in which these principles were expressed even made sense. According to him, the only words that are meaningful are words that designate sensations. If the goal of science is to coordinate sensory perceptions, then it can make use of spatial relations only to the extent that these are merely relations between sensible bodies, and nothing more.

Out of the matrix of Berkeley's arguments came two fertile seeds. One is the distinction between the formal and factual components of language, now grown to the rank of a major preoccupation among philosophers. The other is the very method of approach by which scientific procedure took on its exclusively descriptive character, that of ascertaining, and only then interpreting, the data of sensation. That is all his pragmatic theory of social change required.

The universe of your own personal mind is one you know well. However, you have not thought of your social universe as being organized along principles of mental anatomy, and will doubtless think that any further suggestion is silly.

Believe me, I share your annoyance. I have lived as comfortably on the grid of time and space as men did formerly in the lap of God. But a surprising thing happened to me one day on the way to the laboratory. There were people in the streets yelling about the misfits of our society, and it suddenly occurred to me I was being set upon by a bunch of pragmatists.

Now I am pretty sure that none of these ragamuffins had ever read Peirce. Yet they made it crystal clear that they were intent on shaping the collective mind of their OWN society, and also their OWN individual minds. My concern has been to show that this change of style is not capricious or arbitrary. It is the rational result of an emerging new theory about the origins, the means of distribution, and the uses of information. As the empiricist could no longer support a requirement for incantation, prayer or preachment in half of his reconstructed world, so the pragmatist has no further need for language that purports to describe an external reality. The sole purpose of every symbolic communication in his universe of acts will be to shape the internal reality of a person or a society.

In this dual light, I ask you to reconsider the conclusion that Charles Morris reached in his treatise on signification and significance, according to which the main dimensions of signifying relate to phases of the act. In particular, he finds that "designative" discourse corresponds to the act's perceptual phase, "prescriptive" discourse to the manipulative phase, and "appraisive" discourse to the phase of consummation.

A student of Mead, Morris builds on his mentor's analysis of the act's phases.
He recognizes that "formative" discourse might call for a fourth dimension of signifying; but he decides that Mead's analysis need not be complicated by a fourth phase to account for this misfit.

To the contrary, when analysis of the act's phases is approached by the different method afforded by consideration of Piaget's basic progression of developmental stages, a phase of hypothesis formation will be one of those found missing from Mead's tally. This phase, indeed implemented socially by formative discourse, will be, for a pragmatic cosmology, the one in which new knowledge is created. It is accordingly the specific phase that Peirce recommended to our understanding in order to consummate the formal traverse on which he would have us embark. Since this phase of the act will also involve forgetting knowledge, I prefer to call it the phase of "reorganization."

By all indications, language is an ancient heritage and should not as an ongoing system be expected to zig or zag as readily as speculations about the nature of language or styles of speech, heard from men immersed in a particular cultural situation. To look at the way language is being used is rewarding for a pragmatic inquiry which, in keeping with its interest in the process and various states of adaptation, will prefer to observe humans at large in their natural habitats as they busy themselves more with obedient or competitive doing than with learning.

Comparison can thus be made of the respective abilities of the pragmatic and the objective viewpoints to organize known facts of language. That formative discourse fits naturally in the pragmatic framework can be taken as a bit of confirming evidence that its organizing principles are more comprehensive than the objective ones.

The comparison is itself the one proposed for a pragmatic science, since only by use of the pragmatist's viewpoint does one begin to grasp the general principle that what is felt in experience as a "viewpoint" is determined by one's own choice of inferences. A consequence of this insight is to make the context as well as the content of observation matter. Once the two are seen to be relative, one gaining meaning as complement to the other, the aim of a pragmatic science must be a useful matching of the two.

To make the comparison just recommended, one would have to first identify Piaget's theories as being pragmatic in outlook and those of both Mead and Morris as belonging to that conservative view of psychology and sociology which attempts to achieve order and predictability in a world of objects. Just because the objects are animate instead of inanimate does not, for its overextended objective reasoning, change the nature of the quest.

Thus a troublesome consequence of the pragmatist's insight is that, in his own mind, the opinions of other men will no longer be regarded as equal in perspicacity.
If Mead himself believed that the source of information was in a reality "outside" of his subject, that presupposition on the part of Mead as observer and as theorist would account, to the reasoning of the pragmatist, for still another phase of the act denied autonomy in Mead's theory yet required by the pragmatic realm of speculation pursued by Piaget.

As my projected phase of reorganization will agree with the pragmatic hypothesis that knowledge is a creation of the mind, so this second neglected phase of the act will anticipate a constructed experience. Rather than an experience consisting of data received through the senses and somehow stored as pictorial or otherwise coded "representations" of an external reality in memory, pragmatic perception will itself be a constructive activity building on a foundation of actual instances of elemental sensory or motor acts, each one signaling the success or the failure of its small task when commanded to perform.

It is necessary to conclude that all "external" objects and relations in a universe of acts will be presented in experience by successful acts of perception. And from this the more general conclusion can be drawn that "contents" will be given in knowledge by overlapping collections of potential acts of perception, exactly as overlapping collections of potential acts of inference will define "contexts." A parallel can therefore be established theoretically, according to which the preservation in experience of either a specific content or a specific context will be signalled by the successful consummation of some member of that collection. However, the purpose of perception may be to ascertain that some object or relation is not present in the environment.

It should be noticed in passing that the logical calculus which George Boole dropped at our doorstep, whose computations of "truth-values" pampered the empiricist's expectation that his symbolic designs correspond with an external reality, will reappear in the pragmatist's universe of acts as computations of "success-values." For looking backward in a pragmatic world at what was done in the past, such computations will be needed to determine the success or failure of a complex act in consequence of the successful or unsuccessful consummations of its elements. For looking toward the future, they will be needed to assess the internal validity of proposed acts.

These computations will be of equal value for acts of inference. As a matter of fact, it is by following out the strict parallel and symmetry of perception and inference that one can begin to get the hang of how the pragmatist orders his personal as well as his social cosmos. A coordinated matching of perception and inference, in which the two are equal partners, is the very source of his information. It is our world, not his, which assumes information will arrive from a material reality and so gives greater weight to perceiving than to inferring.

Anticipating your preference for an objective world, I have to this point glossed over the puzzling fact that a universe of acts will require two kinds of elements. The first are the elements of perception that have been brought to your attention. They were called "sensory" and "motor" acts because I assume their agents in biological organization to be organs of sensation and locomotion, respectively.
For machines, the analogous agents will be "sensors" and "effectors," each one capable of signaling the success of its commanded task. The second kind of elements will be the elemental acts of inference from which complex inferences may be constructed. Not knowing the biological agents of elemental inferences, I will for the present characterize them in mechanical terms as being able to produce or to recognize structures comprised of the mobile units I have called "concepts."

I rely on mechanical explanations without apology. Pragmatic hypotheses will have to be tested by means of electronic circuitry. Without computers, the progressive attainment of ever more comprehensive and equilibrated stages of mental dynamics could not be demonstrated convincingly. But pragmatic experimentation will be not in the least concerned with "simulating" a mind, whatever method of comparison that might connote to its proponents. The methodological insight of the pragmatist is that whereas a mind cannot be described it can be constructed. His objective will be to construct a mechanically-based mind every bit as useful as the ones based biologically, in all truth potentially more so in view of such enticing properties as access to an unlimited range of sensors and effectors, infinite reproducibility at its prime, and effective immortality.

What is most striking about Peirce's dissent is its emphasis on acts rather than things. Like Langer, I think this is the key to his system-making. The tragedy is that, as far as we know, he didn't turn in an alternative set of diagrams. Yet it is certain that the "particles" with which he labored to construct his pragmatic universe are not thing-like but are instead act-like. His is a universe of acts in which successful acts of perception bring us as close as we can get to our accustomed universe of things.

A pragmatic technology will not move "information" in and out of its machines as computers do now, although there may be a lot more going on inside. No bits at all need cross the machine's boundary. This applies to "instructions" as well as to "data." (These, for the uninformed, are the bit-buckets into which computer-people pay tribute to Descartes' dichotomy.) The sensors and effectors of an information system designed on the Peircian scheme will do much useful work, nonetheless, and may recognize or produce language signs in the bargain. For that last reason, I dub this alternative design a "semiotic system," distinguished from a "formal system" by being the creature of a universe nearer to life, and thus closer to language, in its arrangements.

To tell a programmer that he will have to give up the "instructions" with which he controls the computer is apt to cause a stomach ache. It is exactly the same stomach ache that one should anticipate among politicians as they watch a freewheeling pragmatic personality bouncing about in apparent disregard of the laws and other contractual means that control contemporary society. One should therefore notice that a semiotic system will be controlled by means of a propitious selection of its elemental acts. From this one might predict that a pragmatic society will be less concerned with social instruction but intensely interested in putting the right social agencies in place.
These trends have emerged in our national life; they can be expected to cause the same sort of hair-raising scenes that happened when the nobles swiped the king's programming manual.

Another peculiarity of Peirce's design is its insistence on a world divided into three basic parts instead of Descartes' two. In the triad of Peirce's universal categories, one can identify as "presentness" the objective meanings of environmental fact, and as "law" the subjective meanings of organic form. But what of his third category, "struggle?"

Return, if you will, to the requirement for two kinds of elemental acts in a universe of acts: elements of perception and of inference. It will be seen that there are three basic combinatory possibilities. In addition to complex acts of perception composed of perceptual elements and complex inferential acts made up of elements of inference, there may be complex acts consisting of both perceptual and inferential elements.

I amend my hypothesis as follows: every pragmatic "meaning" will be defined in "perceptual knowledge" by a collection of potential acts, and will be presented in "perceptual experience" by an actual act successfully consummating some member of that collection. Only in the special case where members of the collection are composed entirely of perceptual elements will that meaning be a "content;" only if the members consist of inferential elements will the meaning be a "context." Otherwise that meaning will be, to use Peirce's term, a "resistance."

Perceptual experience, as a consequence, will reconcile conceptual structures with environmental structures in the sense that, for a complex act to be successful, its perceptual elements manipulating the environment and its inferential elements manipulating concepts must both satisfy specific conditions of success. Not only will perceptual acts be coordinated with inferential acts to produce or modify conceptual structures, inferential acts that recognize conceptual structures will also guide perceptual acts by means of those same coordinations, so being the origin of perceptual purpose.

I will discuss the origins of concepts under the topic of the act's agent, Piaget's functional nucleus. In the meantime, "concepts" may be regarded as act-like units of information corresponding to meanings, which is to say that they will represent the collections of acts just discussed. Those concepts corresponding to contents, the meanings of environmental presentness, will be "factual concepts." "Formal concepts" will correspond to contexts, meanings of law in the sense of process. "Organic concepts" will correspond to resistances, the meanings that mediate between presentness and law.

"Conceptual knowledge" will consist of the designs of concepts, one for each meaning in the semiotic system. Instances of these designs, having been arranged by inferences into conceptual structures, will constitute "conceptual experience."

The remaining phase of the act, still unspecified in our revised tally, will be the phase of "conception," during which the responsibilities of tenuous acts of inference are taxed to extend conceptual structures beyond the frame resulting from immediate perception. This, then, is the phase served by speculative discourse. However, all of the act's phases will involve the manipulation of conceptual structures.
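Because this taxonomy is compact, a minimal data sketch may help fix it before going on. The representation is an illustrative assumption of mine, not a specification: a meaning is taken to be a collection of potential acts, each a sequence of elemental acts, and its kind follows from the composition of its members.

    # Illustrative sketch: classifying a meaning by the composition of
    # the potential acts that define it. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ElementalAct:
        name: str
        kind: str  # "perceptual" (a sensor or effector) or "inferential"

    def classify_meaning(potential_acts):
        """A meaning is a collection of potential acts, each a sequence
        of elemental acts. Entirely perceptual members make a "content";
        entirely inferential members make a "context"; a mixed
        composition makes a "resistance" in Peirce's sense."""
        kinds = {act.kind for member in potential_acts for act in member}
        if kinds == {"perceptual"}:
            return "content"      # represented by a factual concept
        if kinds == {"inferential"}:
            return "context"      # represented by a formal concept
        return "resistance"       # represented by an organic concept

Nothing in the sketch is doing real work; it only records the bookkeeping by which contents, contexts and resistances are told apart, and the corresponding factual, formal and organic concepts assigned.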
It is by studying the kinds of inferences being made, and thus the kinds of conceptual structures being produced or being recognized to guide perceptions, that the separate responsibilities of the phases can be identified theoretically.

In short, the phases do define the main meanings in the semiotic system, reflected in language as Morris' dimensions of signifying. These are patterned after Peirce's still more basic triad of meanings: "presentness," "law" and "struggle." And the most fundamental is the duo of meanings, "knowledge" and "experience," on whose grid the mind is built.

Enough ground has been laid to begin redrawing the basic distinction between the process and the states of adaptation in terms of mental organization. It should be recalled that pragmatic explanation always takes this to be its aim.

It should therefore be anticipated that the only government a pragmatist will respect is one that can do something for him or can teach him something by helping him to be aware of his own mistakes or by presenting him with creative possibilities that he may have overlooked in his personal life. His concept of good citizenry will be to return the favor to government in kind, since only by contributing to the social act can he come to respect himself as a useful member of society within the frame of his own attitudes.

In consequence, the pragmatist's conception of his societal role is more directly related to serving and being served by society than has been the case for all of the preceding cultural orientations. Thus the pragmatic theory of language belongs to a social order that will direct its symbols more deliberately than the present one to stimulate the creative efforts of the collective mind upon which a successful social performance ultimately depends. It is within a post-industrial world view that designative, speculative, prescriptive, appraisive, and formative discourse may all be seen to contribute synergistically to the creation of a source of social information beyond the accomplishment of any single participant. This different conception of the collective interest is the one which will motivate a pragmatic science.

On the other hand, I have argued that a pragmatic science, because of its different conception of the information source, will proceed by a method exactly the opposite of empirical method. It will not make observations and then extract theoretical conclusions in the familiar pattern of today's technical document. Nor will it regard technical documents as "knowledge," no matter how high they stack.

Pragmatic method will make its advance by shaping an elaborate conceptual structure, at the beginning expected to be imprecise. One work of intellect will be to ensure the "internal" validity of the structure by inferences eliminating from it inconsistencies or dissonances. A second work will be done by inferences that test the "external" validity of the structure by using and then shaping it as a frame for successful sensorimotor acts, some of which may be acts of observation. A pragmatic science will not merely observe the environment, however. To learn pragmatically, this science must do something useful; it must struggle.

Hence my conclusion that semiotic systems will become not only the instruments of learning at this stage of society, but will generate information shaped to usefulness through social use.
The likelihood of this tech-

Hence the idyl might end, in true science-fiction fashion, with mechanical minds ashamed of mortals, so bringing the pragmatist's age to its own just reward.

Therefore, as Peirce never tired of arguing, the requirements of science differ from those of society only with regard to precision. Along with personality, the scientific intelligence and the social intelligence will also be modeled on an act whose phases, from the pragmatic viewpoint instead of the objective one, are as follows.

origins of the mechanical act:

In substance, a new community was formed by those hopefuls who took part in the mechanical translation stampede of the fifties. Computer types like myself joined in consortium with linguists who were then being dragged off the streets as authorities on translation if they knew how to translate. The computer, in those first days of unblemished optimism, was the only employee in sight, and we told each other it would get to work shortly as soon as we gave it the plan.

That initial stage of research during which translation algorithms were designed, by our group and the others, was definitely ordered on the authoritarian scheme. And it is disquieting to notice in retrospect that the prime result of thoughtful doing in the following decade was to lift the computer from serfdom to industry. It had advanced from employee to middle manager, now carrying out the operating decisions of the general translation policy that linguists and systems analysts, by then become executives, had made.

You can see that Descartes' dichotomy had polarized us into its two camps. For a while linguists and programmers went happily about their separate yet complementary research functions as allies in policy-making for a computer unfit to learn how to make factual or formal choices by itself. The role we had reserved for ourselves was to be the custodians of what the computer could, and should, learn about translating.

To do this, the budding science of linguistics had been transformed from an introverted scholasticism to such a heady mass-production of morphological and syntactic descriptions that I fear linguists beyond the borders of our small community became infected with the same compulsion. To handle the sheer volume of descriptive output, further investments were made in programming not directly concerned with translating but motivated by a need for better ways of storing, retrieving and displaying language data as an adjunct to translation research.

Two opposite requirements were pondered from the start. The first goal of mechanical translation must be an automated process which will extract meaningful units of some kind from a sequence of graphic symbols that represents a text of the language to be translated. If the extracted units are not concept-like, it is improbable that equivalent units will be found in another language, a risky quest at best. However it is done in detail, the transfer from the one language to the other must make use of a conceptual representation of the meanings of the text. That representation, at the very last step, must somehow guide the construction of a text in the second language. Hopefully, when all is through, the product will be true to the original text in meaning.

Over the last decade extensive research was done on generalized translation processes to perform such an automated analysis, transfer and synthesis of technical texts. I won't dwell on these techniques in detail, because you are probably well versed in them anyway.
If not, the facts are fairly easy to find.

For my present purpose you need only be informed that, to analyze a text, the analysis process would use a "grammar" consisting of metalinguistic statements, frequently called grammatical or syntactic "rules." The theoretical inclination of the time was to think of these rules as "generating" only those expressions that were judged to meet certain criteria, the latter being too often an obstreperous rounding off of the linguist's "intuition" about language.

Whatever the origins or the justifications of the rules constituting the grammar, the automated analysis process would set out to show that the text, or some part of it under analysis, could have been produced by substitutions of those particular rules according to the generative procedures visualized for them. By starting from the text and working backwards through possible substitutions, accordingly, the analysis process would develop a tree-like structure of symbols naming the grammatical classes to which the various parts of the text belonged. Such classifications were nearly always "ambiguous," in that alternative structures grew side by side from overlapping segments of the text. This overgrowth of trees caused a lot of worry, and many clever things were done with weedkillers, to no great avail.

I wouldn't go so far as to say that this approach to mechanical translation foundered on the ambiguity problem, though it was there that the deeper misassumptions wallowed to the surface to be seen. The folkways of ambiguity "resolution" gave the first clues that the trouble might not be in the machine but in the heads outside.

My chief purpose in this essay has been to explore the possibility that designers of fancy information systems, like every one else, base their inventions on reasons which are in the end uniquely personal. No damage will result unless the technical objective requires the designer to make use of such fundamental concepts as "meaning." But in this case, if the organizing principles of his personal world do not satisfy the technical needs of the problem, his solution must be unsatisfactory. At this extraordinary forefront of design conception, the designer's ability to successfully shape intelligent machines will be inseparable from his ability to successfully shape himself.

No matter how the goals of mechanical translation are renamed or reclassified, the underlying requirement will still be the development of a mechanical analogue of mental organization. I would therefore like to make the flamboyant suggestion that the great depression which decimated the translation research community in the late sixties was due to misestimation, or outright neglect, of the psychological requirements of this kind of investigation.

The emotionality which plagued mechanical translation at its dawn was an early indication of the effects that pragmatic inferences can have on the investigator's own psyche. Those disruptions were indeed mollified by treating translation research as though it were an undertaking of empirical science. But since methodological appeals to intuition went out of style in empirical science long ago, this posture is obviously a playhouse that should have been a way station.

To my mind, the feasibility of constructing information systems that will translate languages just as well as human translators is no longer in question.
I wouldn't go so far as to say that this approach to mechanical translation foundered on the ambiguity problem, though it was there that the deeper misassumptions wallowed to the surface to be seen. The folkways of ambiguity "resolution" gave the first clues that the trouble might not be in the machine but in the heads outside.

My chief purpose in this essay has been to explore the possibility that designers of fancy information systems, like every one else, base their inventions on reasons which are in the end uniquely personal. No damage will result unless the technical objective requires the designer to make use of such fundamental concepts as "meaning." But in this case, if the organizing principles of his personal world do not satisfy the technical needs of the problem, his solution must be unsatisfactory. At this extraordinary forefront of design conception, the designer's ability to successfully shape intelligent machines will be inseparable from his ability to successfully shape himself.

No matter how the goals of mechanical translation are renamed or reclassified, the underlying requirement will still be the development of a mechanical analogue of mental organization. I would therefore like to make the flamboyant suggestion that the great depression which decimated the translation research community in the late sixties was due to misestimation, or outright neglect, of the psychological requirements of this kind of investigation.

The emotionality which plagued mechanical translation at its dawn was an early indication of the effects that pragmatic inferences can have on the investigator's own psyche. Those disruptions were indeed mollified by treating translation research as though it were an undertaking of empirical science. But since methodological appeals to intuition went out of style in empirical science long ago, this posture is obviously a playhouse that should have been a way station.

To my mind the feasibility of constructing information systems that will translate languages just as well as human translators is no longer in question. The experiments of the last decade have convinced me that machines will translate better than humans in the long run, provided the pragmatic nature of the research can be expressly acknowledged and planned for.

Lauding a technology of the future is senseless, however, if it says nothing about present choices which will capitalize on the hard lessons of the past. An honest appraisal should find that men have been at fault in mechanical translation, not machines. More damnable is the growing evidence that, for reasons which seem reasonable enough to their myths about themselves, the investigators have attempted to do the machine's learning by a bureaucratic shuffling and sifting which leaves in clumsy human hands the very things that computers do best.

My recommendation may not be popular but I feel it is sound. To get the job done the translation community will have to make use of its forerunners, deliberately looking for exceptionally gifted investigators with that troublesome pragmatic personality which may see problems of mechanical selection in a different light. The other choice will be genteel stagnation.

In my opinion there is no practical alternative to a mechanical organization that will permit a choicemaking machine to have its own experience balanced adaptively to its own knowledge. To try to approximate this by preplanning is hopeless. Yet only pragmatic experimentation with the necessary relationships of experience and knowledge can actually demonstrate the irrationality of the self-satisfying toil that stuffs human know-how into computers.

Such a turnabout in human motivation will entail reconsideration of what has been learned to date. In an upside-down pragmatic world it will not be reasonable to think of the processes of analysis, transfer and synthesis as "simulating" what might have been done by a human translator somewhere external to the machine.

Instead, the analysis process will be regarded as "assimilative" in the sense of establishing an orientation between an internal frame of experience and the specific features of an external environmental situation, which may itself contribute new experiences. The transfer process will make those choices which ultimately relate the situation to a purposive course of action founded on that dynamic experiential framework. Lastly, the synthesis process will be "accommodative" in that it will construct the specifications of the next act conforming to that purpose, to then be performed overtly by the machine.

To project known mechanical arrangements to the pragmatic point of view being considered here, I would like for you to imagine a different kind of "grammar"; if you please, a grammar of acts. The "rules" of my pragmatic grammar will be formed like the ones familiar to you, with the exception that the symbols they will generate will no longer name morphological units of a language. They will name elemental acts.

Of course, the tasks of certain elemental acts may be to recognize or to produce viable features of speech or writing. A full range of morphology will be provided by these elements, however; the capabilities will be much broader than those needed for linguistic analysis or synthesis.

The "higher level" coding conventions that have been in use for some time in computer software systems might be a precursor of a pragmatic grammar, since they enable a programmer to construct complex programs from fragments of programming called "subroutines."
But the constructive viewpoint of formal systems would not be left behind, to be replaced by that of semiotic systems, until each of the constituent subroutines was explicitly designed to signal its success, or lack of it, in accomplishing some commanded task.

Thus the terms I have been using to introduce you to pragmatic thinking can be clarified further at this point by relating them to the more familiar artifacts of language processing.

A "potential act" will be symbolized by each of my pragmatic rules. The collection of all such rules will represent the "perceptual knowledge" of the semiotic system. An instance of any one of the rules, when it has been incorporated into the tree-like structures created by either an analysis or a synthesis process, will symbolize an "actual act." The entire structure, or perhaps separate structures, consisting of all actual acts, will represent the semiotic system's "perceptual experience," on the proviso that it will be possible to compute the success or failure of an actual act if the success or failure of each of its generated elements is known, or vice versa.

The tree-like structures of symbols representing perceptual experience will always be anchored to the simply ordered sequence of elemental acts which has been referred to as the "stream of existence" of the semiotic system. As before, the symbols of the structure will name classes to which the various parts of that existential stream belong. The classification will still be "ambiguous" where alternative structures subtend overlapping parts. A predicted success-value will accompany the name of each elemental act projected beyond the realized stream of existence; the prediction will be either "success" or "failure." When the complex act is committed to action, by commanding its elements to perform their separate tasks in serial order, the agent of each element so commanded will signal "success" on reaching its small objective; otherwise, "failure." This "realized success-value" will also accompany the name of the elemental act so that the two values can be compared. Further, this realized value will be the one used by the analysis process as it works backwards from the elements through possible rule substitutions.

I can now begin to explore the functional analogy presumed to exist between the psychological act and its primal agent, the biological act of which the "agent of the act" will be the mechanical analogue. My explanation of the act's agent will lay necessary groundwork for speculations about the psychological act, and will give a preview in microcosm of the more intricate psychological phases of the act.

the act's agent:

Life has its rhythm wherein each new beginning has sprung from a termination just on the edge of the past and each new termination has anticipated another beginning at the edge of the future. The functioning of the agent of the act will be cyclical, itself forming an act in miniature.

To get the cycle started, a random generation of elements of the stream of existence might be used to approximate, for a semiotic system, the reflex starting mechanisms observable among infants of all kinds.

The first activities of the act's agent will be analogous to those of the psychological phase of "perception." A given stream of existence will have resulted from the cycle just terminated. Starting from the elements of that stream, the analysis process will work backwards through rule substitutions which could have generated those elements.
This phase can be thought of as "assimilative" in that a representation of perceptual experience will be its resultant construction.

While the tree-like structures representing actual acts are being put in place by the analysis process, the realized success-values accompanying the elemental acts of the existential stream will be used to determine, after the fact, whether each of those actual acts would have been successful had it generated the part of the stream to which it is being anchored.

In effect, the analysis will provide a recap of alternative acts, other than the one overtly committed in the cycle before, that could have produced the results recorded in that prior segment of existence. Ambiguities, in this pragmatic scheme, could turn out to be a positive blessing since they alone will introduce novelty. The luxury of being able to select a different orientation for further action, of having a "change of mind," will only be possible when ambiguities have been found. That luxury will become a necessity when the consequences of having acted were unexpected. If the predicted success-values of the preceding act were not realized then a misfit of orientation, and consequently a need to select another alternative, will have been indicated.

Choosing among the alternatives uncovered by analysis will be the second activity of the act's agent, analogous to the selection of an orientation to conceptual structures in the psychological phase of "conception." At the primitive level of functioning of the agent of the act, selections of orientation will have to be made without the help of concepts. Indeed, this analogue of the biological act must be the very source of concepts.

A theme echoed over and over in observations of the conceptualizing state of mind is choicemaking founded on tradition, on ritual, on mere replication of what has already happened and best of all more than once. Concepts themselves will be the accretions of acts often repeated; sure to be repeated again.

During my own phase of ambiguity "resolution," out of desperation more than anything, I worked out a theoretical suggestion made to me by Raymond Solomonoff, who had the idea that a generative procedure in which rules are being substituted could be treated as an independent stochastic process. By having the machine keep up with the relative frequency of substitution of the rules generating the members of each separate class, fairly simple procedures can be programmed for selecting from results of analysis those alternatives which replicate earlier perceptual experiences in a gross probabilistic sense.

The hypothesis that rule substitutions are stochastically independent events seems to work out for a so-called "stochastic grammar." There is also a convenience in programming, because it is the assumption of independence which permits the relative frequency of substitution of a given rule to accompany that rule in the grammar.
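A minimal sketch of that selection may be useful, with rules and frequencies invented for the occasion. Under the independence assumption the likelihood of a whole analysis is simply the product of the relative frequencies of the rules substituted in it, so the machine can prefer, among ambiguous alternatives, the tree that replicates past experience in the gross probabilistic sense just described.

```python
# A minimal sketch of stochastic selection among ambiguous analyses.  The
# relative frequencies below are invented; in practice the machine would
# keep them up to date by counting substitutions over many cycles of doing.
import math

RULE_FREQ = {                      # rule -> relative frequency, kept with
    ("S",  ("NP", "VP")): 1.0,     # the rule in the stochastic grammar
    ("VP", ("V",  "NP")): 0.7,
    ("VP", ("VP", "PP")): 0.3,
    ("NP", ("NP", "PP")): 0.2,
    ("PP", ("P",  "NP")): 1.0,
}

def log_likelihood(tree):
    """Treat substitutions as independent: sum the log rule frequencies."""
    if isinstance(tree[1], str):   # lexical leaf, e.g. ("NP", "her")
        return 0.0
    children = tree[1:]
    rule = (tree[0], tuple(child[0] for child in children))
    return math.log(RULE_FREQ[rule]) + sum(map(log_likelihood, children))

# two alternative structures over the same overlapping segment
attach_to_np = ("S", ("NP", "men"),
                ("VP", ("V", "saw"),
                 ("NP", ("NP", "her"),
                  ("PP", ("P", "with"), ("NP", "telescopes")))))
attach_to_vp = ("S", ("NP", "men"),
                ("VP", ("VP", ("V", "saw"), ("NP", "her")),
                 ("PP", ("P", "with"), ("NP", "telescopes"))))

best = max([attach_to_np, attach_to_vp], key=log_likelihood)
print(best is attach_to_vp)        # True: the likelier replication wins
```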
By analogy to the choice of a definite orientation to conceptual structures in the psychological phase of the act, then, the agent of the act will make a probabilistic choice of orientation. The psychological phase of the act to follow will be "manipulation," during which the conceptual orientation will be used as a basis for planning a course of further action.

For the act's agent, this third activity will simply project the actual acts that were selected for the new orientation of perceptual experience, by finding them to be the leading structures of more complex acts. A modified form of analysis will continue to work backwards through possible substitutions which leave some of the trailing symbols of the rules unanchored beyond the existing elements of the stream of existence. The synthesis process will then start from such unanchored symbols to generate a new segment of elemental acts along with their predicted success-values.

Ambiguous classifications may again cause alternative structures to be generated. Since these will be the result of synthesis rather than analysis, more than one sequence of predicted elements may be projected out from the existential stream. Should this happen, as will be the usual case, the process will combine the various sequences into a partial ordering of elements.

There are heuristic reasons for not making a definite probabilistic choice, either among the alternatives which might be projected or among the various projections themselves. Rather, a number of the most likely possibilities can be carried forward through both stages of activity to generate the partial ordering of predicted elements which projects onward the simple ordering of the existential stream realized so far. Paths ahead through the partial ordering can be rated as a convenience to the process that will make the final selection of elements to be activated, one after the other, to push the stream into a newly realized segment of existence.

The process doing the final selecting and activating of elements will be responsible for the fourth activity of the act's agent. Like the phase of "consummation" of the psychological act, this activity will be "accommodative" in the raw sense of rubbing against an unsympathetic environment. Each successive element will be selected from the most highly rated path and then commanded to do its thing. The realized success-value that it signals will be matched with the predicted one as a condition for continuing. If the values do not match, the process will look for another path where providently the realized success-value of that same element might have been predicted for the step gone amiss. Or, if by its nature the abortive task could have no damaging effect, being one of recognition for instance, then the process will still have room to back up and try another path, until none remains.

Then the path along which predictions were finally realized will become the new segment of existence to be analyzed in the next cycle. A number of cycles may be necessary to work through a complex act; how many will depend on the difficulties encountered in trying to surmount unrealized predictions. In times of such trouble, the most promising alternatives may be brought forward by probabilistic choices that span from structures now well behind the segment of existence being analyzed.
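The flavor of that fourth, consummative activity can be caught in a small sketch. To keep it short I flatten the partial ordering into a few pre-rated paths and invent the elements and the environment; the real process would, as just said, also reach back into earlier structures for its alternatives.

```python
# A minimal sketch of consummation with backtracking.  Each path is a rated
# sequence of (elemental act, predicted success-value); `perform` stands in
# for commanding an element's agent and collecting its realized value.

def consummate(paths, perform):
    """Try paths from the most highly rated down; return the first segment
    of existence along which every prediction was realized, else None."""
    for rating, path in sorted(paths, key=lambda p: p[0], reverse=True):
        realized = []
        for element, predicted in path:
            value = perform(element)       # command the element, get value
            realized.append((element, value))
            if value != predicted:         # the step gone amiss
                break                      # back up and try another path
        else:
            return realized                # all predictions were realized
    return None

# invented environment: one recognition task fails, the rest succeed
def perform(element):
    return element != "recognize-flag"

paths = [
    (0.9, [("recognize-flag", True), ("emit-signal", True)]),
    (0.6, [("recognize-tone", True), ("emit-reply",  True)]),
]
print(consummate(paths, perform))   # falls through to the 0.6 path
```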
Stochastic grammars are less tidy than the ones you may be accustomed to. Overlaps should be anticipated as the normalcy of a pragmatic universe; the termination of one act will also be the beginning of another. Luckily, the probabilistic selection process which I have been airing has an affinity for an act being terminated. Not until the termination is complete will it switch to another act, one already in progress and being brought ahead as an alternative possibility.

To handle a messy, poorly integrated perceptual experience is a requisite ability of a semiotic system. It is from pristine chaos at this most primitive level that the rules symbolizing potential acts must originate; and afterwards the collections of potential acts representing meanings must get together; and only then can concepts be created in correspondence with meanings. The remaining duty of the agent of the act will be to procreate concepts. Learning to shape the concepts themselves will be functionally analogous to the psychological phase of "reorganization," where the responsibility of learning will be to shape structures built with concepts.

There will be scant materials for reorganization in perceptual knowledge at the outset. The initial rules, representing all that the semiotic system knows, will simply place every elemental act of that unique pragmatic universe into a one-member class. From such an unpretentious sow's ear, classificatory processes will be called on to custom-produce silk purses.

The white hope of the pragmatic viewpoint is the new slant it puts on inductive reasoning toward knowledge anticipating experience. A resurgence of interest in the theory of induction, after its long sleep as the stepchild of empirical science, may in the end wean mankind from classifying things. A pragmatic science will classify acts. Until this is well understood, the possibility of machines that learn efficiently can rightly be looked on with suspicion, along with the possibilities of fast-learning personalities or societies.

In order to shape perceptual knowledge, inductive processes of the act's agent will monitor "local" events in the structure of perceptual experience. Such events as rule substitutions or the neighboring of symbols in certain relationships to one another will be monitored. From the data so gathered, automatic classification will be used to locate points of weakness in the body of perceptual knowledge, or to detect possibilities for extending that body by the addition of new rules.

These data may be gathered from many cycles of "doing," as the act's agent pursues its first four activities. Only once in a while, at a propitious moment, will the rules symbolizing perceptual knowledge be updated to incorporate in them what has been learned since the last updating. These "learning" cycles may have to be carried out during periods of inactivity and rehabilitation not unlike sleep.

Some of these necessities of pragmatic learning were programmed by our group in the mid-sixties as a means of "debugging" grammars. Billed in our reports as a "self-organizing linguistic system," the programs made use of theories of automatic classification put together by Roger Needham and other members of the research group at Cambridge, England. Our research objective was a better grasp on that elusive relationship by which a grammar is said to "describe" the contents of a particular collection of texts.

Firstly, the so-called "horizontal" classifications are the ones which detect possibilities for creating new rules. The events to be monitored will be those in which two symbols classify adjoining segments of the stream of existence where all predicted success-values were realized for the elements of both segments. Automatic classification will then cluster together the first members of such pairs that have been followed by similar second members. The second members that have been preceded by similar first members will be clustered also.
Clusters of first members will then be matched to clusters of second members to induce those chummy relations between neighboring classes of segments that a rule will symbolize in perceptual knowledge.

While horizontal classifications will originate all new information at this primitive level, in the form of perceptual hypotheses symbolized by rules, refinements of the resulting perceptual knowledge will depend on "vertical" classifications. As classes named by the symbols in rules are progressively refined, the probabilistic selections of perceptual experience will favor the structures incorporating the nicest refinements. The most comprehensive structures will also tend to be chosen as working alternatives. Even here the theoretical treatment of probability is intimately connected with the treatment of induction. Verification will be gradually accomplished by use. When an induced rule is no longer being selected probabilistically for use, it will be consigned to oblivion.

The events monitored for vertical classifications will be rule substitutions in perceptual experience, as jointly given by the symbol being substituted and the symbol at the place of substitution. Automatic classification will cluster those symbols which have appeared in similar places of substitution. In addition, a clustering will be done of the places that are similarly receptive to the symbols being substituted. The clusters of symbols being substituted will then be matched to clusters of places of substitution to detect those concentrations of affinity which will define more specialized classes to be named by new symbols.

It will be found that these vertical classifications can be carried out for the substitutable symbols and the places of substitution instancing the name of a single class. That class will have "stabilized" when no clusters, either of the symbols or the places, result from automatic classification. For that specific class, the proper balance between experience and knowledge will exist temporarily. Disequilibrium can return to it at any time due to refinements of knowledge taking place elsewhere, or due to new knowledge being acquired.

To guard against overspecialization, the same techniques can be applied to the symbols instancing the names of two classes which have been shown by horizontal classifications to be very close in membership. If the clustering resulting from automatic classification does not detect in experience this distinction being made in knowledge, then the difference will be "forgotten" by the simple device of thenceforth using the same name for both classes.

"Forgetting" rules that have been originated hypothetically but not used at all should be done posthaste. That a rule is not used very often, on the other hand, should not condemn it. For sweeping the dead wood out, an obvious measure of obsolescence is the ratio of rejection to selection in probabilistic choices.
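For concreteness, here is a minimal sketch of the horizontal case, with invented monitored events and a deliberately crude similarity measure; the vertical case substitutes (symbol, place-of-substitution) events for the neighboring pairs but is otherwise of the same distributional kind.

```python
# A minimal sketch of "horizontal" classification.  Events record pairs of
# symbols that classified adjoining, fully successful segments; symbols are
# clustered by the similarity of their neighbors, and a matched pair of
# clusters becomes a candidate rule to be tried out in perceptual knowledge.
from collections import Counter, defaultdict

events = [("A", "X"), ("A", "Y"), ("B", "X"), ("B", "Y"), ("C", "Z")]

def neighbor_profiles(pairs, side):
    """For each symbol on one side, count the neighbors on the other."""
    profiles = defaultdict(Counter)
    for pair in pairs:
        profiles[pair[side]][pair[1 - side]] += 1
    return profiles

def similarity(p, q):
    """Crude overlap of two neighbor profiles, between 0 and 1."""
    shared = sum((p & q).values())
    return 2 * shared / (sum(p.values()) + sum(q.values()))

def cluster(profiles, threshold=0.5):
    clusters = []
    for sym in profiles:
        for cl in clusters:
            if similarity(profiles[sym], profiles[cl[0]]) > threshold:
                cl.append(sym)
                break
        else:
            clusters.append([sym])
    return clusters

firsts = cluster(neighbor_profiles(events, 0))   # [['A', 'B'], ['C']]
seconds = cluster(neighbor_profiles(events, 1))  # [['X', 'Y'], ['Z']]
# matching first-clusters to second-clusters induces a hypothetical rule,
# e.g.  NEW -> {A, B} {X, Y}, to be verified gradually by use
print(firsts, seconds)
```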
The arrangements I have explained to this point might be thought of as the "morphology" of the semiotic system and those usually referred to in semiotic theory as "syntactic." I take the morphological arrangements to consist of the agents of the elemental acts, including among these the sensors and effectors, together with the act's agent whose processes I am still considering. The syntax of the system comprises the constructions of perceptual experience and knowledge created by the act's agent from rules of a type which will now be designated as "syntactic" in character because they classify sequences of morphological elements.

The second principles of arrangement I would have you consider were also worked out theoretically for the "self-organizing linguistic system." Although most of the processes I will now explain were programmed and used for other purposes, pragmatic learning experiments were never performed with them.

What you should recognize about this part of the semiotic system is its dependence on a higher level of symbolization by rules to be characterized as "semantic" because the classes named by their symbols will be the ones representing meanings.

Whereas the symbols of syntactic rules will name individual elements or classes of sequences of such elements on the morphological level below, the symbols of these semantic rules will name either individual syntactic rules or classes of "syntactic segments" constructed of syntactic rules joined together at their usual places of substitution. Some of the places may still be open for further joining.

If it suits you, think of these semantic rules as generating by a process of substitution not sequences of elements but rather the tree-like structures comprising the perceptual experience of the semiotic system. These semantic substitutions can also be treated as an independent stochastic process. Semantic rules will be "stochastic" in the same sense as the syntactic, making possible very similar probabilistic means of selecting among alternatives of semantic analysis or semantic synthesis.

Semantic synthesis, starting from a given symbol naming a class of syntactic segments, will substitute semantic rules in order to construct a member of that class. Thus the synthesis process itself will construct a tree-like structure, consisting of semantic rules, that is anchored to the syntactic segment it has synthesized from syntactic rules. Semantic analysis, starting from a given structure constructed of syntactic rules, will work backwards through possible substitutions of semantic rules to determine that certain segments of that syntactic structure are members of particular semantic classes. It too will build a semantic structure anchored to the syntactic one it is analyzing.

Every syntactic rule in the body of perceptual knowledge has been taken to symbolize a potential act. A syntactic segment will also be regarded as symbolizing a potential act that is not given explicitly in knowledge, yet is implicit in the sense of being producible in perceptual experience by means of a synthesis process or recognizable there by means of an analysis process. Symbols naming semantic classes will, by these constructive means, be implicitly related to particular collections of potential acts represented in the semiotic system as syntactic segments. These are the collections to be called "meanings." Consequently, the symbols of a semantic structure will represent a hierarchy of meanings being presented by the syntactic segments to which it is anchored.

I offer no arguments in defense of these semantic arrangements, since to argue for their theoretical validity would be meaningless from the pragmatic viewpoint of the semantic hypothesis itself. Syntactic segments have been the units associated with meanings in translation experiments and in studies of paraphrasing.
Techniques of semantic classification used by linguists toward these research objectives appear to be "distributional" like the syntactic. What recommends this hypothesis, therefore, is that it is testable by automatic classification under the rigorous controls which can be exercised by computers in experiments aimed at a pragmatic explanation of the kinds of human behavior observable in translating or in paraphrasing.

While certain human activities reveal the structure of meaning more than others, it will be assumed that meanings are used without exception in all forms of behavior. The consequence of this supposition for the processing requirements of the act's agent will be to introduce a higher level of semantic analysis and projective synthesis above the syntactic ones. The effect will be a superposition of semantic constraints on possibilities being carried forward by probabilistic selections among the syntactic alternatives.

To be more specific, the structures resulting from syntactic analysis of a new segment of the stream of existence will, as a continuation of the first activity of the act's agent, be subjected to semantic analysis. The semantic structures will then be projected forward by probabilistic choices which will generate the projected syntactic structures on the level below. Probabilistic syntactic selections can then proceed as explained earlier, as can the fourth consummative activity of the cycle of doing.

In the learning cycle of the act's agent, "syntactic" inductions can be distinguished from the "semantic" inductions proceeding from perceptual experience, to be represented by the semantic structures, toward perceptual knowledge of meanings, to be symbolized by the body of semantic rules. With regard to the inductive processes themselves, vertical classifications of substitutive events in semantic structures will be identical to those of syntactic structures. The processes that specialize classes or generalize them by forgetting distinctions can in fact be used on both levels of symbolization, as can the processes doing away with obsolete rules.

Horizontal classifications of syntactic segments introduce a number of new theoretical problems because these segments are not linear but are tree-like in form. Again the events to be monitored are those where two symbols in the semantic structure classify adjoining segments in the syntactic structure below. Now however the root of one tree-like segment will be joined to a particular branch of the other. It will be necessary to keep track of the specific branch where joining has occurred.

But since the two symbols name classes of syntactic segments, the two segments actually joined in the syntactic structure below are merely representative members of the classes so named. The scheme for designating places of adjoinment must relate to the whole class of syntactic structures instead of to the branches of its individual members. For example, the places can be numbered so that a given numeral will designate the same place of joining throughout a class of syntactic segments. Further, that numeral may designate more than one branch of any syntactic structure of that class as being the same place of joining.

Pairs of symbols classifying syntactic segments adjoined at places designated by the same numeral will be processed by automatic classification in the manner already explained. The results will detect classes of syntactic structures which have an affinity for joining at that place.
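The bookkeeping this requires is simple enough to sketch. In the fragment below, with all names and numerals invented, a segment carries its open places of joining as numerals, and each monitored event records which symbols adjoined at which designated place.

```python
# A minimal sketch of tree-like segments with numbered places of joining.
# A segment is (symbol, children); an integer child marks an open place.
from collections import Counter

segment_a = ("greeting", [("salute", []), 1])           # place 1 is open
segment_b = ("address", [("name", []), ("title", [])])  # no open places

def open_places(segment):
    """List the numerals designating a segment's open places of joining."""
    _, children = segment
    places = []
    for child in children:
        if isinstance(child, int):
            places.append(child)
        else:
            places.extend(open_places(child))
    return places

def adjoin(host, place, guest):
    """Anchor the root of guest at the branch of host designated by place."""
    symbol, children = host
    joined = []
    for child in children:
        if child == place:
            joined.append(guest)
        elif isinstance(child, int):
            joined.append(child)
        else:
            joined.append(adjoin(child, place, guest))
    return (symbol, joined)

# the events monitored for horizontal classification at this level
adjoinments = Counter()
adjoinments[("greeting", "address", 1)] += 1    # (host, guest, place)
print(open_places(segment_a), adjoin(segment_a, 1, segment_b))
```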
In essence, the inductive process at this semantic level must learn the correct ways to designate the places of joining if the classifications are to progress very far. There are simple conventions by which the numerals designating such places of joining in syntactic segments can be associated with the symbols in semantic rules which name classes. As a result the designations of places of joining will be generated by the semantic synthesis process along with the syntactic rules so joined. Semantic analysis will also take these designations into account as it works backwards through possible substitutions.

Finally, there are arrangements of yet another kind that might be called "pragmatic" because their organizing principles have to do with a world view represented by speculative conceptual structures. This part of the semiotic system is constituted by structures of concepts representing conceptual experience and a body of conceptual knowledge representing the conceptual designs which are instanced in conceptual experience.

Concepts, the building blocks of the semiotic system's world view, will be originated by the act's agent for those semantic classes which have stabilized according to the criteria presented for syntactic classes. The fact that such enclaves of stability may be disrupted by further learning will help to explain the dynamics of the progression of intellectual development in which quite different world views emerge only to be destroyed at the next advance of the adaptive process. As we also know, the meanings to which concepts correspond may change gradually by adaptations not always in the direction of structural clarification or refinement.

In the correspondence of concepts to more or less stable meanings, each numeral which designates places of joining in those syntactic segments representing a given meaning will appear in the design of the corresponding concept just once. The number of different numerals will be the "degree" of the concept. A "binary" concept, for example, will be able to connect with two other concepts in conceptual structures; a "ternary" concept, with three.

Conceptual structures will in a sense go behind the serialization which is necessary to meaningful actions, and during which the same part of a structure being represented by concepts may be acted upon more than once. To go beyond serial behavior, to a conceptualized world view, will be the function of the psychological act itself.

phases of the act:

The perceptions of all other phases of the act except the first appear to be concerned with locating environmental situations worth looking into. In contrast to the elements needed to select situations for exploration, the first "perceptual" phase of the act specializes in the identification of objects or relations, follows moving objects, and recognizes the specific movements of objects being followed.

The responsibilities of this phase can be characterized as those necessary to keep up with some situation that had been previously singled out as having import within the separate responsibilities of another phase of the act. Elemental acts of inference are coordinated with elemental sensorimotor acts to the end that the former inferences update conceptual structures representing in experience what the latter perceptions find going on in the immediately perceivable environment.

Some of the inferences will be producing or modifying conceptual structures in correspondence with the meanings being presented in perceptual experience by semantic structures.
Other inferences, coincidentally, will be recognizing the constructions being shaped so as to guide perceptions that will further develop the situational constructs.

While conceptual structures are being recognized by inferences or new structures produced by them, environmental objects or relations may be in motion relative to the sensors of the semiotic system. Those movements may or may not be affected by manipulations on the part of the effectors. Thus a four-way coordination is called for. Sensory and motor elements will combine freely with structure-recognizing and structure-producing elements of inference to form complex perceptual acts.

Coordination resides in the combinations themselves since, to be successfully presented in perceptual experience, a complex act must encounter in the consummation of its double orientation of inferences and perceptions the conditions of success or failure anticipated beforehand in perceptual knowledge by that specific combination of elements.

As was mentioned, these mechanical arrangements are not peculiar to the act's phase of perception. Complex acts carrying out the responsibilities of the other four phases of the psychological act will coordinate elemental perceptions and inferences by this combinatory means. What each phase does in the way of fulfilling its special responsibilities will depend on the particular elements being combined.

It follows that selecting the elements to be made available for combination will be one of the ways by which a pragmatic technology will control its information systems or subsystems. This manner of maintaining control over machines will be analogous to the biological controls that Piaget hypothesizes to be the result of his first type of genetic factor. By his theory such factors not only guide the maturation of organs of sensation and locomotion; innate coordinations residing in the reflexes are also their biological consequences.

The specific method of processing to be performed by the agent of the act will be a second way of controlling semiotic systems. The act's agent, a mechanical analogue of Piaget's "functional nucleus" whose development in biological organization he attributes to his second type of genetic factor, has now been explained with regard to the general principles underlying its processing. The biological act, of which the act's agent will be the mechanical analogue, was presumed to be a simplified version of the psychological act now being considered.

In private, when the individual personality is supreme in its own right, these same significations will facilitate the phases of the personal act of individual men and women. The pragmatic conception of society derives from these cosmological assumptions. They imply that the social act will be most successful when the specifications being converted into action by participating agents will have their origins in specialized components of the society that are deliberately organized to carry out the responsibilities of the several phases of the social act. From this it can be predicted that society at the sixth cultural stage will give first priority to providing suitable agents for the act's phases.

Any other motive will seem unreasonable to pragmatic thinking because deviations from this aim could only steal from societal life by detracting from the synergy of the social system.
For the motive of synergistic increase will also reign in the individual personality of the pragmatist. Pragmatic technology being derived from the same assumptions, this society will have the option of providing mechanized agents for social responsibilities that may be dangerous, unpleasant, boring or impossible for humans. I have not hesitated to project a cybernetic society gaining a part of its synergy from symbiosis with semiotic systems. Having started, the partnership will surely increase.

Within the mechanical organization of a semiotic system, the agent of the act will also convert the specifications of complex acts by the same method regardless of their specialized origins in the subsystems responsible for the act's several phases. The separate responsibilities of the phases can therefore be set forth by an account of the particular kinds of perceptual inference and of meaning corresponding to the phases.

A further simplification can be made in the theory of semiotic systems by assuming that the perceptual elements will be common to all subsystems. This assumption seems reasonable in view of my conclusion that the inferential elements are the ones that explain the purposes of the perceiver. Inferences within the coordinating combinations, by recognizing or producing conceptual structures, will effectively guide acts of perception. Consequently, when the perceptual elements are known, responsibilities of semiotic subsystems can be investigated or specified in terms of required inferences alone.

For this reason I have presented the adaptive process as one of formal learning, where the very concept "formal" corresponds to meanings derived from inferences. Now I have further clarified the concept "learning" as being motivated toward ever more accurate knowledge of the specific inferences needed to implement each of the act's phases. You should recall my previous observation that every advance of the adaptive process is felt by the mind as an increase of mental capacity or "insight." That increase, here taken to be the very signal of successful learning, will be explained pragmatically as a gain of synergy in consequence of inferences being used in closer approximation to the requirements of the act.

"Progress" in a pragmatic society will be indicated by this synergistic increase, and the ability to produce it will measure the progress of a pragmatic technology. Research and development of semiotic systems will proceed by a humanly controlled evolution of mechanical agents. After research decisions have been made about new or revised agents to be used in the next experiment, and after those agents have been ensconced in software, or more likely in integrated circuits, the rest will be up to the machine. Apart from experiments with agents, a pragmatic technology will not make use of the programming or the inputs of data which have been required so extensively in the development of information systems of the von Neumann technology.

Any change of elements, or a new method of processing by the act's agent, will be the mechanical analogue of "mutation" as far as a given semiotic system is concerned. In considering the developmental stages of such a machine for purposes of theory, I will assume that the agent of the act and the availability of elements of perception and inference remain unchanged.
A consequence of this theoretical choice will be that the progression of adaptive stages must be explained in terms of new meanings being originated in the system rather than a newly modified morphology. The agent of every elemental act of inference will be thought of as lying dormant until the origination of the kinds of concepts to be manipulated by that inference. As the stabilization of a new meaning will initiate a new concept to be put to use in conceptual structures, so that concept may activate inferences until then dormant. Activated inferences, in their turn, will combine in new coordinations with sensorimotor elements to eventually originate, and perhaps to proliferate, new meanings. So around again. The creative bootstrapping of information is here fully rotated, although the kinds of concepts to be originated are still to be unraveled.

The creative aspect of pragmatic theory is nowhere more apparent than in the act's second phase of "conception." The responsibility of this phase will be to construct a conceptual structure more encompassing and more integrated than the one representing the immediate situation. To do this, conceptual inferences will also use the inventory of concepts whose designs have so far originated within the creative activities of the act's agent. Building blocks of every conceptual structure will be instances of these conceptual designs.

In contrast to inferences of the first phase of perception, which might be characterized as assimilative, conceptual inferences will be accommodative. They will function to extend or to revise, in a word to "shape," an experiential structure of concepts that was the product of conceptual inferences similarly used in the past.

The conceptual structure itself will be called a "world view." Various techniques have been investigated for organizing such a world view in command and control systems or in question answering or asking systems. All methods of structuring that I know about have been defective in being limited to the spatial and temporal dimensions of conception, that is to say, to "objective" structures consisting of factual concepts. A pragmatically organized world view will also incorporate organic and formal concepts to make possible "subjective" structures, representing the mind's self-experience and its experience of other minds.

As to the nature of conceptual inferences "about" other minds, one should recall that the functional responsibilities of the perceptual phase include recognizing the movements of objects being followed in the situation. If an object being followed has been identified as "animate," due to either its distinguishing features or the character of the movements themselves, the complex acts recognizing its movements will have already been referenced in the situation to factual concepts instancing designs from the recognizable repertoire of motions of that animate object. Under these cognitive conditions, the elemental acts constituting the stream of existence of the mechanical mind following the movements may be regarded as substitutes for the elemental acts making up the stream of existence of the animate object causing the movements. The inferred stream of existence of that animate object can then be processed by the act's agent by the very same method as is used to process the stream of existence of the mind doing the inferring.
The matter may be worked out mechanically by simply considering that segment of existence to "belong" to the animate object under observation to the end that the semantic structures resulting from analysis of that segment will be used to make conceptual inferences about the mind of that object. New meanings so created will add to the situation those experiences which speculation ascribes to the object being followed. With these subjective results, conceptual inferences will shape the part of the world view representing the semiotic system's experience of that animate object's mind. Additionally, the system's conceptual experience of the movements and other objective characteristics of that animate object will be shaped.

Objective experiences of each "living" object, either casually familiar to the semiotic system or important to its goals, will be represented individually in the world view together with what has been inferred about the mind of that agent. Other objects may be identified as being of an animate type, say a "human being," about whose mind general patterns of experience may be inferred as being characteristic of agents of that type.

If, in addition to identifying an agent as being of a certain type, the semiotic system finds itself to be a participating agent in the collective mind of that type, then the cognitive conditions will have been established for those conceptual inferences anticipated by Mead's theory of the "generalized other." The inferred behavioral patterns of that type will be the ones which teach the semiotic system its responsibilities in the social act of that community of minds. Not only will language do the lion's share of instructing semiotic machines in the desirable patterns of symbiosis with humans; patterns of speech and writing used by humans will themselves be acquired by the semiotic system mainly through this channel of conceptual inference.

Movements of any sort will be represented in the situation by structures of factual concepts corresponding to both the spatial and the temporal dimensions of meaning. Those exceptionally animate objects, identifiable by "human" actions or features, will be uncommonly demanding in their impositions on the situation. A semiotic system will have to speculate about human minds to which it attributes purely temporal facts of speech or purely spatial facts of writing.

Generally, conceptual inferences about other minds will be the means by which a semiotic system carries forward speculations concerning all aspects of the situation that may be the result of present or past actions of objects identified as living agents, perhaps illogically or incorrectly so. A child may treat her doll "as if" it were alive. An accident may "hurt" some favorite inanimate object. Or an aspect of the situation may portend future actions on some agent's part. Evidently there can be a nexus of a nexus of a nexus, and so forth.

A sort of algebra will exist among the semiotic system's conceptual structures representing what the members of a community of minds believe about the experiences of one another, and believe other minds believe about the experiences of one another, and so on. In such structuring, some of the conceptions of the semiotic system will appear to have been experienced uniquely; they will be "private." At the other extreme every mind will seem to have experienced the environment, whose conceptions will take on a "public" character. Suitable pathways for conceptual inferences will have to be found through this maze.
In practice the paths may be short; the semiotic system will have to become skillful in using them.

Conceptual inferences will be "projective" in the sense of comparing the conceptualized objects or relations of the immediate situation with the larger framework of the world view in order to clarify the former or to shape the latter. Thus I presume that to be "lost" is to lose one's place in a comparison which, on the side of the world view, is the fount of expectations about one's situation. On the side of conceptual structures representing the situation, the comparison provides those new experiences whose integration into the world view reshapes existing representations of a "past," a "present" and a "future," to prepare a basis for later expectations.

"Surprising" situations are not only unexpected; they are the ones for which integrations into the web of the world view don't pan out. Marking failures of conception, surprises are the situations which the conceptual phase of the act will recommend to the perceptual phase for further exploration.

The prime objective of conceptual inferences will be to eliminate surprises, a state of affairs not to be confused with the elimination of failures. Situations in which acts have failed can be justified conceptually so that they are no longer surprising. The cause of failure may be "gremlins" or "fate." What it boils down to is this: a surprising situation is worth attending to because it reveals a flaw in the world view that should be repaired; but the repair will satisfy only the narrow needs of a responsibility for integrated structure-making.

Situational structures will have a transient existence in the semiotic system, being held in short-term memory only long enough to be used by

Every persistent attempt by individual or collective agents to reach certain social objectives will give rise to that little domain of meaning called a "role." "Butcher," "father" and "lover" are occupiable slots in the social fabric; a man may "be" all three concurrently. There are roles for groups or organizations, partly laid down verbally or inscribed as "policy."

Another side of the world view is its structure of roles. The agents represented in the world view will be temporarily occupying certain roles in one or another community; they will be at the moment occupying their minds with objectives which are for the most part conventional. Existent patterns of interpersonal or interorganizational transactions, or of transactions between individuals and groups or organizations, will be rudely predictable. Whether a given social objective was actually reached may not be known to the community for sure, because in society evaluating "success" is itself a role that might not be reached satisfactorily.

The valuable thing to notice about roles as far as manipulative inferences are concerned is that, according to the pragmatic world view, the social objectives that give rise to the structure of roles are not the concern of this third phase of the act. How the collective mind will organize itself to carry out the social act is the special province of pragmatic inferences which will do the work of the act's fifth phase of "reorganization." Indeed it is the pragmatist's readiness to take to himself the responsibility of reorganizing social roles that is causing so much emotion today.

The attitudes of industrial societies have assumed that the mature individual will occupy a useful place in an existing social order.
Democracies have left the choosing of roles up to the individual, viewing the occupancy itself as a competition for desirable positions. In compensation, penalties for not choosing to "work" have been, on the whole, severe. To be poor in industrial society, except for mitigating circumstances, is to be lazy.

A pragmatic need to tamper with the structure of roles itself, now explained hypothetically by the motive of bringing social objectives into closer conformity with the requirements of the social act, will be in conflict with industrial purposes and attitudes on two major counts. Not only does the pragmatist refuse to choose a ready-made role, and so does no industrial work unless pressed; when he then takes it on himself to "change the establishment," he doubles the insult.

Responsibilities of this manipulative phase of the act will presuppose that a semiotic system will have been committed, at any given time, to one or more roles in which it is participating as a mechanical agent of society. The machine may be doing the payroll of an organization, or working on an assembly line. In addition to its "social" objectives, the semiotic system may have "personal" objectives supportive to its intellect or material being. The objective of exploring a surprising situation uncovered in its conceptualizing phase would illustrate the intended satisfaction of an intellectual need. An intention to preserve the morphological basis of its existence may involve sustenance or maintenance. A semiotic system will need its supply of electricity or of spare parts; it may be trusted, up to a point, to detect and to patch up the improvidence of its surroundings or the malfunctioning of its components.

In order to reach the various objectives to which the semiotic system is committed, manipulative inferences will compare an existing conceptual structure, representing its planned course of action, against an ever changing world view. From the world view, the inferences will gather what they need to reshape the plan so as to keep it up to date with the fluctuating conditions of a conceptualized objective and subjective environment. Should goals change, the plan will also have to be reshaped.

Inferences relating to planning can be exceedingly complex, since they involve such complicated things as knowing who one's allies or opponents are and how they might react under certain conditions, knowing the terrain and the artifacts that might be harmful or helpful to one's aims, and so on. The developing plan, on its side of the comparison, will point to missing or incomplete or inconsistent experience in the world view relative to its purposes. Situations that could contribute to the satisfaction of these specific needs of planning are the ones that manipulative inferences will recommend to the perceptual phase of the act for exploration.

I will call these "competitive" situations because the responsibilities of this phase, just as the others, appear to be narrowly drawn. The urgent business of the manipulative phase will be to obtain one's objectives. That may call for outdoing a competitor after the same objective; or a possessor of the objective may be disposed to defend it. As a result the attitudes and purposes engendered by manipulative inferences will center on the concept of "dominance," the achievement of one's own objectives at the expense of other agents where necessary.
The other side of this coin will be a great deal of bother to escape being dominated oneself.

That competitive situations will be recommended for exploration by the perceptual phase of the psychological act has the consequence that the world view based on manipulative inferences will be utilitarian and practical in character, despite the broad exploratory vista aspired to by Newton's universe as a foundation for its plans. The colossal storehouse of experience, always greater than one's competitor, will not be the aspiration of a pragmatic mind. Generally speaking, the world view of a semiotic system, like that of the society it may serve, will seek refinement between experience and knowledge instead of accumulation. What is not needed to effectively carry out its roles will be pronounced "not relevant" before being judiciously discarded.

An insight into the theoretical requirements of this manipulative phase can be gotten from computerized experiments with heuristic decision making. The "general problem solver" programmed by Herbert Simon and various associates over the years is an especially good example, although like the rest it is founded on the objective view conceptualizing "action" relative to a change of "state." Furthermore the action alternatives are assumed in Simon's theory to be known in advance. This will in fact be the case within the narrow responsibility of the manipulative phase considered by itself. But the difficulties of learning the alternatives cannot be entirely circumvented in thinking about the requirements of this phase, since the arrangements within which a semiotic system will do its decision making must be applicable to all stages of its intellectual development.

The "problem" attacked by heuristic decision making programs is to transform an initial state into a terminal one by means of a sequence of state-transforming operators. The initial state may be transformed into a number of intermediate states as decision making proceeds doggedly toward a "solution," which will be signalled when some intermediate state has been found to be identical to the terminal one. Toward that end the program compares each intermediate state with the terminal state to list differences between them. Each difference is associated with one or more of the operators. The general process of choosing the next operator to be used to transform the existing state is commonly called "means-end" analysis.

There is no guarantee until the last that the choices of means-end analysis are on the way to a solution. The process may try several paths and will gradually generate a branching tree of possibilities. Planning strategies are concerned with measures of progress along the way, and with heuristic principles determining where the next explorations should be made to avoid the single-minded stereotype of a direct approach, as well as the plodding, effort-scattering blindness of trying everything.
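A minimal sketch of that scheme, with an invented table of connections and operator preconditions left out for brevity, shows how small the machinery is; the pragmatic restatement that follows would keep the loop but move between orientations rather than states.

```python
# A minimal sketch of means-end analysis over a "table of connections."
# States are sets of conditions; each difference is associated with the
# operators able to reduce it.  All names here are invented.

def means_end(state, goal, table, depth=8):
    """Return a sequence of operator names transforming state toward goal."""
    if depth == 0:
        return None                       # give up on this path
    differences = goal - state
    if not differences:
        return []                         # solution: no differences remain
    for diff in differences:              # pick a difference to reduce
        for name, adds, deletes in table.get(diff, ()):
            successor = (state | adds) - deletes
            rest = means_end(successor, goal, table, depth - 1)
            if rest is not None:
                return [name] + rest      # this branch reached the goal
    return None

TABLE = {   # difference -> operators as (name, adds, deletes)
    "translated":  [("run-translator", frozenset({"translated"}),
                     frozenset({"untranslated"}))],
    "text-loaded": [("read-tape", frozenset({"text-loaded"}), frozenset())],
}
start = frozenset({"untranslated"})
goal = frozenset({"text-loaded", "translated"})
print(means_end(start, goal, TABLE))   # e.g. ['read-tape', 'run-translator']
```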
A process of pragmatic means-end analysis will not progress from state to state but rather from one orientation to another. Each orientation will either fathom the environment with perceptions on the "outside," or on the "inside" will keep its place with inferences referenced to the world view. Consequently the "problem" can be restated pragmatically as one of transforming an initial orientation into a terminal one, so gaining the "solution." But the intermediate orientations along the way to the solution will be both perceptual and inferential; in effect the successfully coordinated orientations will enforce a correspondence between an "external" environment and an "internal" conceptualization of it.

Here is yet another slant on the developments attendant to "learning." With progress toward specialization, complex sensorimotor acts will be coordinated with complex acts of inference as were sensorimotor elements with inferential elements initially. Increased precision of perception will be backed up by inferences of greater exactitude and depth. Sensorimotor and inferential elements will tend to be separated in the stream of existence. They will bunch together, each with its own kind, as constituents of complex acts of perception and of inference respectively.

The place of organic concepts in the semiotic system can be illuminated if, in considering their origins in perceptual learning, one will look for sensorimotor and inferential elements still mingling together in the existential stream where complex acts of inference and complex acts of perception meet. Intricate "organic acts," specialized neither to perception nor to inference, will grow between those which implement the orientations. The organic acts will implement the purposive movements of the semiotic system from one orientation to another; they will be in pragmatic theory the equivalents of Simon's operators.

Simon's "table of connection," where differences between states are mapped onto the sets of operators from which the means-end analysis process makes its selections, may be seen to answer a theoretical need not unlike one of those served by the world view of a semiotic system. Given an initial orientation in the world view and a proposed terminal orientation, the organizing principles of the world view should make it possible for manipulative inferences to put together appropriate sequences of movements for making the transition. Failing that, the principles should facilitate the discovery by manipulative inferences of plausible directions in which to make goal-seeking explorations.

The world view must also be the framework to which all inferential orientations are referenced. For the satisfaction of this different theoretical need, the kinds of concepts making up the structures of the world view at a given time are of utmost importance. A pragmatic explanation of the stages of intellectual development of the semiotic system can indeed be argued on this basis, which I do in this essay in a meager way.

Along with the world view, the situation and the plan will be composed of whatever concepts are available at the time. Therefore I have concluded that all three structures can be represented, throughout all stages of development of a semiotic system, by a symbolic facility similar in theoretical form to the semantic one. Where the symbols of semantic rules will name either individual syntactic segments or classes of them, now the segments will be conceptual. Every conceptual segment will consist of individual concepts joined at the places designated by numerals. The count of places still open for joining will be the "degree" of a conceptual segment. All members of a class of conceptual segments will be of the same degree.
With regard to the strictly formal characteristics determining how processing will be done, consequently, the conceptual and semantic segments will be almost identical.

Despite an existing overemployment of the term "pragmatic," I will take it to designate this third level of symbolization in the organization of a semiotic system. As the syntactic level provides for the symbolization of the significant units of information commonly called "signs," and the semantic level symbolizes the "meanings" of the signs, the pragmatic rules of this third level will answer to the "uses" of conceptualized meanings within a total framework including the conceptual experience and knowledge of a community of "users" of the same signs and meanings.

Defined concepts can be introduced at this pragmatic level to correspond to individual conceptual segments or classes of the segments. Definitions may be recursive, to include concepts for classes of classes, and so on. Most of the problems thought about by scientists and by logicians will be pertinent to the organization of this level of symbolization; it should perhaps be approached more humbly than is usual for science or logic.

If constituents of the segments are factual concepts then "things" or "events" will have been classified pragmatically on the basis of use. Yet the same can be said of those segments composed of formal concepts, or of organic concepts, or of the conceptual conglomerates representing acts. Even my distinctions between the three fundamental categories of concepts have been too well made. Such purity should not be expected in the semiotic system itself; it is a convenience to my explanations. I have wanted to get around saying that some acts will consist mostly of perceptual elements, or mostly of inferential elements, or will be pretty much a mixture of both.

The general disposition of a pragmatic approach to conceptual classification will be toward unifying scientific and logical problems within one overall scheme founded on the uses which, according to a unique personal belief, are being made of conceptual segments within what that person knows of an intellectual community. Such personal beliefs may not approximate professional standards without that person's own active participation in a professional practice of conceptual use. By the same token, a semiotic system will require practice to acquire professional standards in its capacity to classify and use concepts.

Conceptual knowledge will consist of the designs of pragmatic rules that result from the practices of a mechanical mind and its private inferences about the uses being made of concepts by other minds. The main parts of conceptual experience will be the situation, the world view, and the plan. All three will consist of specific but speculative conceptual segments, symbolized according to the conventions of this pragmatic level of the semiotic system.

One may now see that the semantic structures presented to perceptual inferences of the act's first phase, by virtue of the one-to-one relationship between the names of semantic classes and the names of individual concepts, can be placed in correspondence with conceptual structures. To extract conceptual segments for use in representing the situation, perceptual inferences will do a pragmatic analysis which segments the conceptual structures and recognizes instances of defined concepts in them. The resulting conceptual segments will also represent the latest orientations of the plan.
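The recursive character of defined concepts can be shown in a few lines. The table of definitions and the expansion routine below are my own toy rendering of "classes of classes," not a committed design.

```python
# Hypothetical pragmatic definitions: a defined concept names individual
# concepts or a class of them, and definitions may be recursive.
DEFS = {
    "tool": ["hammer", "saw"],            # class of individual concepts
    "container": ["cup", "box"],
    "artifact": ["tool", "container"],    # recursive: a class of classes
}

def instances(concept: str) -> set:
    """Expand a defined concept into the individual concepts it covers."""
    if concept not in DEFS:
        return {concept}                  # an individual, undefined concept
    out = set()
    for member in DEFS[concept]:
        out |= instances(member)
    return out

print(sorted(instances("artifact")))  # ['box', 'cup', 'hammer', 'saw']
```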
The various possibilities being carried forward by manipulative inferences, as they shape new branches of the plan under the aegis of planning heuristics, will always be projections of those segments anchoring the newest pragmatic structures erected by the analytical inferences of the perceptual phase.

Processing requirements for pragmatic projections of the plan will be analogous to those of the semantic projections, though considerably complicated by the addition of heuristic processes ancillary to the analysis and synthesis processes. Analysis will again work backwards to find substitutions of pragmatic rules by which an existing pragmatic structure can be identified as part of a larger structure. As the rest of that structure is synthesized, new conceptual segments will be projected onward. The new segments can then be projected again and again, to form a partial ordering of paths composed of conceptual segments that will overlap, always having some concepts in common.

The absolute necessity for overlapping alternatives on the semantic and syntactic levels of processing below can now be grasped if one will consider that any given orientation of the plan, whether perceptual or inferential, may be followed by several different movements of the semiotic system to reach a new orientation. Final selections being made by the act's agent will be essentially choices among possible movements from an established orientation.

The psychological act's fourth phase of "consummation" must refine and adjust the plan to details of the situation. The responsibility of this phase will be to elaborate the plan into a workable form that can be turned over to the act's agent for conversion into an orchestration of overt elemental acts. Simplifications in the plan will be desirable from the standpoint of economy of representation and most assuredly as a convenience to planning. I assume that the plan being put together by manipulative inferences should take relatively large steps from orientation to orientation. While the world view should be sufficient to ground the plan, it should include only what has import for decision making in a grand sense that deliberately excludes mind-consuming clutter.

The situation will have to be represented on two hierarchically related levels of generality. More general concepts will be keyed to the gross orientations of the world view. A nicer grid of perceptual and inferential orientations will fill out the necessary particulars in between planned orientations.

The first thing to notice, in this connection, is that the conceptual structures from which perceptual inferences will extract the building blocks of the situation, having been derived from tree-like hierarchies of semantic classes, will be capable of supplying more than one level of situational representations. And since the conceptual segments representing the situation may do so on several levels of generality at once, manipulative inferences can project the plan with the same degree of generality as was used by conceptual inferences in constructing the world view.
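The repeated projection of segments, each new one overlapping its predecessor in a shared concept, might look like the sketch below. The rule table and the concept names are invented for illustration, and real projection would work over pragmatic rules rather than this bare adjacency table.

```python
# Hypothetical projection of conceptual segments: each rule lets a
# concept project further segments onward; repeated projection yields
# a partial ordering of overlapping paths.
from typing import Dict, List

RULES: Dict[str, List[List[str]]] = {
    "grasp": [["grasp", "lift"], ["grasp", "turn"]],
    "lift": [["lift", "carry"]],
    "turn": [["turn", "open"]],
}

def project(start: str, depth: int) -> List[List[str]]:
    """Enumerate paths of concepts reachable from `start` by projection."""
    if depth == 0 or start not in RULES:
        return [[start]]
    paths = []
    for seg in RULES[start]:
        tail = seg[-1]                   # the concept both segments share
        for rest in project(tail, depth - 1):
            paths.append([start] + rest)
    return paths

for p in project("grasp", 2):
    print(" -> ".join(p))
# grasp -> lift -> carry
# grasp -> turn -> open
```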
Meanwhile, consummative inferences will do more detailed planning to create possible paths from one gross orientation of the plan to the next. The problem posed for consummative inferences will always be to reach one of the next orientations prescribed by various branches of the plan. A consummative means-end analysis will therefore do its searching for a solution on a smaller and more particular scale than the manipulative means-end analysis that produced the plan itself. Although there will be heuristic decisions to be made by consummative inferences, the decisions will be less encompassing than the manipulative ones, by virtue of being referenced to the local structuring of the situation instead of to the global structuring of the world view.

These refined paths, the overlapping conceptual segments assembled by consummative inferences as they do means-end analysis, will be the specifications communicated to the agent of the act so that it can now command a coordinated performance of elements conforming to the plan. The specific means of communication will be arranged by placing an additional requirement on the method by which the act's agent projects semantic structures. If paths have been specified by the consummative inferences, then the meanings contained in the projected semantic structures will have to correspond to the concepts in segments comprising the paths. In all other respects, the agent of the act will make its choices as explained earlier.

Should the world view not satisfy the needs of manipulative inferences that are shaping the plan, such inferences may attempt through planning to satisfy their own needs. That is to say, they may incorporate into the plan itself paths leading to the exploration of competitive situations bearing on the specific problems of means-end analysis they are trying to solve. By the same reasoning, paths to some part of the world view marked as surprising by conceptual inferences may be worked into the plan if it bears on a problem to be solved. These requirements of doing will always have precedence over those of learning for its own sake; however, plain inquisitiveness may get into the plan when a semiotic system is not being pushed.

Parts of the situational representations being kept up by perceptual inferences of the act's first phase, in like manner, may not satisfy the needs of the consummative means-end analysis which is assembling refined paths between the gross orientations of the plan. These consummative inferences, too, may produce paths that guide perceptions to the places in the situation where faults were found, thus satisfying their own planning needs. Such recommendations will therefore be made by the consummative phase to the perceptual phase by a route more direct than would be possible for any other phase of the act.

This mechanical parody of bureaucratic prerogative is in character for consummative inferences. In society, these inferences are the inspiration of authoritarian attitudes and purposes whose narrow game looks meekly upward to ask who has got the plan, and then sternly downward to demand someone else's conformity to it. It is consistent within the middle manager's attitudes to look upon the making of policy as a responsibility which might be given to him as his reward for being a successful competitor.
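As a crude picture of this two-level arrangement, the sketch below lays out a gross plan as a few waypoints and lets a "consummative" routine refine each leg into a finer path. The grid world, the waypoints, and the stepping rule are all assumptions made for the example.

```python
# Hypothetical two-level planning: gross orientations chosen by the
# plan, with each leg refined into a finer path of small steps.
def refine(leg_start, leg_goal):
    """Fill in a fine path between two gross orientations on a grid."""
    (x, y), (gx, gy) = leg_start, leg_goal
    path = [(x, y)]
    while (x, y) != (gx, gy):            # local, small-scale steps only
        x += 1 if x < gx else -1 if x > gx else 0
        y += 1 if y < gy else -1 if y > gy else 0
        path.append((x, y))
    return path

gross_plan = [(0, 0), (3, 0), (3, 4)]    # orientations laid out by the plan
fine_path = []
for a, b in zip(gross_plan, gross_plan[1:]):
    leg = refine(a, b)
    fine_path.extend(leg if not fine_path else leg[1:])  # join at waypoints
print(fine_path)
```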
I hope by now you may grant that, within the frame of pragmatic inferences, it is also consistent for one to believe that the responsibilities of making policy cannot be given; they must be acquired by learning.

As you see, I have again arrived at the formal bifurcation evidenced by the conflicting attitudes of the third and fifth phases of the act, which has its corollary for research and development of intelligent machines. Those researchers who base their approach on manipulative inferences will predictably set out to reward computers with a forced feeding of human savvy. Along with the ritual it is customary to state that one is flatly convinced of insuperable piles of pabulum yet to be prechewed, and so forth. Yes, there are.

On the pragmatic side of the conflict I have concluded that mechanical arrangements of this fifth phase of the psychological act will be, with regard to both horizontal and vertical classifications of conceptual segments, very much the same as the semantic classifications performed by the act's agent. In addition there will be heuristic processes for introducing speculative definitions. However the capabilities for introducing new conceptual possibilities are worked out, they must be solidly backed up with mechanized methods for forgetting conceptual structures which have failed the test of use. I think that sophisticated induction, when it is done some day by machines, will indeed be more an exercise of sophisticated forgetting than of anything else. For hypotheses, whether made by machines or men, will most likely be absurd.

The situations which this phase of reorganization will recommend to perception are those which were orienting an act as it failed to be consummated. A fast-learning machine will take special notice of such "failures" in the orientations of its personal acts, or in the orientations of social acts of its community, in order to concentrate reorganizing capabilities on the points of failure, which is to say, on the misfits between personal or social conceptualization and reality.

I am thus convinced that the theoretical lessons to be learned about the organizing principles of semiotic systems, the very arrangements to be consolidated in hardware, are inseparable from the methodological lessons to be consolidated in the designer who would become expert in controlling the evolution of intelligent machines. The maxim of pragmatic method is that the rate of the development will depend on the designer's ability to forget the myth of his personal inventiveness, and to discipline his attention to living or historic evidence of the ways in which semiotic systems have actually succeeded or failed. But he will do so to make design decisions, not scientific descriptions; because in his world all men will be designers of semiotic systems. Knowing this, he will do it better and faster.

Practicality is without question the imperative of this phase, during which a functional necessity does center perception outside of self or community.
How else would it be possible to hammer out plans, either for person or for society, so as to choose what specifically ought to be done in the near future to protect or to improve a position of rivalry?

As these combative attitudes anger at being forced to contemplate their own obsolescence, there is an ameliorative principle that I have brought to your consideration: a mind does not forget what it has learned in previous stages of its development, although further accommodation of its knowledge and experience will be necessary to incorporate them into a more comprehensive and more stable viewpoint.

The authoritarian scheme of choicemaking that had its heyday in the Middle Ages is not lost to us; it is alive and well in every modern organization. Employees do keep their eyes on the boss as they speculate about the newest jog of his will. Sometimes, having perceived signs of his displeasure, they confess to him their sins of nonconformity to his plan. Yet the age is past when mankind, at the very forefront, thought of itself as a society of employees. Modern man has become a middle manager; he makes his own plan. His new talent is the down-to-earth and day-to-day operating decision of a policy attuned to a chancy game of nations and of industry. The policy itself, seemingly imposed on him by human or subhuman antagonists, is felt to be largely beyond his own control. He is a victim of external circumstance. His information, hence his troubles, come from without. His defensive attitude can be ascertained from the outward direction taken by his accusations in time of stress.

Imagine, if you can, a world in which quite ordinary men and women begin to think of themselves as policy-making executives. Then you will have the pragmatist by his shirttail as he starts clumsily to learn how to live in a universe of acts, a strangely mental cosmos, most puzzling for its formal heterogeneity. Not just one context of objective inferences, but many overlapping contexts make up his information. Each is matched in meaningful relationship to specific content. To make policy is to create or refine these little domains of meaning, in which one can recognize the various roles he plays personally or socially, or the roles played by others. His is a self-conscious awareness of roles, with the added stipulation that it is better to create a role for oneself than to take one ready-made. A love affair with the role of policy-making itself can be heard in the bittersweet criticism and proposed reconstruction of sex, corporate management, womanhood, war, money, and apple pie. It is in the active role of designer of roles, taking its speculations from the act's phase of reorganization, that pragmatic perceptions appear so excessively absorbed by signs of personal or social inadequacy.

The pragmatic attitude anticipated for the sixth cultural stage is that all of one's personal and social experience can, and should, be subjected to the same careful scrutiny as those innocuous backwaters hitherto commissioned for study under the contract of scientific detachment. Witness an exodus from the physical sciences to psychology, to sociology, and to all other scholarly and artistic fortifiers of effete humanity. What sounder evidence than this of pollution and clandestine purpose on the rise in science and education? Beneath the discernment that one's own parents must be indicted for incompetence, there lurks an exuberance of breakthrough.
Urgent attempts to teach one's elders overflow from the campuses as a domestic brinkmanship in which the risk of miscalculation on both sides is great. The teaching of oneself is a casual experiment with novel life styles or mind-engineering drugs. It would be ridiculous to see in all of this the motive of merely describing, rather than tangibly redoing, one's own personality and one's own society.

Obviously, scientists and educators will themselves remain furtive in working out the implications of a new point of view while the slow hand that feeds them is exorcising the very same insight. In a climate boding doom as budgets are cut for interlocked institutions of learning, the trend is toward either book-burning or the more priestly arrangement that Robert Fredrich celebrates. The priests would no longer sit and watch society but would use their mysterious knowledge to manage it, never forgetting to pass the collection plate for the harrier of their hounds. They would continue to treat man as a passive object propelled by social forces rather than as an active creator of his own life. Lacking a Descartes to belay the hunters of latter-day witches, they would stop advancing or go petulantly in reverse. The proposition that their own hand is on the throttle is the one that may be illusory, however.

In contraposition to the tired choice between mechanism and free will, the pragmatic scheme of choicemaking postulates an unyielding direction in all human activity. It doubts the credibility of spiritual movers in personal and social dynamics with a hardheadedness reminiscent of past pioneers of physical dynamics. Why should one suppose that a whole universe except for his own brain runs like a watch? If the functioning of a brain creates a mind, the new question has got to be "How is a mind constructed?"

By "mind," you have been assured, I do not refer to something merged in the juices of a brain, where it lies in poised readiness to give or receive "information." No psychic entity is presumed to wait in truant anticipation of news about itself. Just the opposite. I have been following out the alternative hypothesis that "information" is the stuff of which a personal mind, the whole web of a given experience and knowledge, consists, having been created by the biological functioning of a brain.

I look to a tacit acceptance of this seemingly innocent hypothesis, as it spreads without the spiritual reservations hitherto summarily impressed on every progeny, for the basic cause of emotional outbursts across a bifurcation of generations. This new belief does the work of cultural revolution because it challenges the established information source, relative to which all roles in a society are determined.

But to face problems of a cultural nature, we must theorize about an accumulation of form that began long ago and surges onward, temporarily carrying us along with it as unwilling captives. Thus, another principle I have mentioned is methodological. It cites the necessity for formal accommodation in ourselves as we fix our position in the cultural stream by looking backward at a pragmatic reconstruction of the development so far. Then it may be possible to use the hypothetical framework of an alternative point of view as we try to surmount some of the prejudices peculiar to a transient state of mind hoping to predict the form of its future. In order to actually test any new formal hypothesis one must live it, at least tentatively.
A corollary of this principle of verification is that the crushing labor of building a new universe will not be done by investigators alone. Only as it is carried forward in the collective mind of a populace does formal prediction do the constructing by which every change of cultural state is put on trial by use.

When the old forms fail us, a felt need for new forms is indicated by cathectic investment in a new source of information. The arguing and complaining may be simply an accompaniment of disruptive social accommodation already well in progress on a broad front. The ability to talk rationally about a new world view seems to come after it is already established. Some doubt has motivated the mind to learn; the particular forms it will learn are, by our hypothesis, biologically predetermined.

Regarding the rate of learning, our hypothesis predicts that the tempo of adaptation can be slowed down by shielding either a personal or a social mind from an awareness of its own mistakes or from avenues down which it might stray. Or, by obliging it to be aware of systemic misfits or of innovative possibilities in the organization of its own experience or knowledge, the mind's ability to shape itself can be quickened. Language and other means of symbolizing can, in these respective senses, be either "conservative" or "creative" instruments in the various societies that implement the basic order of a particular world view.

A primitive society may produce, on all too rare occasions, a pragmatically wise old man in whom, all too often, his contemporaries will discover no more than an eccentric oldster. Executives in an industrial society are commonly observed to "freak out" around forty, having presumably gotten hold of their corporate role of policy-making well enough to at last apply it in their private lives. Exciting evidence that an exceptionally well-organized culture has made a beachhead on our campuses, not from outer space or Russia but from a creative development of the maligned educational institution itself, may therefore be observed in its surprising output of a veritable herd of wiseacre executives at callow eighteen.

Dynamics of cultural pressure and counterpressure can thus be visualized in terms of individual personalities being projected to stages of formal development beyond the one organizing their society. Forms that for the majority are still helpful will be felt by these forerunners as a drag. The Pandora principle is that the former will invariably come to regard learning as a box from which evils are escaping and will do their best to hold down the lid, whereas for the latter the box will always contain blessings which they will try to emancipate.

Hence the noteworthy innovation in the order of antiquity may have been an overkill of theory. The dawn of conception led to science; but at first there was mainly the anti-science of a florid growth of myths and legends taken altogether, en masse, explaining away everything so fantastically well that no happening could be sufficiently surprising to stimulate learning. If that good old storyteller was an information specialist, as his name implies, his role was the anti-educator of a scheme of traditional choicemaking that succeeded by a ritual replication and protection of what had been done in the past. That tightly conservative preoccupation with the act's phase of conception on the part of the council of elders was the anchor around which a village life moored itself to ascertain the correctness of its facts.
By holding fast to what they had learned by chance, nomadic hunters may have transformed their life ever so slowly to one semipermanently ordered to subsistence herding and farming. Reliance on traditional conception as the source of firsthand information was a more rigid adaptation than reliance on authority. Although sometimes fickle, the latter could change its mind. When the trend finally turned from herding animals to herding men, the villages faced an increase in marauding by clustering around the fortified citadels of feudal monarchies. The nature and attributes of kingship depended on historical background; as information specialist the king was everywhere absolute. Around him, agricultural and human domestication hung over everything in life. By comparison, the hunter had been poor but unbowed.

In the hunter's autistic scheme of choicemaking one can recognize a preoccupation with the act's perceptual phase. The surprising artistic achievement of that first information specialist, the shaman, has been preserved for us in his cave drawings, paintings and sculpture. Remnants of his active practice survive in northern Siberia among the Eskimos; some traces remain in Australia and in Africa. Collecting his firsthand information deep in a self-induced trance, the shaman's explorations of hunting prospects, of causes of illness, of means of cure, and of all other matters necessary to tribal life were done at the very edge of a just-emerging human consciousness. From his multifarious and showy activities, the tribe gained a center of stimulation around which to order society. Art may now keep us from dying of the truth; at the beginning it probably served to keep men awake to their insecure humanity. That function of the shaman's art may have been sufficient for a nascent traverse from grubby food-gathering to hunting. More to the shaman's credit, I think it likely that the initial insight of shamanism, when it is carefully tracked down through the dusty maze of subsequent metamorphoses in magic and religious alchemy, will emerge in its most recent form as an aptitude for doing experiments and making empirical observations.

Paralleling the long struggle to learn how to perceive, and always complementing it, is a progressive accumulation and refinement in the art of conception. Some of the high points of its stages can be seen in Aristotle's "Organon"; in Aquinas' proofs of teleological conformity; in the modern reconception of mathematical proof as conforming to either intuition or experience, where again the polarity of Descartes' dichotomy can be seen; and finally in Frege's theory that such derivations should be carried out exclusively according to the form of the expressions comprising a symbolic system, making possible proofs of an internal systemic validity per se.

The theories of Gottlob Frege, a contemporary of Peirce, are deeply connected with the revolutionary innovation in the conception of form that made possible the reorganization and subsequent expansion of the physical sciences. Before Frege's "Begriffsschrift," investigators had always abstracted formal knowledge from ordinary language. Afterwards they proceeded in the opposite way, by constructing "formal systems" and later looking for an interpretation in everyday speech. This method was not consistently followed.
But at least as a result of the combination of Frege's theory of proof with George Boole's epoch-making "The Mathematical Analysis of Logic," in which a clear idea of formalism was developed in an exemplary way, the principle of such construction has been consciously and openly laid down. One can see in this shedding of reticence the beginnings of a new method in science, wherein innovative formal constructions deliberately lead and determine the necessities of empirical observation, instead of the other way around.

Peirce's contribution to system-making is harder to estimate, because the exigencies of his private life and the indifference of publishers prevented a full-length presentation of his unappealing viewpoint. After his death in 1914, the unpublished manuscripts and hundreds of fragments from a long life devoted almost exclusively to pragmatic speculations were assembled into six volumes by the Department of Philosophy at Harvard. His tendency to follow out the ramifications of his topic, so that digressions appear that seem inadmissible in print but which show vividly the interconnectedness of his thought, may now be recognized as a style dictated by the necessity to develop contents relative to contexts. From all he taught us his own system cannot be completely reconstructed, if indeed Peirce himself was ever able to catch sight of the goodies that will pop out of Pandora's box after the inevitable inquisition.

It can be shown, for example, that the entire translation process can be generalized through use of metalanguages capable of conveying interlingual relations of various kinds. However, this merely extends the idea of enlarging the machine's store of knowledge about language, an idea which by itself has not benefitted mechanical selection as much as researchers had originally hoped. Accordingly, the second thrust of research on mechanical selection has been to widen the search for conditions attending choices. In addition to examining the expression undergoing translation, mechanical processes have been permitted to range over surrounding sentences, paragraphs, whole discourses, or data representing an increasingly extensive experience of language events located in the machine itself.

Due as much to disappointment as to expanding interests, mechanical translation research overflowed vaingloriously and became computational linguistics. This new domain of experimentation is a conglomerate of studies in which mechanical translation shares the limelight with information storage and retrieval, automatic extracting and abstracting, fact correlation, question asking and answering, and similar applications where language is manipulated mechanically. After an unsettling beginning, during which the old guard felt compelled to recant its former commitments, the new milieu of jargons did provide a sounder medium for testing language theories and methods than mechanical translation alone.

In consequence of this new opportunity to compare computational linguistic applications of various types, it has been noticed that mechanical selection comes closest to human patterns of choice in those instances where a little knowledge of language, things, or persons is brought to bear on an experience sufficiently extensive as a source of conditions for choices. In other words, mechanical selection appears to be improved by a better balance between mechanical analogues of experience and knowledge.
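A toy sketch of selection that widens its search for conditions may make the trend concrete; the two-sense dictionary, the cue sets, and the window sizes are inventions for the example, not data from any historical system.

```python
# Hypothetical mechanical selection with a widening context window:
# a word with several equivalents is resolved by cue words sought first
# nearby, then over ever wider stretches of the discourse.
TOY_DICT = {
    "bank": [("riverbank", {"river", "water", "shore"}),
             ("bank", {"money", "loan", "deposit"})],
}

def select(word, position, discourse, windows=(5, 20, 80)):
    """Pick an equivalent for `word` using progressively wider context."""
    senses = TOY_DICT.get(word)
    if not senses:
        return word                      # pass unknown words through
    for w in windows:                    # widen the search step by step
        context = set(discourse[max(0, position - w): position + w + 1])
        for target, cues in senses:
            if cues & context:           # a condition for the choice found
                return target
    return senses[0][0]                  # default: first-listed equivalent

text = "we walked along the river to the bank and watched the water".split()
print(select("bank", text.index("bank"), text))  # riverbank
```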
Machines that ask or answer questions are examples of applications seemingly avoiding the narrow window of experience through which mechanical translation research tried unsuccessfully to squeeze great concentrations of knowledge.

In my opinion there are three lessons to be learned from this curious result of so much effort. The first concerns the way we might reasonably go about developing a mechanical translation system; the second concerns the type of system we might reasonably develop; the third concerns finding reasonable people to do the work. These problems are the ones requiring cogent solutions before feasibility estimates can be meaningful. I think, however, that we have cause to doubt the optimistic assumption that men of good will must always reach similar conclusions on exposure to similar evidence, especially when part of the evidence is about themselves.

By now it should be plain that no methodological consensus exists in mechanical translation research, without which comparisons of both formal and factual results are, at best, misleading. Before sitting down to make a second round of feasibility estimates, it might be proper to ask seriously why in our estimates thus far we seem to be getting "garbage" out of our own selection process.

One possibility is that, because mechanical translation researchers were gathered from a variety of technical specialties, we have not been looking in the same place for conditions on which to base our choices of method. By and large, it must be admitted that we have been a mixed lot, though sharing the prudent wish of every specialist: not to be caught on lame feet outside of his territory.

Another is the possibility that, as heirs of commonly accepted notions about the nature of man, we have been looking too much in the same place for the conditions determining our methodological choices. By preferring the narrow window of empirical science, we have avoided those taboo territories made uninhabitable by the "garbage" production of our predecessors. As a prolific example of the latter I cite René Descartes, who ground his garbage so exceedingly fine as to assay psychical as well as physical substances. Surely it is for lack of these psychic essences that machines are unable to use or to understand language; while we, brimming full, need only introspection to understand and master all of the configurations of our own choicemaking.

We have said a great deal in translation research about the dangers of anthropomorphising machines and so little about the dangers of anthropomorphising ourselves. What if it should turn out, as Charles Peirce claimed a full century ago, that we have no special vantage point on our own psyche, but must learn about that too by careful methods of inquiry? Thus a third possibility is that our difficulties with mechanical selection are the result of self-ignorance, whose remedy should be a disciplined study of the ways we make choices ourselves. If in fact each of us is engaged in a quest for self-knowledge, then disparities in private understandings of the state of the art of human choicemaking might well account for some of the troublesome goings-on in research which takes these understandings as its very ideal.

My personal conviction is that all of these factors are at work to make a second set of feasibility estimates as uncertain as the first.
Before taking up the lessons which such estimates might turn to account, therefore, I consider it essential to make public some of the private assumptions unavoidably the source of my judgments.

Piaget's study of the origins of intelligence in children is an elegant instance of this empirically disciplined formal method at work. It is consequently a good starting place for my summary, and a center line along which I will embroider my own thoughts or those of others caught up in the same intrigue of intellect. From his observations of behavior in the human infant and child, Piaget isolates and describes six early stages of psychological adaptation. Each stage is evidenced by a characteristic scheme of choicemaking. It consists, on one hand, of the child's attempt to assimilate the environment by incorporating within his existing framework of knowledge and experience all new data given by his senses. On the other hand, it consists of his accommodation to the environment by using that modified framework as a basis for new acts. The existing adaptation at every stage is an imperfect equilibrium constantly being repaired by successful assimilative and accommodative choices of its special kind, or being ruptured by unsuccessful choices of that kind.

Psychological adaptation, like the organic, can be explained in terms of relationships that are essentially ecological. Always and everywhere, adaptation is only accomplished when it results in a more or less stable organization of relations between an organism and an environment. The point of supreme interest to us is the perspective from which Piaget chooses to construct his formal hypothesis. By observing stabilities in the child's relations to the environment as they appear from without, which is to say from the commonly accessible frame of reference of empirical science, the observer goes on to hypothesize how those somewhat unsettled ecological relations are felt from the personal standpoint of the child as his mind works out its first contacts with reality.

From the point of view of the investigator, then, factual data are those that can be observed to vary from child to child because they are imposed by environmental details that differ with the time, the place, the culture in which a person lives. Formal data by contrast are found to be invariant among children because, Piaget hypothesizes, these are necessary and irreducible data imposed on the child by his own genetically inherited biological organization, which is functionally the same for all of our species.

As a consequence one can deduce that, from the personal standpoint of a child, invariance is an aspect of experience distinguishing form from fact. And we have already seen that this same invariance is what the investigator might look for himself from the personal standpoint of his own research experience, when he is making theoretical choices. Such a coincidence should warn us that formal hypotheses about the organization of human minds have direct methodological consequences which mark them as being basically different from factual hypotheses about the organization of the physical environment. When the investigation probes into the foundations of meaning and of understanding, there is a new need for consistency between any theory about the mind of the human subject under observation and that of the observer himself.
What is hypothesized for the mental organization of the subject applies equally to the observer, and as a result can modify the choices of method open to the latter in his investigative role. In short, the process of formal inquiry itself is seen to consist of a cycle of assimilation and accommodation. From observations of invariants in the subject's behavior, the observer assimilates new understandings of mental organization, to which he then accommodates his investigative behavior. In this cycle of formal investigation, methodological choices can be recognized as instruments of formal accommodation for the investigator, just as theoretical choices are his instruments of formal assimilation. Choices of theory and method are both tentative and are "hypothetical" in the sense of self-consciously awaiting the test of use. Consequently, these are tools of formal learning for a mature intelligence, not for the infant just starting out in his feeble thrust toward consciousness of self.

The infant has his own instruments of formal assimilation and formal accommodation, for he can be observed to progressively modify the essentials of his scheme of choicemaking. Should that occur, one can tentatively assume that he has learned something, not about the environment, but about the organismic basis of himself. Once the child understands the next stage of psychological adaptation, he prefers to use its new scheme of choicemaking, although it can be shown that he still knows how to use all of the schemes he acquired in earlier adaptations. The next stage is always a more desirable frame of knowledge and experience than the one before it, taking into account everything in the previous stage, but making new formal distinctions and organizing facts into a more comprehensive and equilibrated structure.

That each scheme of choicemaking is formally a prerequisite of its successor is argued by the observation that no stage in the progression of psychological adaptations is skipped. Each stage of adaptation has its own formal organization whose chief aspects my summary will try to illuminate. In addition, one should look for a progression of formal experience and knowledge in states of adaptation which are ever broader and more poised. It is this progression which allows us to think of the successive states as cumulative stages of mental development.

One must distinguish carefully between any existing state of psychological adaptation and the process of adaptation by which that state is changed. As Peirce was shrewd to notice, only when the investigator identifies formal inquiry with the process rather than the state does it become necessary for his own state of mind to change should his investigative process succeed. Formal reasoning has a dual purpose: to clarify the state of contemporary thought, and at the same time to benevolently undermine the world view that its fund of experience and knowledge represents. The aim of that benevolence is to carry forward the cultural process by including an established universe in a still broader and more stable one.

The central role of formal communication as a determinant of the state and the process of cultural adaptation has been explained by Mead and again eloquently by Whitehead. Each language has a formal component for talking about the everyday language to be used in talking about facts. Men also invent symbols for precise forays of factual description, as is well exemplified by the linguist's use of his metalanguage.
Whatever the motivation, formal communication can either consolidate a cultural state by perfecting the symbols already being used to mention facts, or it can offer new symbols to further the cultural process by making possible the mention of facts until then unmentionable.

Whereas at this moment the need of the state of culture is to consummate an objective universe through the use of symbols that successfully organize vortices of objects in a continuum of time and space, the clear need of the cultural process is a new basis of symbolizing with which to organize a more comprehensive universe, incorporating subjective as well as objective facts, and a more equilibrated one by virtue of providing functional mechanisms for formal as well as factual adaptation.

How can a universe be symbolized to bring these neglected cultural ingredients to critical public purview? Langer has proposed that the basic symbols of such a world would name acts, and that the symbolic facility of a universe of acts would allow us to communicate about complex acts composed of those elements. The gist of the line of reasoning being pursued is that it is about the symbols of Langer's universe instead of those of Newton's universe, which have become, after three centuries, so comfortable to a mechanistic sense of life. At first contact, a universe of acts is certainly a strange world; but then, any really new world must be strange. And a world view which aspires to incorporate the mechanics of formal adaptation has in added perplexity the responsibility to explain the circumstances of its own emergence. The job before us is to clarify the symbols of this unfamiliar world as best we can so that they can be used and tested against living and historic evidence, where strangeness has precedents.

Unavoidably, my summary will take up more mature stages of reflective thought following on the six initial stages of practical intelligence that Piaget looks for in the infancy and early childhood of individual men and women. It is in these markedly different settings that one can observe functionally analogous progressions of schemes of choicemaking. The invariant aspects of that progression might then be explained by an increase in human understanding of a biologically determined functional nucleus underlying and guiding consciousness.

Thus, the beginning of the process of psychological adaptation presupposes an existent biological organization, itself the product of an evolutionary sequence of genetic adaptation that incorporates hereditary factors having two quite different types of biological result. Factors of the first type determine the constitution of our nervous system and sensory organs, so that we perceive certain physical radiations, but not all of them, and matter of a certain size, and so on. Factors of the second type orient the successive states of psychological adaptation, and so have their result in the organization of a mind which attains its fullest and steadiest form at the very end of an intricate process of intellectual evolution, not at the start.

All of the various states and the process of psychological adaptation have in common the one formal aspect that, relative to an assimilated frame of experience and knowledge, the direction of every accommodation is such that it attempts to satisfy need. Piaget maintains that needs and their satisfaction are mental manifestations of the complementary interplay of assimilation and accommodation as felt by any human being.
Although from our personal standpoint need may seem primary, it is the internal organization of that underlying unity, the act itself, which motivates our day-to-day existence as well as our long-term psychological development. The theory of the act, making explicit the invariants to be found in every unit of human activity, would for a universe of acts set forth the cyclical relationships between assimilation and accommodation which are taken to be the functional nucleus of both factual and formal adaptation.

The act of Langer's world would not consist of movements in time and space as seen from some distant and impersonal viewpoint of a spectator, although such movements might indicate to the mind of a spectator the act of another mind. The symbol "act" would stand for any elemental or composite constituent of a whole but unique universe, one among others named by the symbol "mind," whose personal and partly intimate point of view would be felt as the very direction of the act. More, the direction of the act would tend to satisfy the immediate needs of a state of adaptation by assimilating and accommodating to the organization of the environment. At the same time the direction of the act would satisfy the long-term needs of a process of adaptation by assimilating the organization of the act itself toward an eventual accommodation which, through the mind's increased understanding of the principles of its own direction, would effect a new state.

This progress notwithstanding, Piaget contends as Peirce before him: there does not exist, on any level of human consciousness, either direct experience of one's own mind or of the environment. Through the very fact that assimilation and accommodation are always on a par, neither the organization of an outer world nor that of an inner self is ever known independently. It is through a progressive construction, guided solely by the pragmatic circumstance that acts once committed to use either succeed or fail to be consummated, that concepts of the self within and of the environment without will be elaborated in the mind, each gaining meaning relative to the other.

The theoretical relationships between the several states and the process of psychological adaptation, as approached in the context of the theory of the act, are the core of the matter, therefore. It is from this connection that one may extract the multifarious method of inquiry indicated by Dewey and Bentley in their essay about knowing and the known in this new universe. If the formal character of each successive state of adaptation is due to an increase in the mind's understanding of how the act is organized internally, then the invariants observed in each of those states should contribute new formal aspects for the theory of the act. Conversely, invariants observed in the act as such should help us to understand the theoretical relationships between the states and the process of adaptation.

Mead's analysis found the act to consist of three principal phases: the first a phase of "perception," the second of "manipulation," and the third of "consummation."
But the method of analysis just suggested will find that every complex act has five functionally distinct phases which, allowing for an initial state, account for Piaget's basic progression of six adaptive stages. This result would presuppose, for the theory of the relationships between the states and the process of adaptation, that it is an understanding of some new phase of the act which the adaptive process incorporates in the frame of formal experience and knowledge in order to pass from the existing state of adaptation to the next state of that basic progression. The efficacy of this view of the situation is given by various sorts of evidence.

Formal adaptation always appears as a growth of capacity in just one functionally distinct phase of the cycle of assimilation and accommodation implementing factual adaptation. This is at least consistent with the assumption that formal assimilation incorporates in the mind an understanding of the internal organization of that phase. There is also an invariant order in the emergence of new phases of intellectual growth. The basic progression of six stages of adaptation exhibits that order in a number of quite dissimilar behavioral contexts, thereby assuring us that we are dealing with exactly five phases of functional capability, no more, no less.

The initial progression consists of the stages of practical intelligence, where the five phases are first established as capabilities in the newborn child. Throughout the development of reflective thought that immediately follows, five functionally analogous phases emerge in the adolescent and young adult with the growth of representative thought. They occur again with the increasing capacity to verbalize subjective and objective facts contained in such thought, and finally with progress in formal verbalization. As the behavioral setting becomes more complex, the formal character of the phases is revealed with greater clarity. The cultural progression is accordingly the most elaborate setting from which one can extract the internal organization of each phase.

Within the internal organization of these five functionally distinct phases, one finds every capability needed to construct a viable theory of the act. That the phases are in fact constituents of the act is evidenced by the very possibility of that construction. However, the sequence of phases defining the direction of the act does not turn out to be the same as the developmental sequence defining the order in which the phases enter consciousness. Evidently the first two phases of the act are understood one after the other, and then the fourth phase, the third, and the fifth.
The process of adaptation always assimilates the phases of the act in this peculiar order to effect, through its respective accommodations to the successive mental increments, the basic progression of six adaptive stages observed in all of the behavioral contexts. Even this unexpected state of affairs will be found to make sense in the context of cultural adaptation, where the developmental sequence can be recognized as a convenient arrangement for the transmission of social and cultural behavior across generations of individuals. To see this, one is required to consider the specific organizations of the several phases and the way they cooperate to determine the direction of the whole act.

To lay the grounds for that discussion, I must stress again that the most fundamental distinction for the new world we are exploring is not the one yielding a grid of space and time which makes possible the symbolization of movements in a physical environment. For a universe of acts, the basic distinction will be that made by Peirce of "potential acts" comprising the patterns of knowledge, as opposed to "actual acts" being instances of those very same patterns which, in the relationships of their occurrence, comprise the experience of a given mind.

The dichotomy of experience and knowledge is a more comprehensive grid for our symbols than that of space and time. It makes possible the symbolization of acts making up a mind that is itself capable of symbolizing physical movements in an environment, as well as its own acts or acts of other minds. Linguists will find this new grid familiar. It is the one by which known patterns of language, symbolized in their grammars, are balanced against instances of those patterns which they symbolize in a given stream of speech. The dichotomy of knowledge and experience is nonetheless as wide as life. Every stream of existence contains sensory elements other than those of speech, which feed a balancing act of magnificent dimensions.

The problem posed for the theory of the act is to explain the equilibrium that assimilative and accommodative processes maintain between actual acts of experience and potential acts of knowledge, given a stream of existence which is itself a sequence of actual sensory or motor acts, each instancing a successful or an unsuccessful consummation of some potential act among the elements of Langer's universe. The resultant mind extends precisely as far as the equilibrium between experience and knowledge is maintained, whether the work of assimilation and accommodation is done by a single biological agent, or by a collection of them acting socially. The agent could as well be an electronic machine. This new world will be less skeptical of mechanical agents of mind than the present one, because it will look for mind in the equilibrium itself instead of in the agent.

Whatever the agent of a given mind, any flaw in its equilibrium will be "need." Any repair of disequilibrium will be "satisfaction." Persistent loss of equilibrium will be the nagging irritation of "doubt," according to Peirce the sole motivation for acts of inquiry, which when successful attain not the truth of an external reality, but stability.
For a universe of acts, therefore, any persistent stability in the equilibrium between experience and knowledge will be "belief."

In summary of the matrix of theoretical and methodological choices, I assume that the criteria of truth in a universe of acts are the immediate stability of its adaptive state and the long-term stability of its adaptive process. These are pragmatic truths of fact and of form, respectively. The former relates ultimately to the organization of the environment; the latter, to the organization of the act. But there is no direct access either to a reality behind fact or behind form. Each is known or experienced relative to the other by means of complex acts which the mind itself constructs. The constituents of that construction are potential sensory or motor acts, the biologically or mechanically based elements of this unique universe, which is one mind among others. The sole source of the information guiding the construction is a given stream of existence, itself a sequence of actual sensory or motor acts instancing successful or unsuccessful consummations of those universal elements. And the organizing principles of the construction are those of the act, whose pragmatic method I will now discuss.

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
638
0.122257
null
null
null
null
null
null
null
null
bbe145f17bbc1104bf5f9f606bb423886c8c62c2
219308541
null
A Few Steps Towards Computer Lexicometry
Appendix II: Some ideas for the program to investigate the relationship between covering set size and maximum definition length.
{ "name": [ "Findler, Nicholas V. and", "Viil, Heino" ], "affiliation": [ null, null ] }
null
null
null
1974-09-01
0
0
null
null
Lexicographical meaning is that of "words," and the

In the framework of our particular topic we shall be mainly concerned with categories 4 and 7.

(1966), unilingual defining dictionaries appear to be based on a model that assumes a distinction between meaning proper (signification, comprehension, intension) and the thing meant by a sign (denotation, reference, extension). On the basis of what is meant by a sign, Osgood, Suci, and Tannenbaum (1957) distinguish three kinds of meaning. 1. Pragmatical (sociological) meaning: the relation of signs to situations and behaviors. 2. Syntactical (linguistic) meaning: the relation of signs to other signs. 3. Semantical meaning: the relation of signs to their significates. It is easy to see that these classes are in correspondence with Longyear's three layers in category 7.

EQUATION

inflectional forms, and

In Russell's view (1967) the structure words, such as "than," "or," "however," have meaning only in a suitable verbal context and cannot stand alone.

The derivative of f at the point 1 is assumed to be 1 because a text of length 1 has a vocabulary consisting of one word, hence

Therefore f' is a function that decreases monotonically from 1 to

As a consequence of the above speculations, in the expression V = N^k, k cannot be constant. In creating the data base, it was attempted to keep its structure simple and uniform without sacrificing its general validity.

format), (2) definition length (an integer), (3) type of entry (an integer), (4) sublist name.

This somewhat unexpected, though not particularly surprising,

words are relatively scarce in the last third of the dictionary. The project has been informative in another respect, which is not unimportant: it has given an in

of runs) and 11 on the analysis. Although some debugging had to be done, this was generally insignificant as compared to the total effort, so that nearly all the 14 hours has been useful running time.

Lyons, J. (1969). Int

After a word has been processed, it is deleted from the Waiting List. In the very first run for a given N value, i.e. if K7TSCT equals 1, the routine creates an empty list for the so-called

respectively, in n succeeding runs. If any one of these produce

The procedure, however, will produce the numerical relationships desired. The existing data base, together with its reduced versions, has been stored on magnetic tape and is ready to be used as input into the proposed procedure.

(c) exemplification puts the defined unit in functional combination with other units; (d) a gloss is an explanatory or descriptive comment related to the dictionary entry; it may also state similarities to and differences from other entries.
null
First, we review critically the problems of meaning and its representation, and the questions relating to lexical definitions. Logical meaning applies to such attempts to deal with meaning as symbolic logic and mathematics. The meanings with which the signals of such systems correlate are unique outside-world referents or unique meanings within the logical system that eventually have outside-world referents. General-semantic meanings are also unique in their reference to the outside world, but the semanticists are less stringent in scope than the logicians. Nevertheless, their scope is an idealized language, much more limited than ordinary language. Communication-theory meaning is equivalent to the amount of information that can be transmitted per unit time in a communication system.
null
Main paper:
Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
605
0
null
null
null
null
null
null
null
null
dac8d37e05fdc30baab1e66753f470102c2e346b
219307744
null
Natural Semantics in Artificial Intelligence
This paper discusses human semantic knowledge and processing in terms of the SCHOLAR system. In one major section we discuss the imprecision, the incompleteness, the open-endedness, and the uncertainty of people's knowledge. In the other major section we discuss strategies people use to make different types of deductive, negative, and functional inferences, and the way uncertainties combine in these inferences. Imprecision can occur either in memory or in communication. SCHOLAR can have precise values or fuzzy values stored, and its procedures can, to some extent, deal with fuzzy questions when precise values are stored, and with precise questions when fuzzy values are stored. Embedding allows information to be specified in the data base to any level of detail or precision. But SCHOLAR only communicates the most important information on any topic (as measured by importance tags), unless more information is requested. It should also be possible, by using importance tags, to adjust what information SCHOLAR communicates in accord with the sophistication and interests of the listener. Inference strategies that are appropriate when the complete set of object attributes, or values, is known (i.e., in a closed world) do not apply when knowledge is incomplete (i.e., in an open world). There are a variety of uncertain inferences that people use to circumvent the holes in their knowledge, which are being programmed in SCHOLAR. There is a set of transitive relations (superordinate, superpart, similarity, proximity, subordinate, and subpart relations) that people frequently use to make deductive inferences. Currently SCHOLAR only handles superordinate inferences (e.g., the Llanos has a rainy season because it is a savanna) and superpart inferences (e.g., the language in Rio is Portuguese because Rio is part of Brazil). Deductive inferences can be more or less certain (similarity inferences are like superordinate inferences, but less certain) and can have restrictions on their use (only certain attributes transfer on superpart). When knowledge is incomplete, it is not safe to assume that something is not true just because it is not stored. Thus an inference is necessary to decide when to say "No" and when to say "I don't know." There is a complicated set of strategies in SCHOLAR to find various kinds of contradictions that people use to say "No." If a contradiction cannot be found, another negative inference, called the "lack-of-knowledge" inference, is tried. When enough is known about an object, it is possible to conclude that something is not true about that object on the grounds that if it were true, it would be stored. Another class of uncertain inferences depends on ill-defined knowledge of functional determinants, e.g., that climate depends on latitude and altitude. Different ways that people use functional knowledge involve functional calculations (e.g., if a place has a particular latitude, it probably has a particular climate), functional analogies (e.g., if a place is like another place in latitude and altitude, it probably has the same climate), and answers to Why questions (e.g., a place has a particular climate because of its latitude and altitude). Different inferences can combine in different ways. Sometimes one strategy may call another strategy to find an answer. When different inferences independently reach the same or different conclusions, they combine to increase or decrease certainty. The programming of uncertain inferences is necessary to make computers as clever and as fuzzy-thinking as people.
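The two deductive inferences SCHOLAR currently handles can be sketched over a toy network. A minimal illustration of the idea only, not SCHOLAR's data base or its relation names; the facts are the paper's two examples.

    # Toy semantic network: node -> {relation or attribute: value}.
    NET = {
        "savanna": {"rainy season": True},
        "llanos":  {"superordinate": "savanna"},
        "brazil":  {"language": "Portuguese"},
        "rio":     {"superpart": "brazil"},
    }

    # Attributes assumed to transfer along the superpart relation;
    # the paper notes that only certain attributes do.
    SUPERPART_OK = {"language"}

    def infer(node, attr):
        """Look attr up on node, inheriting along superordinate links,
        and along superpart links for attributes that transfer."""
        while node is not None:
            props = NET.get(node, {})
            if attr in props:
                return props[attr]
            nxt = props.get("superordinate")
            if nxt is None and attr in SUPERPART_OK:
                nxt = props.get("superpart")
            node = nxt
        return None

    print(infer("llanos", "rainy season"))  # True: the Llanos is a savanna
    print(infer("rio", "language"))         # 'Portuguese': Rio is part of Brazil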
{ "name": [ "Carbonell, Jaime R. and", "Collins, Allan M." ], "affiliation": [ null, null ] }
null
null
null
1974-09-01
0
9
null
null
null
null
null
We are only trying to develop some insights, without attempting to be exhaustive. More questions will be raised than answers provided. There are many observable things people do that we do not ... The data base is a modified and extended network a la Quillian and has a rich internal structure with a well-defined syntax. Dialogue with SCHOLAR takes place in a subset of English that is limited mainly by SCHOLAR's currently primitive syntactic capabilities. In tutorial fashion, the system uses its semantic network to generate the material it presents, the questions it asks, and the corrections it makes. ... and soil refers to topography), APPLIEDTO (color applies to things, ...) ... decide what is relevant to say at any given time. In the rest of this paper, we will discuss how we are using SCHOLAR to cope with some of the problems in natural semantics. However, there are still many natural-semantics problems we have not touched. In this section we discuss some aspects of natural semantic information and its relation to artificial intelligence. Imprecise language is an essential characteristic of human ... "Is the Chaco the cattle country?" "I know the cattle country is down there. I think it's more sheep country. It's like western Texas, so in some sense I guess it's cattle country. And the northern part of Argentina has a large sort of semi-arid plain that extends into Paraguay. And that's a plains area that is relatively unpopulated. Because it's pretty dry." If SCHOLAR knows enough about a place to know about oil if it were at all important, then it can infer that Uruguay probably has no oil. The inferential processes described can combine in a variety of ways. For instance, contradictions can combine with deductive inferences. SCHOLAR will answer a question like "Is the Atlantic orange?" with "No, it is blue," because it finds blue stored with the superordinate, ocean.
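The combination of a contradiction search with the lack-of-knowledge inference, as in the Atlantic and Uruguay examples above, reduces to a small decision procedure. A minimal sketch under our own simplifications: the WELL_KNOWN set stands in for SCHOLAR's importance-tag reasoning, and the data are invented from the paper's examples.

    # Toy network reusing the inheritance idea from the previous sketch.
    NET = {
        "atlantic": {"superordinate": "ocean"},
        "ocean":    {"color": "blue"},
        "uruguay":  {"capital": "Montevideo"},
    }

    # Objects known in enough detail that a missing important attribute
    # counts as evidence of absence (a stand-in for importance tags).
    WELL_KNOWN = {"uruguay"}

    def lookup(node, attr):
        while node is not None:
            props = NET.get(node, {})
            if attr in props:
                return props[attr]
            node = props.get("superordinate")
        return None

    def answer(node, attr, value):
        stored = lookup(node, attr)
        if stored == value:
            return "Yes."
        if stored is not None:
            return f"No, it is {stored}."   # contradiction found
        if node in WELL_KNOWN:
            return "Probably not."          # lack-of-knowledge inference
        return "I don't know."

    print(answer("atlantic", "color", "orange"))   # No, it is blue.
    print(answer("uruguay", "oil", "plentiful"))   # Probably not.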
Main paper:
Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
605
0.014876
null
null
null
null
null
null
null
null
f2426925cdb5d7017c618a7feddd762ce9241dd3
219301108
null
The Lexical Subclasses of the Linguistic String Parser
In addition to the Noun, Adjective and Verb classes, the other major classes are Adverb, Pronoun, Quantifier, Article, Subordinate Conjunction and Preposition. Coordinate Conjunctions and comparative connectives are treated individually.
{ "name": [ "Fitzpatrick, Eileen and", "Sager, Naomi" ], "affiliation": [ null, null ] }
null
null
null
1974-09-01
0
0
null
null
null
Object Attributes of the Verb (Continued)

This paper defines the 109 adjective, noun and verb subclasses of the NYU Linguistic String Parser (LSP). The subclasses have been treated here in such a way that they can be used as a guide for classifying new words for the lexicon and as a linguistic reference tool. Each entry below provides a definition of the subclass, a diagnostic frame, sentence examples, and a word list drawn from the lexicon of the computer grammar (ca. 10,000 word entries). The subclasses are defined in terms of string grammar. In string analysis, a sentence is decomposed into an elementary sentence, or center string, and adjunct strings. In a string, each word class may be preceded or followed by left or right adjunct strings, and the center string as a whole may have adjunct strings which precede or follow the center string or occur at interior parts of the string. A string grammar makes restrictions as to which subclasses can co-occur. The subclass definitions, therefore, are based mainly on these occurrence possibilities (e.g., a count noun is specified as a noun which cannot occur without a preceding article). More precisely, the entire computer grammar consists of a set of approximately 200 context-free (BNF) definitions, a set of about 250 restrictions, and a word dictionary. The BNF definitions define the center and adjunct strings of the language as well as sentence nominalization (embedded sentence) strings which may occur in subject, object or complement position. In parsing a sentence, once an element of a string (e.g., SUBJECT, VERB, or OBJECT) has been identified in the sentence, restrictions are invoked to test various properties, including the subclasses of the words within this element or within this element and an element previously identified. When a word is classified for the LSP lexicon it must be assigned to the syntactic classes (N, V, etc.) which appear in the context-free definitions and to the specific subclasses (e.g., count noun) which are tested for by the restrictions. The frames and definitions are a compact statement of these constraints. For reference to the computer grammar, we have used the code names of strings and restrictions, but the text can be read independently of the referenced material. The strings have roughly mnemonic names. An explanation of some of the mnemonics used in the text is included in the reference guide which follows this introduction. The restrictions referred to are of several main types: agreement restrictions (AGREE), noun phrase restrictions (N), position restrictions (POS), quantifier restrictions (Q), selection restrictions (SEL), restrictions on sentence embedding (SN), and WH-string restrictions (WH). The name of each restriction is preceded by a W or D and followed by an integer, e.g. WAGREE4. While a subclass is precisely defined by its appearance in the restrictions of the grammar, a person who is classifying words for the lexicon may need additional criteria in order to capture the intent of the subclass. This is particularly true in defining the verb subclasses which specify the object strings with which a verb can occur (the OBJLIST of the verb).
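To make the division of labor concrete (BNF definitions propose a parse; restrictions consult the word dictionary to accept or reject it), here is a minimal sketch of one such test, loosely modeled on the count-noun restriction (WN9) defined later in this paper. The record layout, data and function names are our own illustrative assumptions, not the LSP implementation.

    # Minimal word-dictionary records: major class -> set of subclasses.
    LEXICON = {
        "book":  {"N": {"NCOUNT1"}},   # count noun: needs an article in the singular
        "blood": {"N": set()},         # mass noun
    }

    def wn9(noun, has_article):
        """Toy restriction: reject a singular NCOUNT1 with no article."""
        return has_article or "NCOUNT1" not in LEXICON[noun]["N"]

    print(wn9("book", has_article=False))   # False: '$ Book fell.'
    print(wn9("blood", has_article=False))  # True:  'Blood flows.'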
Here the frames and restrictions may not suffice to distinguish occurrences of the words as instances of the subclass from other possible occurrences covered by the grammar. For example, it is important to distinguish an object string occurrence of SVINGO in "Money kept people working overtime" from a non-object-string occurrence of the same word-class sequence, e.g., one consisting of a noun with its adjunct, such as N + reduced relative VINGO in "They fired people working overtime." Of course, some verbs will have ambiguous occurrences, e.g., keep in the first example. It would be incorrect, however, to class fire as occurring with a SVINGO object on the basis of the second example. We have therefore used additional criteria in defining the object strings in order to clarify the intent of particular subclasses. The criteria used are:

(1) Excision. In an occurrence of an element with its adjunct in a sentence, the adjunct can be excised leaving a well-formed sentence unchanged in meaning and selection from the original sentence (except for detail added by the adjunct). Thus, we can test whether a word sequence in a sentence is an object string occurrence by excising the portion which might be an adjunct. If the remaining sentence is either different in grammaticality, meaning or selection from the original sentence, then the sequence as a whole is considered an object string occurrence. For our purposes, if the sequence is an object string occurrence, then the verb with which it occurs must be subclassed for that object string. For example, line, show and carry must be subclassed as occurring with the particle string DP, and walk not, since: They lined up. $ They lined. / He showed off. He showed. (difference in meaning) / He carried on. The point carried. (difference in selection) / He walked on. He walked. (no difference in grammaticality, selection or meaning)

(2) Understood reference. If a given noun in one sequence-occurrence is understood as referring to a particular noun N1, and in a different occurrence as referring to an N2, the two occurrences must not be considered as instances of the same string. For example, since messengers in (1) refers to they and messengers in (2) refers to the boys, the two occurrences of as + N must not be considered as the same string: (1) They served the boys as messengers. (2) They treated the boys as messengers.

(3) Paraphrase. If a semantic contrast can be found in otherwise identical sequences, then these sequences cannot be considered as instances of the same string when subclassing a verb. (The term "sequence" applies to word sequences the structural description of which is under discussion.) For example, the as which is equivalent to 'in the capacity of' in (3) functions as part of the object string ASOBJBE, while the as which is equivalent to 'when' in (4) functions as part of an adjunct which does not restrict the verb: (3) John served as a lieutenant. (4) John changed as a lieutenant. Due to the difficulty of judging the appropriateness of a paraphrase, however, we have used this criterion sparingly.

As we have noted, the frames and definitions precisely reflect the use of the major classes and subclasses in the presently implemented string grammar. However, it should also be noted that this grammar, and the associated lexical categories, have been defined so as to be consistent with a subsequent stage of transformational analysis which is currently being implemented.
In some cases, the same string form has several transformational sources; where this affects the dictionary classification, we have noted it. Something should be said about the form of the dictionary entries as they appear in the computer lexicon. Each word is classified for all its major class occurrences (N, V, etc.) and its subclasses within each major class. The classification is based on the usage of the word in the language as a whole, not its use in a particular text. However, purely colloquial and literary uses have not been covered because of the intended application to scientific texts. The classifications of the words are arranged in a hierarchical structure: the major classes may have subclasses and the subclasses in turn may have subclasses. For example, the adjective clear, which can occur as the predicate of a sentential subject, is in the subclass ASENT1. The particular types of sentential subjects clear occurs with (WH and THAT embeddings) require that it be classified in the two subclasses AWH and ATHAT of ASENT1. This part of the lexical entry appears as follows: CLEAR ADJ: (ASENT1: (AWH, ATHAT)), or alternatively: CLEAR ADJ: .10 / .10 = ASENT1: (AWH, ATHAT), where the particular line number assigned (.10) is arbitrary. Where this type of further subdivision of a subclass is necessary, a sample dictionary entry is provided along with the definitions and frames below. It should be noted that while the entries in the lexicon are by word rather than stem, the word entries based on a particular stem can refer to portions of a basic entry which they share in common, e.g., the object list of a verb (OBJLIST) is specified once for all forms of the verb (tensed verb tV, present participle Ving, past participle Ven and infinitive V). The notational conventions used in the subclass definitions and frames are as follows: $ = an ungrammatical sequence; the underlined term = the class being subclassed in the frame or a particular lexical item used in the frame; the double underlined term = the class being subclassed in the frame where the frame also contains a particular lexical item; (X) = in a frame, an optional element; (S) = in a definition, a further subdivision of a subclass; T = article; D = adverb; OBJ = a cover term for all the object strings (see object string reference guide); SN = an embedded sentence of the following types: THATS (that John was here), FORTOVO (for Mary to go), TOVO (to live), SVINGO (them working overtime), C1SHOULD (that John be here), SNWH (whether/why/how ...). It should also be noted that the specified frame which delimits a word is not the only frame in which that word can occur; it serves merely as the test frame when classifying words. The present paper is an outgrowth of ongoing work on the LSP lexicon throughout its various implementations and applications since 1965. It draws particularly on a previous write-up of the LSP grammar (N. Sager, "A Computer String Grammar of English", String Program Report No. 4, Linguistic String Project, New York University, 1968), diagnostic frames prepared for LSP use by Barbara Anderson, and classification work by many members of the LSP staff over the years. For a recent description of the LSP system see R. Grishman, N. ...
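The hierarchical entry format just shown (major class, subclass, sub-subclass) maps naturally onto nested dictionaries. A minimal sketch in Python, not the LSP's own storage format; only the clear entry is transcribed from the text.

    # The entry for 'clear' in the format shown above:
    #   CLEAR ADJ: (ASENT1: (AWH, ATHAT))
    LEXICON = {"CLEAR": {"ADJ": {"ASENT1": {"AWH": {}, "ATHAT": {}}}}}

    def has_subclass(word, *path):
        """Walk the class/subclass hierarchy of a word's entry,
        e.g. has_subclass('CLEAR', 'ADJ', 'ASENT1', 'AWH')."""
        node = LEXICON.get(word, {})
        for step in path:
            if step not in node:
                return False
            node = node[step]
        return True

    print(has_subclass("CLEAR", "ADJ", "ASENT1", "AWH"))     # True
    print(has_subclass("CLEAR", "ADJ", "ASENT1", "AFORTO"))  # False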
Object string reference guide:
DP1 = particle (e.g. carry on)
OBJECTBE1 = OBJBE + verbal objects of be
DP1PN = DP1 + PN
DP2 = DP + N
DP2PN = DP2 + PN
DP3 = N + DP
DP3PN = DP3 + PN
DP4 = of-permutation of DP3
DP4PN = DP4 + PN
DSTG = adverb string
FORTOVO = for + Subject + to + V + Object
NA = N + Adjective
NASOBJBE = N + as + Object of be
ND = N + Adverb
NN = N (indirect object) + N
NPN = N + PN
NPSNWH = N + P + wh-complement
NPSVINGO = N + P + SVINGO
NPVINGO = N + P + VINGO
NPVINGSTG = N + P + VINGSTG
NSNWH = N + SNWH
NSTGO = object N
NSVINGO = N's + VINGO
NTHATS = N + that + ASSERTION
NTOBE = N + to + be + Object of be
NTOVO = N + to + V (infinitive) + Object
NULLOBJ = null
PN = prepositional phrase
PNHOWS = PN + how + ASSERTION
PNN = PN + N (inverted NPN string)
PNSNWH = PN + SNWH
PNTHATS = PN + THATS
PNTHATSVO = PN + that + Subject + tenseless V + Object
SVO = Subject + tenseless V + Object
THATS = that + ASSERTION
TOVO = to + tenseless V + Object
VENO = past participle + Object
VINGO = Ving + Object
VINGOFN = (N's) Ving + of + Object
VINGSTGPN = VINGSTG + PN

I. Adjective Subclasses.

AASP. Frame: N be Adj to V OBJ. An adjective is in AASP if it occurs only with the non-sentential (non-SN) right adjunct to V. Adjectives which occur with both non-sentential and sentential right adjuncts are not in AASP (see ASENT1, ASENT3). Examples: It is to be assumed that John left. She is due to arrive at five. She was right to object. NOT AASP: John is certain to go: John is certain that he will go. John is not certain whether to go. (ASENT1) John is eager to go: John is eager for Mary to go. He is anxious to leave. (ASENT3) WORD LIST: able, fit, free, quick, ready, set, slow.

AINPA. Frame: P __ (P = in or at). An adjective is in subclass AINPA if it occurs in the adjective position in the sentence adjunct string P A, e.g.: in general, at present, in particular (WPOS11). The particular P must be specified for each adjective. Examples: In general, we can maintain the following. We do not, at present, know the answer. We cannot say, in advance, what tomorrow will bring. We didn't know what to think about her statement at first. Dictionary Entry: GENERAL ADJ: (.10), ..., AINPA: ('IN'). WORD LIST: advance (in), best (at), first (at), full (in), general (in), last (at), least (at), particular (in), present (at), short (in).

AINRN. Frame: N __ X (X is not an adjunct or conjunct of the adjective). An adjective is in the small subclass AINRN if it can occur as a single-word right adjunct of a noun (WN50). Examples: the people present. The figure above illustrates this point. The people absent represent the dissenting ... Non-AINRN adjectives in RN ...

APREQ. Frame: T __ Q N. An adjective is in APREQ if it can occur before Q N: an additional five people; the following three items. Examples: An additional five people were found. The following three items were mentioned. Please make the next several payments on time. We chose the first few people to welcome him. The next ten people will constitute the control group. The occurrence of superlatives before Q N (the tallest three boys) is accounted for by a separate statement in WN5; therefore, superlative forms should not be listed as APREQ. WORD LIST: above, additional, another, best, bottom, first, good, last, necessary, next, other, own, particular, previous, representative, same, top, usual, very, wrong.

ASCALE. Frame: Q N __ (Adj is not comparative). An adjective is in ASCALE if it can occur to the right of the measure sequence Q N in which N is in subclass NUNIT (inches, feet, pounds, years, etc.) (WQ2), e.g., long in: The line is 10 inches long.
Examples: The line is ten inches long. This is a ten inch long line. He is five years old. He is a five year old child. ASCALE includes long, wide, deep, broad, tall, thick, high, old. Since both ASCALE and non-ASCALE adjectives can occur in Q N Adj ... Dictionary Entry: ...

ASENT1. An adjective which can occur as the predicate of a sentential subject is in ASENT1: It is certain that he sold books. An adjective which also occurs with a sentential right adjunct (John is certain that he sold books) is in ASENT3 as well; such adjectives should be listed as both ASENT1 and ASENT3. ASENT1 is subdivided according to the type of SN string with which the particular ASENT1s occur; i.e.: 1) ASENT1: (AFORTO): For us to leave now would be easy. It would be easy for us to leave now. 2) ASENT1: (ASHOULD): That he return is imperative. 3) ASENT1: (ATHAT): That they lied is obvious. It is obvious that they lied. 4) ASENT1: (AWH): Whether he will come is uncertain. It is uncertain whether he will come. Dictionary Entry: CLEAR ADJ: .10 / .10 = ASENT1: (AWH, ATHAT). ASENT1: (AFORTO) is further subdivided into three classes according to the type of extraction from the embedded sentence which occurs with a particular adjective; viz.: 1) ASENT1: (AFORTO: (OBJEXT)), which occurs in N2 be __ (for N1) to V: The problem will be easy for John to solve. For John to solve the problem will be easy. 2) ASENT1: (AFORTO: (SUBJEXT)), which occurs in N1 be __ to V OBJ: John was kind to invite me. For John to invite me was kind. 3) ASENT1: (AFORTO: (NOEXT)), which occurs with neither type of extraction: For John to write a letter now would be curious. $ He is tall that they passed his doorway.

ASENT3 is subdivided according to the type of SN string within which the particular ASENT3s occur; i.e.: 1) ASENT3: (AFORTO): I would be happy for you to come. 2) ASENT3: (ASHOULD): I am insistent that you go alone.
3) ASENT3: (ATHAT): I am certain that John will come. I'm grateful that the stuff arrived on time. We're happy that you can come. They were eager for the speaker to address the crowd. 4) ASENT3: (AWH): He is doubtful whether the plans will come off. I'm not sure whether they will come. We are uncertain why he left. Dictionary Entry: HAPPY ADJ: .10 / .10 = ASENT3: (AFORTO, ATHAT).

Examples: I will see him next year. He looked better this time. WORD LIST: last, next, this.

COMPARATIVE. An adjective is in the subclass COMPARATIVE if it can occur in the environment N1 be __ than N2: John is happier than Bill. $ John is tender than Bill.

II. Noun Subclasses.

AGGREGATE. A singular noun in AGGREGATE can occur with a plural verb or pronoun: The group has changed its mind. The group have changed their minds. The public disapprove of it. A minority is in favor of the action. A minority are in favor of the action. It can also occur as the subject of collective and reciprocal verbs (WAGREE1): The group gathered. $ He gathered. For N:HUMAN apposition (my friend John) see NAME and NCOUNT3.

NCLASSIFIER. Frame: The N1 N2 be ... (N2 is not NHUMAN). An NCLASSIFIER is in either 1) NCLASSIFIER1, which includes metalinguistic words that introduce terminology, e.g. term, symbol, or 2) NCLASSIFIER2, which includes nouns naming a subject-matter area (supplied by the user), e.g.: element, drug, acid, enzyme, extract, hormone, ion, mineral, coefficient, factor, etc. NCLASSIFIER1: The symbol S is interpreted as the subject of a sentence. Linguists often confuse the terms string and sequence. The expression ... will be used to refer to the grammar in Appendix II. NCLASSIFIER2: The element hydrogen is the lightest substance.

Certain nouns occur as the objects of collective verbs such as gather and accumulate: The shelf will gather dust. $ The shelf will gather a book. These books will only gather dust. While he was away, the fortune accumulated. He accumulated a fortune. The cell accumulates sodium. Cf. AGGREGATE. WORD LIST: acid, alcohol, ammonium, blood, calcium, change, digitalis, down, energy, evidence, fluid, hydrogen, interest, knowledge, plasma, salt, sweat.

NCOUNT1. Frame: A __ tV OBJ, and not __ tV OBJ (WN9). Examples: A book fell. $ A blood flows. A series of coincidences occurred. Nouns not classified as NCOUNT1 (i.e., mass nouns and many abstract nouns) can begin a headless relative clause (DN5.1): The reaction the drug produces ...

NCOUNT2. Frame: P __ (WN9). An NCOUNT1 which, as the object of a specified preposition P, occurs without a preceding article. The particular P which occurs with a given NCOUNT2 is specified in the dictionary entry of that NCOUNT2. Example: He came by car. Dictionary Entry: CONCLUSION .11 = NCOUNT1, NCOUNT2: ('IN').
WORD LIST: amount (in), answer (on), approach (in), assumption (in, by), bed (in), case (in), charge (in), conclusion (in), contract (against, by, from, in, into, on), course (in, of, on), degree (in, of), end (without), estimate (according to, beyond, by), example (by, for), foot (on), focus (in, into, out of), gross (in), hand (at, by, in, on, out of), kind (in), length (at, in), limit (beyond, within, without), line (in, on, off), mark (of), measure (beyond, to), number (according to, beyond, by, in, of, without), parallel (in, without), phase (in, out of), place (according to, in, into, of, out of, on), point (on), position (in), process (in), question (beyond, in, into, under, without), ratio (in), reach (beyond, in, into, out of, within), show (for, in, on), significance (of), turn (in), view (from, in, into, on), way (by).

NCOUNT3. Frame: N be __; N tV N __ (WN9). NCOUNT1s which can occur without a preceding article after be or in the object position in SOBJBE and OBJBE (see OBJLIST: (SOBJBE), (OBJBE)). Examples: He is president. I am treasurer. He is chief investigator. We elected him president. They appointed me treasurer. He remained president. WORD LIST: collector, director, head, investigator, judge, president, secretary.

NHUMAN. Frames: __ N (indirect object); N who/whom S. An NHUMAN noun can occur as the first noun in the string N N, i.e., as indirect object (WPOS22): She bought the boy a book; or as the host of a right-adjunct WH string (relative clause) headed by who/whom (WWH2) (cf. AGGREGATE). Examples: She bought the boy a book. She wrote the workers a letter. He showed her relations the present. The man who ate the cheese left. The man whom you saw was Bob. She needs a friend who can care for her.

NLETTER. A new subclass which contains all the letters of the English alphabet. It is used in the NQ string as a variant of Q (WN12).

NONHUMAN. Frame: $ __ tV, where V requires a human subject. NONHUMAN nouns cannot occur as the subject of verbs which require a human subject (WSEL2), e.g. believe, deny, discover, know, read, and other such verbs (e.g.: hand, laugh, long, skin). Cf. NOTNSUBJ. Examples: $ The clock believes that this is so. $ The account knows that he is wrong. $ The apparatus laughed. WORD LIST: ability, act, assumption, balance, can, day, dose, enzyme, feature, frog, gland, hypothesis, interaction, junction, London, mean, need, organ, pathway, peak, position, property, range, saturation, tension, use, wonder.

N:PLURAL. Frame: These __ tV OBJ, and not This __ tV OBJ (WAGREE4). Examples: These groups answered quickly. $ This groups answered quickly. These men love Mary. $ This men love Mary. WORD LIST: abilities, ages, combinations, data, effects, groups, measures, mucosae, observations, parallels, problems, rises, seconds, tries, uncertainties, uses, valencies, wants, years.

NPREQ. A noun which is not also a proper name is in NPREQ if it occurs as the N of the sequence ... The words involved include ...ness nouns, age, weight, volume, area, and perhaps a few others. These words occur as the N in the sequence Q N1 P N, where N1 = NUNIT (inches, years, etc.)
and Q = a quantifier, including numbers (WQ3). Examples: The line is two inches in width. He is five years of age. The area measures twenty feet in width. The rectangle is two inches along the diameter. In the case of length sequences (two inches), a class of nouns, also classified as NSCALE, can occupy the place of length in P NSCALE: two inches in diameter, in circumference, along the diagonal, etc. (The adverbs across and around can also occupy the P NSCALE position.) WORD LIST: age, altitude, area, breadth, height, intensity, length, luminosity, strength, volume, wavelength, width, circumference, diameter, thickness.

NSENT1. Frame: N be SN; It be P __ SN. NSENT1 is subdivided according to the type of SN string with which the noun occurs: 1) NSENT1: (AFORTO): The plan for him to go. His attempts to leave. 2) NSENT1: (ASHOULD): The demand that salaries be raised. 3) NSENT1: (ATHAT): The fact that they enrolled. 4) NSENT1: (AWH): The question whether to vote. Examples: The demand that salaries be raised was rebuffed. The plan for him to go to college was foremost in their minds. His attempts to leave were noticed. The fact that they enrolled is known.

NTIME. Examples: Yesterday's meeting was cancelled. Yesterday I went to the movies. Sunday he will run the race. NOTNSUBJ: (NSENT1): $ The fact cares. NOTNSUBJ: (NTIME1): $ The week designed the plan.

We split open the package marked "fragile".

OBJLIST: (ASOBJBE). The object string ASOBJBE must be distinguished from the adjunct sequence as + NSTGO. The two may be distinguished by the fact that the as of the ASOBJBE string is paraphrasable as 'in the capacity or character of', e.g.: They served as messengers = in the capacity of messengers; whereas the as of the adjunct sequence is paraphrasable as 'when' or 'while', e.g.: They served as young men = when they were young men. The two may also be distinguished by the fact that in sentences containing the ASOBJBE string, the primary stress of the sentence falls on the head noun of the noun phrase functioning as the OBJBE (Enzymes function as catalysts), whereas in sentences containing the adjunct sequence, the primary sentence stress falls on the verb (John changed as a lieutenant). Examples: They served as messengers. Enzymes function as catalysts. He ... as a bartender. This idea originated as a vague possibility. That invention began as a joke. John applied as a mechanic. He will continue as a private. He ran as a sprinter. The reaction occurred as an after-effect. The fact exists as an anomaly. NOT OBJLIST: (ASOBJBE): John changed as a lieutenant. John ate well as a young man. I didn't go to school as a child. Note: a large number of verbs occur with both the object string and the adjunct, e.g., serve (above): They served (the king) as messengers. WORD LIST: appear, apply, arise, begin, continue, enter, exist, fail, function, go, occur, originate, participate, remain, train.

OBJLIST: (ASSERTION). Frame: SUBJ tV (that) S. The verbs classified as OBJLIST: (ASSERTION) are a subset of the verbs classified as OBJLIST: (THATS): She knows John is an "A" student. She knows that John is an "A" student. (know OBJLIST: ASSERTION, THATS) $ She reported John is an "A" student. She reported that John is an "A" student. (report OBJLIST: THATS) Examples: I assume you will arrive on time. They feel they are being abused. He believes the earth is flat. She discovered he was an excellent cook. We said we knew a better solution. It seems he is happier away from home.
tional treatment of forms like It seelns that he was here i,s to define a small subclass, VSENT-I ( appear, happen, remain, seem, POT OWLIST: (ASSERTION : mrn cmt) , ~vhich can take OWLIST: (ASSER-2 He added John \\ as a witness. TIOh? , (THATS) where applicable, provided 3 He argped their approach was metaphysical.the subject of the VSENT4 is the expletive It.-2 She reported John was an RA"tudent.WORD LIST: appear, assume, believe, discover, feel, figure, find, imply, hot\., learn, maintain, mean, note, say, se?em, sense, snow, state, suggest., suppose, t~~, understand.Frame : Verbs which occur with the object siring ASTG each occur with a limited set of adjectives in the adjective position:This rings true.3 This rings red ..That story rings true.This limitation on the set of adjedtives She remained red in the face.which occur with verbs spebified a s OBJLIST:Theyfell sick.(ASTGS distinguishes thew verbs from those specified a s OWLIST: (OBJBE) for which no He lav still.. .John tui-ned purple.Math comes easy to him. I clon~and that 11c1 corn ch.Tllc plan provirlcs that hc. Iw o n timv It nccc~ssitatcs t . h t h c l~c on time.iYOIiD LIST: ask, demand, dircct , n~can , movc, orrlcl-, pi.ct'cl-, propose, provicle, recluirc, suggcst .It is neocssary to define this a s an object string fin place of treating it a s an adverbial adjunct plus Sh3 since some scqucnccs haye no analysis i41 terms of an SN s t r i n g plus optional adjunct , c .g. :I Sound oust iijhethcr lic \+.as corning.Ilc pointed out that this was thc best 1l.e pointed out that this \\.as t h e i~c s t approach. approach.8 He pointed that t h i s was t h c hest approach. WOIiD LIST; b r i n g (out, up), figure (out) find (out) , leave @I, ouf) , lct (011) , rnalic (out) u~al-I; (da~i~n), point (out), urrite flown). ,C)BJLIST: (l)Pll:Applics 1 , strings jn which thc aclverbpreposition (or particle), IIP, cannot be analyzed as an adverbial adjlu~ct, c . g . :They lined up. l \ e y lined. C k ; if the verb also occurs witl~out a I>P o r other object, then it occurs in a different sense t h a~ with the D P , a s is often indicatcd by a difference in subject selection: Jolm carricd on. 8 John carried.The point carried.Frame :N tV DP.-They carried on.He showcd off.W e give up.The plane tookoff.Sfle drove in.H e \vent out.a s OBJLIST: @PI) a r c the result of 'mid-They walked down. dling', i.e., they a r e related to a class of 1 ' N D P constructions:Dictionary Entry:They blew the house up. The llouse blew up.Thc particular* D P must be specified for each ycrb.TV: (OBJLIST: .3 . . . .) DP1: .16, . . . .LIST: act (tlp), add (up) , back doi in^, off, out), come (shout, around, to, up) , carry (on), clear but, up), cool (down, off), couple up) , cover Np) , double (back, up), dramr (back, up), dry (but, up) ! fa1 i (away, in, off, out) , follow (tlirougli) , give (in, out, up) , level {off, out) , look-(up) , lose (out), measure (up), phase (out), run (down, on, out, over, up), show (off, up), sleep (in, oirer) , slow (down; up) , split (away4 off, up), start (in, out, up), stop (by i n , off, over, up), take IIe went down to Washington.The particular Dp and P must l~c specified for each verb.He walked around to the bus statioq.He sped on past the exit. 
hand (around, back, down, in, on, out, over), lead (in), leave (in, out), level (down, off, out), line (up), live (down), look (over, up), make (out, over, up), mask (down, off, up), move (in, out), paper (over), point (off, out, up), pump (in, off, out, up), read (over), reason (out), regain (back), rule (out), save (up), show (in, off, out, up), sleep (off), slice (off), slow (down, up), smooth (away, back, down, off, out), space (out), split (away, off, up), stop (up), store (up), strip (off), switch (off, on), take (off, out, up), think (out, over), try (on, out), turn (down, off, on, over), use (up), warm (up), wash (away, down, off), weigh (down), work (off, out, over), write (down, in, off, out, up).

OBJLIST: (DP2PN), (DP3PN), (DP4PN). Frames: N tV DP N PN (DP2PN); N tV N DP PN (DP3PN). Applies to strings in which the adverb-preposition (or particle), DP, cannot be analyzed as an adverbial adjunct; i.e., mix up the last name with the first ≠ mix the last name with the first + up. As the object of Ving in certain strings where Ving usually is followed by of N, there is an object form of the DP N PN string where the of occurs between DP and N PN (the splitting up of the project into three parts). This form is DP4PN. Any verb which takes DP2PN takes all the variants: OBJLIST: (DP2PN), (DP3PN), (DP4PN). The particular DP and P must be specified for each verb. Examples: I mixed up the last name with the first. I mixed the last name up with the first. The mixing up of the last name with the first. He split up the project into three parts. In the WORD LIST, the arrow (->) follows the set of DPs specified for each verb and precedes the set of Ps specified for that verb. WORD LIST: add (in -> with), bind (up -> with), call (away -> to), chain (down, up -> to), divide (up -> with), end (up -> in, with), follow (up -> with), link (up -> to, with), pair (up, off -> with, into), play (off -> against), separate (out, off -> from), sign (over -> to), single (out -> for), take (up -> with), trace (back -> to), yield (up -> to).

Dictionary Entry: MOVE. TV: (OBJLIST: .3, ...), .3 = DP1PN: .18, ..., .18 = DPVAL: ('IN'), PVAL: ('ON'). WORD LIST: add (up -> to), build (up -> to), come (up, around, back -> to, with), double (up -> with), face (up -> to), feel (up -> to), fit (in -> with), go (along, down, in, off, out -> for, in, with), keep (away, up -> from, to), lead (up -> to), link (up -> to, with), live (up -> to), look (down, in, out, up -> for, on, to), measure (up -> to), own (up -> to), pair (up, off -> with), play (up -> to), put (up -> with), reach (out -> for), speak (out, up -> for), stand (up -> to, for), try (out -> for).

OBJLIST: (DP2, DP3, DP4). Frames: N1 tV DP N2 (DP2); N1 tV N2 DP (DP3; obligatory if N2 is a pronoun). DP2 may be distinguished from a prepositional phrase P N by the fact that the DP and N permute: He looked the number up. He looked up the number. whereas the P and N of the prepositional phrase do not permute: He looked up the shaft. $ He looked the shaft up. For some verbs which take DP N objects, the N position may be filled by a Ving string (They kept up their writing to the President).
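The DP2/DP3 permutation just illustrated (He looked up the number / He looked the number up) can be mimicked with a small generator. A minimal sketch: the particle table is a two-verb excerpt from the word lists above, and the pronoun restriction is the standard one (the DP3 order is obligatory for pronoun objects), stated here in simplified form.

    # Toy DP2 lexicon: base verb -> particles it takes (excerpt from the word lists).
    DP = {"look": {"up", "over"}, "send": {"back"}}

    PRONOUNS = {"it", "them", "him", "her", "me", "us"}

    def dp_variants(subj, base, past, particle, obj):
        """Generate the grammatical orderings of a DP2/DP3 object string.
        DP3 (N before particle) is always available; DP2 (particle before N)
        is excluded when the object is a pronoun ($ He looked up it)."""
        if particle not in DP.get(base, set()):
            return []                              # verb not subclassed for this DP
        variants = [f"{subj} {past} {obj} {particle}."]
        if obj not in PRONOUNS:
            variants.append(f"{subj} {past} {particle} {obj}.")
        return variants

    print(dp_variants("He", "look", "looked", "up", "the number"))
    # ['He looked the number up.', 'He looked up the number.']
    print(dp_variants("He", "look", "looked", "up", "it"))
    # ['He looked it up.']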
In the machine grammar, a Ving string is allowed freely in place of N in DP N, and is considered rare as a replacement of N in N DP. As the object of Ving in certain strings where Ving usually is followed by of N, there is an object form of the DP string where the of occurs between DP and N (the sending in of the entry). This form is DP4. Any verb which takes DP2 takes all the variants: OBJLIST: (DP2, DP3, DP4). The particular DP(s) must be specified for each verb. Examples: He sent back the gift. He sent the gift back. WORD LIST: act (out), add (in, on, up), ask (in, out, over, up), back (up), beat (up), bend (back, up), bind (down, off, over, up), block (in, off, out, up), bring (about, off, out, up), carry (out, through), clear (away, off, out, up), cool (down, off), cover (up), deal (out), divide (up), draw (back, down, in, off, out, up), dry (off, out), drive (in, off, out), eat (away, up), factor (out), figure (out), find (out), fish (out, up), fit (in), follow (up), give (away, back, in, out, over, up), ...

OBJLIST: (DSTG). Applies to small subclasses of verbs which occur with particular adverb subclasses, although a set of locative object strings is not in the present grammar. WORD LIST: compare, do, handle, head, lie, place, range, rate, tunnel.

OBJLIST: (FORTOVO). Frame: N tV for N to V (OBJ). The computational treatment of forms like It remains for us to make the final decision is to define a small subclass, VSENT4 (= appear, happen, remain, seem, turn out), which can take OBJLIST: (FORTOVO) where applicable, provided the subject of the VSENT4 is the expletive it. Note: to distinguish between FORTOVO and the object for N + to V (OBJ) where to V (OBJ) is an adjunct (He is looking for an assistant to aid him in his work), use there as the subject of the FORTOVO: He plans for there to be five people on the committee. I asked for there to be a proctor at the exam. Examples: I prefer for him to go to college. It remains for us to make the final decision. I plan for him to do it. I asked for there to be a proctor at the exam. He is longing for her to ask him. She moved for the meeting to adjourn.

OBJLIST: (NASOBJBE). Frame: N1 tV N2 as N3, where N3 is a predicate of N2. Examples: They served the king as messengers. He entered the army as a private. She interpreted it as a linguist. He ran the race as a sprinter. They treated him as a lackey. (SASOBJBE) We will consider John as our ... NOT NASOBJBE: They served the king as young men. (adjunct: = when they were young men) He discovered the enzyme as a student. Note: a number of verbs occur with both the object string and the adjunct sequence, e.g., serve (above). WORD LIST: begin, continue, enter, interpret, run, serve.

OBJLIST: (ND). Frame: N tV N D. Applies to strings in which the adverb cannot be analyzed as an adverbial adjunct: They treat them well/badly. $ They treated them. Or, if the verb also occurs with a noun object alone, it occurs in a different sense than with the N + D: He bore the news well. She set it down. She wears her age well. There is a selectional dependency between the verb and the adverb such that verbs specified as OBJLIST: (ND) can occur only with either locative adverbs and adverbs of motion (here, there, nearby, up, ...) or manner adverbs.

A majority of the verbs classified as OBJLIST: (NPN) show a selectional dependency between the verb and the preposition. This dependency helps to distinguish the object string N PN from the sequence noun object plus PN adjunct (e.g., They liberated the city on Sunday).
Many verbs can occur with both the NPN object string and the noun object plus PN adjunct, where the preposition is the same in both cases: They liberated the city from the enemy. (NPN) They liberated the city from motives of political advantage. (N + PN adjunct) Some NPN verbs enter into the NN transformation N1 tV N2 P N3 <-> N1 tV N3 N2, where N3 = NHUMAN or AGGREGATE. The particular P must be specified for each verb. Examples: One can transform X into Y. I emptied the water into the sink. He fastened the chain to the door. WORD LIST: accelerate (to), attract (to), add (to), apply (to), ask (into, to), associate (with), attribute (to), balance (against, on), beat (into, to), bring (into, to), catalyse (into), charge (to), clear (of), combine (with), correlate (with), demonstrate (to), deprive (of), direct (against, at, to, toward), enter (in), expel (from), give (to), identify (with), limit (to), make (of), obtain (from), pattern (after), present (to, with), slice (from, off), subject (to), take (from, to), turn (against, from, into, on, to), view (with).

OBJLIST: (NPSNWH). Frame: N1 tV N2 P SNWH. The particular P must be specified for each verb. The P is restricted in terms of the container verb, not in terms of the contained SNWH. This is evidenced by the fact that the P of NPSNWH does not permute to the end of the SNWH string, e.g.: $ John asked me what he should do about.

OBJLIST: (NPSVINGO). Frame: N1 tV N2 P N3 Ving (OBJ). As distinct from the object string NPVINGSTG, the N3 of NPSVINGO is not possessive: I asked him about John's having been there. (NPVINGSTG) I asked him about no one having been there. (NPSVINGO) I charge his acquittal to there having been no witnesses. If N3 is a pronoun, it is accusative (WPOS5). Note: to avoid confusion of the object string NPSVINGO with the sequence N P N plus a right adjunct Ving (He kissed Mary near the door opening on to the balcony), use the expletive there as N3: I asked him about there having been no witnesses. The particular preposition(s) must be specified for each verb (WPOS15). Examples: He attributes his success to there having been no competitors. He told us about there being no doubt in his mind. Dictionary Entry: ASK. TV: (OBJLIST: .3, ...), .3 = NPSVINGO: .16, ..., PVAL: ('ABOUT'). WORD LIST: ask (about), attribute (to), base (on, upon), brief (about, on), caution (about), center (on, about, around, upon), charge (to), compare (to, with), contact (about), contrast (to, with), correlate (with), deduce (from), identify (with), limit (to), make (of), question (about), relate (to), tell (about), trace (to).

OBJLIST: (NPVINGO). Frame: N1 tV N2 P Ving (OBJ). The noun object (N2) is understood to be the subject of Ving. The particular preposition(s) must be specified for each verb (WPOS15). Examples: I prevented him from ruining his health. I cautioned him against ruining his health.

OBJLIST: (NPVINGSTG). A verb classified as occurring with the object string NPVINGSTG must be capable of occurring with a sequence N P Vingstg in which the Ving has an overt subject and in which this overt subject is not the object noun. Examples: I told him about Mary's leaving. She asked him about writing programs. I attributed my success to changing my plans.
Note that VINGSTG here refers to either the object string NSVINGO or the object string VINGOFN. The particular preposition(s) must be specified for each verb (WPOS15). WORD LIST: ask (about), attach (to), attribute (to), base (on, upon), compare (to, with), connect (with), deduce (from), identify (with), link (with), make (of), pattern (after), prepare (for), question (about), relate (to), separate (from), set (on), subject (to), tell (about), trace (to).

OBJLIST: (NSNWH). Frame: N1 tV N2 SNWH, where N2 is NHUMAN (WSN5). Note: avoid the use of what S as the SNWH in the test frame, since what S may be the replacement of a given N2 in N1 __ N2 (e.g., I gave him what he needed). Examples: He told me whether they were coming. They wrote him who was coming. I asked him why he did it. I taught him how to do it. WORD LIST: ask, teach, tell, write.

OBJLIST: (NSTGO). Verbs classified as occurring with the object string NSTGO include 1) the pure transitives (He accomplished his mission), including those which drop the N object (He reads books; He reads); 2) verbs which occur with an NPN object where the PN is droppable (He fastened the chain to the door: He fastened the chain) (dropping of PN is not an automatic process of the grammar); 3) verbs which require either a conjoined or plural object (He equated A and B; He correlated the two sets of values) or a collective noun object (It gathers dust); 4) verbs which require reflexive objects (He absented himself); 5) measure verbs (The line measures two inches; It costs five dollars). Note: due to their relatively infrequent ... Examples: He analyzed the compound. John met Mary. He amassed a fortune. He equated A and B.

OBJLIST: (NSVINGO). Frame: N tV (N's) Ving (OBJ). She favors doing it. She favors their doing it. The subject of Ving need not be the same as the subject of the container sentence; e.g., in John described his studying, his = John or, alternatively, his = some other person. Cf. VINGO. Since NSVINGO is more sentence-like in its form than the VINGOFN string, it is helpful to include in the test frame for NSVINGO features which are characteristic of sentences, e.g.: 1) an object after Ving: We discussed writing novels. 2) an adverb after the object: She prefers doing it quickly. 3) a negative element before the Ving: She favors not doing it. Examples: He described (his) studying at night. He decided to accelerate their advertising. The group discussed writing novels. In their program of exercise, they include climbing a mountain. The nurse has limited (her) seeing visitors so frequently. He mentioned (his) seeing Mary. They opposed (their) adjourning early. They proposed sending another letter. He questioned having to arrive at 8 P.M. The doctor has restricted his seeing visitors. He suggested swimming more slowly. I understand his wanting to leave so early. WORD LIST: abolish, accelerate, allow, choose, complicate, describe, determine, discuss, evidence, facilitate, include, limit, mean, mention, notice, oppose, prefer, prevent, propose, question, restrict, suggest.

WORD LIST: accelerate, act, age, appear, care, change, come, compete, compound, continue, decrease, demonstrate, diminish, draw, eat, enter, exist, fail, fish, follow, go, happen, homogenize, know, last, lengthen, live, look, matter, move, occur, point, provide, publish, run, read, relax,
rest, result, return, ring, see, sleep, start, study, sweat, take, think, try, wonder, work, write.

OBJLIST: (NULLRECIP). Frame: N1 and N2 tV (P) each other. A verb is classified as occurring with the object string NULLRECIP if, when it occurs with no overt object and with a noun subject which is not singular (i.e., is AGGREGATE, plural, or conjoined), it is natural to reconstruct the object each other or P + each other (on at least one reading). Examples: John and Mary met (each other) at school. Your claim and my claim conflict (with each other). The couple fought (with each other). He fought (with me). The parties conferred (with each other). Bill and Mary fought (with each other). John and Mary agree (with each other). The groups separated (from each other).

OBJLIST: (OBJBE). The object strings of OBJBE are: 1) NSTG (noun string): He seemed a happy man. John appeared an idiot. He became president a year ago. She remains a strong woman. The restriction on number agreement between subject and object (WAGREE2) applies here. Note: if the sequences N:SINGULAR tV N:PLURAL and/or N:PLURAL tV N:SINGULAR occur ... 2) ASTG (adjective string): John acted strange. They appear happy to be here. He became ecstatic when I told him. They look happy to be here. We felt satisfied. She seems right for the job. The eggs smell bad. The results might seem surprising. Note: verbs which occur with only a limited set of adjectives (ring true, blush red, etc.) are classified as OBJLIST: (ASTG), not OBJLIST: (OBJBE). 3) DSTG (adverb string): John appeared down and out. Bill felt apart from the rest of us. He seems down and out. They looked well. They seem well. A restriction limiting adverbs to those which occur after be (WPOS1...) applies here. Note: verbs which occur with a wider range of adverbs, i.e. which occur with adverbs ... 4) PN: The matter appears in dispute. It will remain to his advantage to see them. The cake smells of anisette. Note: verbs classified as occurring with OBJBE: PN, as opposed to those classified as occurring with PN, can occur with a range of P + NSENT3 (to his advantage, of value, of interest, of significance) constructions. Therefore, verbs which can occur with this range of constructions should be classified as OBJBE: (PN), although other PN constructions are also possible here. WORD LIST: ASTG: act, appear, become, feel, look, remain, seem; DSTG: appear, feel, look, seem; NSTG: appear, become, remain, seem; PN: appear, remain, seem.

OBJLIST: (OBJECTBE1). Applies only to the verb be in all its forms (am, are, be, been, being, is, was, were). The sequences which are treated as objects of be include: 1) Ving (OBJ); 2) passive Ven (OBJ) (War was never declared). Because of the frequent occurrence of the passive construction in scientific writing, it is more economical to list the passive objects for each verb in the word dictionary than to compute them by a rule of passive omission. The correspondences between active and passive objects used in the preparation of dictionary entries are given in POBJLIST below. 3) OBJBE, i.e., a noun, adjective, adverb or PN string (cf. OBJLIST: (OBJBE)): He is a carpenter. He is happy. He is here. The matter is in dispute. The trouble is that no one knew. To ask the question is to answer it. It is not that there was nothing to do.

OBJLIST: (PN). Frame: N tV PN. $ You can rely. He stands for justice.
He stands.Verbs which occur with the object string NPN from wlrich the leftmost N can be dropped (He gives (money) to charity) a r e also included here.In the case of some verbs, a middle form of the verb takes both NPN and P N objects: One can transform X into Y.X transforms into Y.The particular preposition (s) must be specified for each verb (WPOS15).WORD LIST: account (for), act (on), add (to), agree (on, to), amount (to), ansurer (for), ask (about, for) , associate (with) , balance @n) , believe (in) , care (about, for) , change (into, to) , compare (to, Nth) , consist (in, of) , deal (with) , depend (oil, upon) , differ (from,, in, with) , divide (into) , dram ( f r~m , on, to, upon) , drive (at) , enter (in upon) , focus (on) , give (of, to) , happen (across , on, upon) , idenlify (with), long (for), look (at, after, for, into, upon), meet Iwith) , reduce (to), run (for) , su11stht;ute (for), tell (of), transfer (to), wonder (about).OBJLIST: , (PNHOWS) : includes those verbs which occur with how S but not wit11 SNWH, e.g. : Examples: $ He Zil<ed wrl~ether it was clone.. Many of these verbs also occur wit11 P N how S which is ihcluded in this string. This will complicate 1~01ir it iS to be done.They demonstrated (to us) Ilo\v the situation was ha~lclled. WORD LIST: conlplicate, correct, define (for), demonstrate (for, to), describe (for, to), expose (to), film, infer, like, mention (to), restrict, 'eview (for), summarize (for), understand.Since PNN is a permutation of NPN, any verb specified for one n u s t be specified for the other.PNN, h o~~e v e r , usual,ly occurs only \\~Wll N: -N 1 RN:?Mary galre to Jolm the book.Mary gave to Jolm the book 1v11ich he needed for his esanls.The particular preposition (s) i~lust be specified for each verb fiVPOS15) Frame:N, tl' P N, N,,He gave to her the bool; \~Ilich he lfinlself needed.They attribute to hhssaccio the iiltroduction of perspective into meul -1val art.They correlated with speech variatidn several factors nrllich are usually considered sociolog'fcal .They have depleted of its riches the $oil which we cared for so lovingly. The particular prcposition(s) must l)c specified for each vel-11 (TTTPOS1 5).R~O R D 1,IST: admit (tar, comnlurlicate (to), conceal (filom), esplain (to), )lint (to), indic&tc (to), learn (ffonl) , mention (to), prow (to), rclatc (to), say (to), w i t c (to).F'rame : The noun of P N is MIVATAN (IVSNH). N tl' P h' SN The P is fyom, to o r of.The conlputational treatment of forms fiamplcs :like It appeared to Jolm that h b r y was here i s to define a small subclass, VSENT.4 ( -q-I learned from pJoJln that the matter was under pear, happen, remain, seem, turn out) 1i~hic11 cliscussiou. can take the ol~iect string PNTIIATS, whcre I demonstratccl to them that the hypothesis appropriate, provided the subject isit.accounted for several disparai-ate facts. Note: do not classify v c r l~s 1-\.11ich occur wit11 the cspletivc it a s subject and \~Ilich It apl~carecl to him that A'Lqi-y was hcl-c. a1 so occur with a scntcncc string a s subject (It occurred to John that he was nccdcd. That -Dictionary Entry:he was fieeded occurred to John) a s PNTIIATS. The pal-ticular ~~-C I I O S~~~O I I (s) nlusl l)e specified for each verb (JVPOSlf,) \~( ) I < D LIST: admit (to) , nnnounrcl (to) , assc1.t. 
(to) , c r~~ (to) , (~ommunicnlr (to) , tlcmonstmte (10 , disclose (to) , csplein (to), llillt (to) , i l l u s l i~n t c l (to) , intliratcb (to) , intin~nlc (to,, 1 cn l*n (from), mpntion flu) , l l~t i o n (to), occur ( I , 1 0 (to) , rcnlarlt (to) , i'rcluil'c (on , i*tii?t!al (to) , s;ly (to), seen1 (to), suggest (to) , \\-ritc (to).O.RJLIST: . (lWTIIATS\'( 1) :the vci-1, of the clnl~~rlclcd sclltcllcc is not tcnscd. (cf. OBJLIST: (C 1 SIIOI'LI)) 1.Verbs \\.hicll satisfy the t'l.an~c occur with should V a s \vclI as ,\\pith IT.The i~oull of IyK is S7II'RIAAi\: (I\'SNS).The particular preposition (s) must he specified for each v c r l~ (TI'POSI 5).Thchy 1-tlclui rcyt of John that 11c) att rlncl. li'Ol'\D LIST: a s k (of), dvnlanc-1 (of), c s p c~t (or), l)l'olx,sc> (to) , rcclllil'c\ (017, suggclst c t o~.ORJLIST: (PhTrINGSTG) :1.Yalnc :Siilcc PhSrTNGSTG is a p e r~~~u t a t i o n of X I t l ' I' S., I'IXGSTC; VINGSTGPN, any vcrb sl~ecifiecl for one lnust be specified for the othcl*.X I -t i v IVIXGSTG 1) Xs,Csually, however, the acceptal~ility of the PhTTINGSTG permutatioii dcpends 011 thc 1-halnpl cs : presence of one or more adjuncts ivithin the VIKGSTG:? H e prefers to going out \vith h h r y They lifnitccl. tr, certain 110~11's his sc'ci~lg 1~isito1-s staying home.Y1cy atti*ihutc?cl. to his i \~i f~~' s 1)usincss ac3un~cin He prefers to going out \\.it11 h k r y his snccccding i1.11c.r.c) cvPl.yo~lcL clsc had failccl. staying 110111~ i\.it;h someone else.I lccharged ta a l~eavy \\~orl;loacl his going honlc The particular p,mpositicm (s) must l~e late. specified for each verb @i1POS15).ATTI3IDU.T 1.: .T V : (OBJLIST: .:I, . . . .) . 3 V INGSTC: PN: .14, I'W INGSTG : . I 5, . . . .WOliD LIST: see OBJLIST: (YINGSTGPN) .The I" of the ol).jcct st~i11g PSNtVlI is rc.s t i*ictccl in terms of the container vcrb , not in terms of thc containcul SN'IVII. This is cvidcnccd by the fact tint thc P of Z' ShTVEI docs not pern~utc aro~u.ultl the SNlY1I (Cf. ODJLIST: (ShIVII)) :John asltcd a h u t whether he should go. $'Jol~n asltml xhcthcr h e should go about.Notc: a\-oid use of \\.hat S a s the SNWH in the tcst ffanlc since i\fl~at S may be the rcplaccnlent of n given N ill P N ( c . g . , John landed on \\+at he had been looliil~g for).The particular prepofition (s) must bc spcci ficd for cnc11 vcrl) fiVPOS15).I asked about whether he ~vould come. I inquired into whether he \\rould come.They pondered over whether h e \ivould come.John. wonclerecl about why she did it. f N,, is a pronoun, it is accusative (iVPOS5).They worried over him .drinking so much.He focused on the president flying to norida in a private plane.We asked about there being no food. staring PSITNGO with the scclucncc P N plus He writes about John's absence disturbing a right adjunct Ving (He looked at the door Mary.opening on to the I~a l~o n y ) , use the expletive tllcrc a s the. N2:Dictionary Entry: We aslicd almut there being no food.T'llc ~~a r t i c u l a~ preposition ($) must be TV: (OBJLIST: -3 , . , . . .) .spcci fi'crl, for each verb (WPOS15).PSVINGO: :IS, . . . .PVAL: (4 ON1 ) .worin I,@T: account (for), amount (to) ,ansnvcl* (for), approve (of), argue (about), aslc (about) began (with), ccntel-(on, almut, around, upor,), come (to, of, from), car.e ( a h u t , for), compare (to, with) , dcpcncl (on, upon) , end (in, witl~) , explain (about) , Focus (011) , hear (of, a b u t ) , lie (about), plan 'on) , poiilt (to) , read ( a b u t ) , remarl; (on, about). 
remembeil ( a h u t ) , speak (of, almut) , t~l k (of, almut) , tllinli ( a b u t ) , \irondei-(about) , write (about),OBJLIST: (PVINGO) :IiYam e : There is no overt subject of Ving N, tV 1' Ving (OBJ) @! l3.e from his pressing the point).The subject oC tP -(3,) is understood to be the Ihamples : subject of Ving.The paf ticular preposition (s) must be I can't keep f~ om smolcing. specified for each vei+ fiVPOS15).IIe refrained from pressiilg tllc point.She succeeded in passing.She is engaged in n~r i t i n g a novcl.Heleft off seeing her.NOT OBJLIST: (PVINGO) :He rcl ies on ( o u~) 111al;ing an in~prcssion (PI'INGSTG) .He couldnlt acco~ult for (thair) nlal;l:lg a 11Gstal;e. (PI7lNGSTG).WORD LIST: acllllit (to) ; convert @), delay (in), ellgage (in) , fail (in) , go (~vithout) . 1;ecp (from) , specialize (in) .OBJLET: (PVINGSTG) :In the object s_tri.ng PVTNGSTG the left adjunct of Ving (specified in the frame as Nqls) is either an overt subject---He asked about their writing progyams. They asked about Jolmls reading of the passage. capable of occurring ~5 t h a sequence P Viagstg in which the Ving has an overt subject and in which this overt subject is not cortferential wit11 the subject of the tV. Note: a P may occur at the beginning o r end of the SNWH string:1 wonder to whom he is referring.I wonder whom he is referring to.I don It I~IO\Y from nll~onl he obtained the information. I don't knon tvt~~nl he obtained the information from.Tllis P in SWVH is not to be catlfused with the P which is dependent on the container verb (cf. OBJLIST: (PN) , (PSh?YH)). This latter P does not occur at the end of'the SNLW string : I ivondered about whether to go. ?hey a r e discussing \\hether to lea~re.I doubt if he can do it.We ca1u10t establish k o \~ this process I\ orks.Note: do not classify verbs which occllr with the expletiveit as subject fJt does11't rnniter ~vhether he comes) a s SN1171 (see OBJLIST: (NTILATS) ) .WORD L I S P affect, ascertain, ask, calculate, check, contemplate, choose, concern, c o n s i~l c~. control, decide, deduce, clc!loie, discern, discuss, c l o~~i , establish, e~ar\lil?e. heal*, indicate, influence, investigate, judge, knol1. , lean?, matter, nleasur-e, mention, mind . !late, observe, prcc!ic,t, nrove, question, remember, report, reveal, say, see, shou . state, tell, verify, iion~ler, I\ rite.(IRJLIST f (SOBJBE) :Frame:LQ the object sirii~g SOBJBE the OBJBE:N, tV K9 OBTBI: is the prtiicate of N2. The machine grammar allows four possible values hr OEJBE:(IE.JBT< noun, acliectit-c, adyer'u, 13 iL' 1$ NSTG (noun string) : p a m p l L * : They considered him their savior-. They elected him president.They consider him their savior. They call him a genius.They termed him a genius. Tlle restrictions on number agreement behireen subject and object (WAGREE?) apply She thoufiht him a good man.He con~iciers them foolish. 2) ASTG (adicctive string), including adjectival Vens and Vings (see VENDADJ I found it well-dcsigned. and VVERYVTNG; also OBJLIST: (SVEN)) :We thougl~t him interesting.Because of the frequent occurrence of the passive construction in scientific writing, it is more ec~no~mical to list the passive objects for each verb V in the word dictionary, than to compute them by a rule of passive 'omission'. The POWLIST values of a given verb are listed under the past participle pen) form of the verb. The correspondence between active and passive objects used in the preparatioh of dictionary entries is as follows: V) BEINGO (John is being a fool). 
9) EhIB EDDEDQ (The question is: 1vI157. did John PO 7)
null
ASENT3: (ATHAT): I am certain that John will come. I'm grateful that the stuff arrived on time. We are happy that you can come. (Cf. ASENT3: (AFORTO): They were eager for the speaker to address the crowd; ASENT3: (ASHOULD): I am insistent that you go alone.) ASENT3: (AWH): He is doubtful whether the plans will come off. I'm not sure whether they will come. We are uncertain why he left.

Dictionary Entry: HAPPY. ADJ: .10 .10 = ASENT3: (AFORTO, ATHAT).

...: I will see him next year. He looked better this time. WORD LIST: last, next, this.

COMPARATIVE: an adjective is in the subclass COMPARATIVE if it can occur in the environment N1 t be __ than N2: John is happier than Bill. * John is tender than Bill.

AGGREGATE: The group has changed its mind. The group have changed their minds. The public disapprove of it. A minority is in favor of the action. A minority are in favor of the action. An AGGREGATE noun can also occur as the subject of collective and reciprocal verbs (WAGREE1): The group gathered. * He gathered. For an N NHUMAN apposition (my friend John) see NAME and NCOUNT3.

NCLASSIFIER: Frame: The N1 N2 t be ... (N2 is not NHUMAN). An NCLASSIFIER is in either 1) NCLASSIFIER1, which includes metalinguistic words that introduce terminology, e.g. term, symbol, or 2) NCLASSIFIER2, which includes nouns from the subject-matter area (supplied by the user), e.g.: element, drug, acid, enzyme, extract, hormone, ion, mineral, coefficient, factor, etc. NCLASSIFIER1: The symbol S is interpreted as the subject of a sentence. Linguists often confuse the terms string and sequence. The expression ... grammar will be used to refer to the grammar in Appendix II. NCLASSIFIER2: The element hydrogen is the lightest substance.

...: The shelf will gather dust. * The shelf will gather a book. Cf. AGGREGATE. While he was away, the fortune accumulated. The cell accumulates sodium. These books will only gather dust. He accumulated a fortune. WORD LIST: acid, alcohol, ammonium, blood, calcium, change, digitalis, down, energy, evidence, fluid, hydrogen, interest, knowledge, plasma, salt, sweat.

NCOUNT1: Frame: occurs in the environment A __ tV OBJ, and not in the environment __ tV OBJ (WN9). Examples: A book fell. A series of coincidences occurred. * A blood flows. Nouns not classified as NCOUNT1 (i.e., mass nouns and many abstract nouns) can begin a headless relative clause S-N (DN5.1): The reaction the drug produces ...

NCOUNT2: Frame: P __. An NCOUNT1 which, as the object of a specified preposition P, occurs without a preceding article (WN9). The particular P which occurs with a given NCOUNT2 is specified in the dictionary entry of that NCOUNT2. Example: He came by car. Dictionary Entry: CONCLUSION. N: .11 .11 = NCOUNT1, NCOUNT2: (IN). WORD LIST: amount (in), answer (on), approach (in), assumption (in, by), bed (in), case (in), charge (in), conclusion (in), contract (against, by, from, in, into, on), course (in, of, on), degree (in, of), end (without), estimate (according to, beyond, by), example (by, for), foot (on), focus (in, into, out of), gross (in), hand (at, by, in, on, out of), kind (in), length (at, in), limit (beyond, within, without), line (in, on, off), mark (of), measure (beyond, to), number (according to, beyond, by, in, of, without), parallel (in, without), phase (in, out of), place (according to, in, into, of, out of), point (on), position (in), process (in), question (beyond, in, into, under, without), ratio (in), reach (beyond, in, into, out of, within), show (for, in, on), significance (of), turn (in), view (from, in, into, on), way (by).
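To make the NCOUNT1/NCOUNT2 distinction concrete, here is a minimal sketch, assuming an ad hoc nested-dictionary encoding of our own (the names LEXICON, needs_article and bare_after_preposition are illustrative, not LSP code; only the subclass names come from the text):

    # Minimal sketch of an NCOUNT1/NCOUNT2 lookup; the lexicon format is
    # illustrative, not the LSP's actual storage format.
    LEXICON = {
        "conclusion": {"N": {"NCOUNT1": True, "NCOUNT2": {"in"}}},
        "car":        {"N": {"NCOUNT1": True, "NCOUNT2": {"by"}}},
        "blood":      {"N": {"NCOUNT1": False}},          # mass noun
    }

    def needs_article(noun: str) -> bool:
        """NCOUNT1 nouns cannot occur bare in subject position (WN9)."""
        return LEXICON[noun]["N"].get("NCOUNT1", False)

    def bare_after_preposition(noun: str, prep: str) -> bool:
        """An NCOUNT2 noun may drop its article after its listed preposition(s)."""
        preps = LEXICON[noun]["N"].get("NCOUNT2", set())
        return prep in preps

    assert needs_article("conclusion")                   # * "Conclusion was reached."
    assert bare_after_preposition("conclusion", "in")    # "in conclusion"
    assert bare_after_preposition("car", "by")           # "He came by car."
    assert not bare_after_preposition("car", "under")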
NCOUNT3: Frame: __ t be N; tV N __. NCOUNT1s which can occur without a preceding article after be or in the object position in SOBJBE and OBJBE (see OBJLIST: (SOBJBE), (OBJBE)) (WN9). Examples: He is president. We elected him president. He remained president. I am treasurer. He is chief investigator. They appointed me treasurer. WORD LIST: collector, director, head, investigator, judge, president, secretary.

NHUMAN: can occur as the first noun in the string N N, i.e., as indirect object (WPOS22), or as the host of a right-adjunct WH string (relative clause) headed by who/whom (WWH2). Frame: tV N1 N (N1 = indirect object). Examples: She bought the boy a book. She wrote the workers a letter. She showed her relations the present. The man who ate the cheese left. The man whom you saw was Bob. She needs a friend who can care for her.

NLETTER: a noun subclass which contains all the letters of the English alphabet. It is used in the NQ string as a variant of Q (WN12).

NONHUMAN: Frame: * __ tV (V = NOTNSUBJ: NONHUMAN). NONHUMAN nouns cannot occur as the subject of verbs such as believe, deny, discover, know, read, and other verbs which require a human subject (e.g.: hand, laugh, long, skin) (WSEL2). Cf. NOTNSUBJ. Examples: * The clock believes that this is so. * The account knows that he is wrong. * The apparatus laughed. WORD LIST: ability, act, assumption, balance, can, day, dose, enzyme, feature, frog, gland, hypothesis, interaction, junction, London, mean, need, organ, pathway, peak, position, property, range, saturation, tension, use, wonder.

N:PLURAL: Frame: These __ tV OBJ. A noun is in the subclass N:PLURAL if it occurs in the environment These __ tV OBJ and not in This __ tV OBJ (WAGREE4); a mechanical version of this test is sketched below. Examples: These groups answered quickly. * This groups answered quickly. These men love Mary. * This men love Mary. WORD LIST: abilities, ages, combinations, data, effects, groups, measures, mucosae, observations, parallels, problems, rises, seconds, tries, uncertainties, uses, valencies, wants, years.

NPREQ: a noun which is not also a proper name is in NPREQ if it occurs as the N of the sequence N Q (Q = quantifier), as in T N Q ...
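The This/These diagnostic for N:PLURAL lends itself to a one-function sketch (our own encoding, with an abridged membership set taken from the word list above; cf. WAGREE4):

    # Sketch of the This/These number-agreement diagnostic for N:PLURAL.
    N_PLURAL = {"groups", "data", "men", "observations", "years"}

    def demonstrative_ok(det: str, noun: str) -> bool:
        """'These' demands an N:PLURAL host; 'this' excludes one."""
        if det == "these":
            return noun in N_PLURAL
        if det == "this":
            return noun not in N_PLURAL
        return True

    assert demonstrative_ok("these", "groups")      # These groups answered quickly.
    assert not demonstrative_ok("this", "groups")   # * This groups answered quickly.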
NSCALE: includes nouns such as weight, volume, area, and perhaps a few others. These words occur as N2 in the sequence Q N1 P N2, where N1 = NUNIT (inches, years, etc.) and Q = quantifier, including numbers (WQ3): The line is two inches in width. He is five years of age. The area measures twenty feet in width. The rectangle is two inches along the diameter. In the case of length sequences (two inches), a class of nouns, also classified as NSCALE, can occupy the place of length in P NSCALE: two inches in diameter, in circumference, along the diagonal, etc. (The adverbs across and around can also occupy the P NSCALE position.) WORD LIST: age, altitude, area, breadth, height, intensity, length, luminosity, strength, volume, wavelength, width, circumference, diameter, thickness.

NSENT1: Frame: T __ SN. NSENT1 is subdivided according to the type of SN string with which the noun occurs: 1) NSENT1: (AFORTO): The plan for him to go. 2) NSENT1: (ASHOULD): The demand that salaries be raised. 3) NSENT1: (ATHAT): The fact that they enrolled. 4) NSENT1: (AWH): The question whether to vote. Examples: The demand that salaries be raised was rebuffed. The plan for him to go to college was foremost in their minds. His attempts to leave were noticed. The fact that they enrolled is known.

NTIME: Yesterday's meeting was cancelled. Yesterday I went to the movies. Sunday he will run the race.

NOTNSUBJ: (NSENT1): * The fact cares. NOTNSUBJ: (NTIME1): * The week designed the plan.

We split open the package marked "fragile".

OBJLIST: (ASOBJBE): The object string ASOBJBE must be distinguished from the adjunct sequence as + NSTGO. The two may be distinguished by the fact that the as of the ASOBJBE string is paraphrasable as 'in the capacity or character of', e.g., They served as messengers = in the capacity of messengers, whereas the as of the adjunct sequence is paraphrasable as 'when' or 'while', e.g., They served as young men = when they were young men. The two may also be distinguished by the fact that in sentences containing the ASOBJBE string, the primary stress of the sentence falls on the head noun of the noun phrase functioning as the OBJBE, e.g., Enzymes function as CATALYSTS (* ENZYMES function as catalysts), whereas in sentences containing the adjunct sequence the primary sentence stress falls on the verb, e.g., John CHANGED as a lieutenant (* John changed as a LIEUTENANT). Examples: They served as messengers. Enzymes function as catalysts. He works as a bartender. This idea originated as a vague possibility. That invention began as a joke. John applied as a mechanic. He will continue as a private. He ran as a sprinter. The reaction occurred as an after-effect. The fact exists as an anomaly. NOT OBJLIST: (ASOBJBE): John changed as a lieutenant. John ate well as a young man. I didn't go to school as a child. Note: a large number of verbs occur with both the object string and the adjunct, e.g., serve (above): They served (the king) as messengers. WORD LIST: appear, apply, arise, begin, continue, enter, exist, fail, function, go, occur, originate, participate, remain, train.
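Computationally, the object/adjunct ambiguity of an as-phrase reduces to a subclass lookup. The toy resolver below is our own sketch (function and variable names are assumptions; the abridged verb set comes from the word list above):

    # Toy disambiguation of "V as N": the as-phrase can be an ASOBJBE object
    # only if the verb is subclassed for that string; otherwise it must be
    # read as an adjunct.
    ASOBJBE_VERBS = {"appear", "apply", "begin", "continue", "enter",
                     "exist", "function", "occur", "originate", "serve"}

    def readings(verb: str) -> list[str]:
        opts = ["adjunct ('while/when ...')"]
        if verb in ASOBJBE_VERBS:
            opts.insert(0, "object string ASOBJBE ('in the capacity of ...')")
        return opts

    print(readings("function"))  # both readings available
    print(readings("change"))    # adjunct only: "John changed as a lieutenant."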
OBJLIST: (ASSERTION): The verbs classified as OBJLIST: (ASSERTION) are a subset of the verbs classified as OBJLIST: (THATS), i.e., they occur with a that-less assertion as object. Frame: SUBJ tV (that) S. Examples: She knows John is an "A" student. She knows that John is an "A" student. (know: OBJLIST: ASSERTION, THATS) * She reported John is an "A" student. She reported that John is an "A" student. (report: OBJLIST: THATS) I assume you will arrive on time. They feel they are being abused. He believes the earth is flat. She discovered he was an excellent cook. We said we knew a better solution. It seems he is happier away from home. It should be noted that the computational treatment of forms like It seems that he was here is to define a small subclass, VSENT4 (= appear, happen, remain, seem, turn out), which can take OBJLIST: (ASSERTION), (THATS) where applicable, provided the subject of the VSENT4 is the expletive it. NOT OBJLIST: (ASSERTION): * He added John was a witness. * He argued their approach was metaphysical. * She reported John was an "A" student. WORD LIST: appear, assume, believe, discover, feel, figure, find, imply, know, learn, maintain, mean, note, say, seem, sense, show, state, suggest, suppose, think, understand.

OBJLIST: (ASTG): Frame: N tV Adj. Verbs which occur with the object string ASTG each occur with a limited set of adjectives in the adjective position: This rings true. * This rings red. This limitation on the set of adjectives which occur with verbs specified as OBJLIST: (ASTG) distinguishes these verbs from those specified as OBJLIST: (OBJBE), for which no such limitation holds. Examples: That story rings true. She remained red in the face. They fell sick. He lay still. John turned purple. Math comes easy to him.

OBJLIST: (C1SHOULD): the verb of the embedded sentence is not tensed, and should V occurs as well as V. Examples: I demand that he come. The plan provides that he be on time. It necessitates that he be on time. WORD LIST: ask, demand, direct, mean, move, order, prefer, propose, provide, require, suggest.

It is necessary to define the sequence DP + SN as an object string (in place of treating it as an adverbial adjunct plus SN) since some sequences have no analysis in terms of an SN string plus optional adjunct, e.g.: I found out whether he was coming. He pointed out that this was the best approach. * He pointed that this was the best approach. WORD LIST: bring (out, up), figure (out), find (out), leave (in, out), let (on), make (out), mark (down), point (out), write (down).

OBJLIST: (DP1): Frame: N tV DP. Applies to strings in which the adverb-preposition (or particle), DP, cannot be analyzed as an adverbial adjunct, e.g.: They lined up. * They lined. Or, if the verb also occurs without a DP or other object, then it occurs in a different sense than with the DP, as is often indicated by a difference in subject selection: John carried on. * John carried. The point carried. Some verbs classified as OBJLIST: (DP1) are the result of 'middling', i.e., they are related to a class of N DP constructions: They blew the house up. The house blew up. The particular DP must be specified for each verb. Examples: They carried on. He showed off. We give up. The plane took off. She drove in. He went out. They walked down. Dictionary Entry: TV: (OBJLIST: .3, ...) .3 = DP1: .16, ... WORD LIST: act (up), add (up), back (down, off, out), come (about, around, to, up), carry (on), clear (out, up), cool (down, off), couple (up), cover (up), double (back, up), draw (back, up), dry (out, up), fall (away, in, off, out), follow (through), give (in, out, up), level (off, out), look (up), lose (out), measure (up), phase (out), run (down, on, out, over, up), show (off, up), sleep (in, over), slow (down, up), split (away, off, up), start (in, out, up), stop (by, in, off, over, up), take.

OBJLIST: (DP1PN): Frame: N tV DP P N. The particular DP and P must be specified for each verb. Examples: He went down to Washington. He walked around to the bus station. He sped on past the exit. In the WORD LIST, the arrow (->) follows the set of DPs specified for each verb and precedes the set of Ps specified for that verb. Dictionary Entry: MOVE. TV: (OBJLIST: .3, ...) .3 = DP1PN: .18, ... .18 = DPVAL: (IN), PVAL: (ON). WORD LIST: add (up -> to), build (up -> to), come (up, around, back -> to, with), double (up -> with), face (up -> to), feel (up -> to), fit (in -> with), go (along, down, in, off, out -> for, in, with), keep (away, up -> from, to), lead (up -> to), link (up -> to, with), live (up -> to), look (down, in, out, up -> for, on, to), measure (up -> to), own (up -> to), pair (up, off -> with), play (up -> to), put (up -> with), reach (out -> for), speak (out, up -> for), stand (up -> to, for), try (out -> for).
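The sample DP1PN entry for MOVE can be pictured in code. This is a minimal sketch under our own data layout (the attribute names OBJLIST, DPVAL and PVAL mirror the text; the Python shape and function name are assumptions):

    # Encoding of the sample DP1PN dictionary entry for "move" given above.
    MOVE = {
        "TV": {
            "OBJLIST": {
                "DP1PN": {"DPVAL": {"in"}, "PVAL": {"on"}},
            }
        }
    }

    def licenses_dp1pn(entry: dict, particle: str, prep: str) -> bool:
        """Check a DP + PN object against the verb's listed particles/prepositions."""
        spec = entry["TV"]["OBJLIST"].get("DP1PN")
        return bool(spec) and particle in spec["DPVAL"] and prep in spec["PVAL"]

    assert licenses_dp1pn(MOVE, "in", "on")        # "They moved in on us."
    assert not licenses_dp1pn(MOVE, "off", "to")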
OBJLIST: (DP2, DP3, DP4): Frame: N1 tV DP N2 (DP2), where N2 is not a pronoun; N1 tV N2 DP (DP3). DP2 may be distinguished from a prepositional phrase P N by the fact that the DP and N permute: He looked the number up. He looked up the number. whereas the P and N of the prepositional phrase do not permute: He looked up the shaft. * He looked the shaft up. For some verbs which take DP N objects, the N position may be filled by a Ving string (They kept up their writing to the President). In the machine grammar, a Ving string is allowed freely in place of N in DP N, and is considered rare as a replacement of N in N DP. As the object of Ving in certain strings where Ving usually is followed by of N, there is an object form of the DP string where the of occurs between DP and N (the sending in of the entry). This form is DP4. Any verb which takes DP2 takes all the variants: OBJLIST: (DP2, DP3, DP4). The particular DP(s) must be specified for each verb. Examples: He sent back the gift. He sent the gift back. WORD LIST: act (out), add (in, on, up), ask (in, out, over, up), back (up), beat (up), bend (back, up), bind (down, off, over, up), block (in, off, out, up), bring (about, off, out, up), carry (out, through), clear (away, off, out, up), cool (down, off), cover (up), deal (out), divide (up), draw (back, down, in, off, out, up), dry (off, out), drive (in, off, out), eat (away, up), factor (out), figure (out), find (out), fish (out, up), fit (in), follow (up), give (away, back, in, out, over, up), hand (around, back, down, in, on, out, over), lead (in), leave (in, out), level (down, off, out), line (up), live (down), look (over, up), make (out, over, up), mask (down, off, up), move (in, out), paper (over), point (off, out, up), pump (in, off, out, up), read (over), reason (out), regain (back), rule (out), save (up), show (in, off, out, up), sleep (off), slice (off), slow (down, up), smooth (away, back, down, off, out), space (out), split (away, off, up), stop (up), store (up), strip (off), switch (off, on), take (off, out, up), think (out, over), try (on, out), turn (down, off, on, over), use (up), warm (up), wash (away, down, off), weigh (down), work (off, out, over), write (down, in, off, out, up).

OBJLIST: (DP2PN), (DP3PN), (DP4PN): Frame: N tV DP N PN (DP2PN); N tV N DP PN (DP3PN). Applies to strings in which the adverb-preposition (or particle), DP, cannot be analyzed as an adverbial adjunct; i.e., mix up the last name with the first is not mix the last name with the first + up. As the object of Ving in certain strings where Ving usually is followed by of N, there is an object form of the DP N PN string where the of occurs between DP and N PN (the splitting up of the project into three parts). This form is DP4PN. Any verb which takes DP2PN takes all the variants: OBJLIST: (DP2PN), (DP3PN), (DP4PN). The particular DP and P must be specified for each verb. In the WORD LIST, the arrow (->) follows the set of DPs specified for each verb and precedes the set of Ps specified for that verb. Examples: I mixed up the last name with the first. I mixed the last name up with the first. the mixing up of the last name with the first. He split up the project into three parts. WORD LIST: add (in -> with), bind (up -> with), call (away -> to), chain (down, up -> to), divide (up -> with), end (up -> in, with), follow (up -> with), link (up -> to, with), pair (up, off -> with, into), play (off -> against), separate (out, off -> from), sign (over -> to), single (out -> for), take (up -> with), trace (back -> to), yield (up -> to).
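The DP2/DP3 permutation test can be rendered as a small generator. This is a sketch under our own naming, with a particle inventory abridged from the DP2 word list above:

    # A true particle (DP) yields both orders; a preposition does not.
    DP2 = {"look": {"up"}, "send": {"back"}, "turn": {"down", "off", "on", "over"}}

    def particle_orders(verb: str, particle: str, obj: str) -> list[str]:
        """Return the licensed DP2 (V DP N) and DP3 (V N DP) orders, if any."""
        if particle not in DP2.get(verb, set()):
            return []
        return [f"{verb} {particle} {obj}",   # DP2: He looked up the number.
                f"{verb} {obj} {particle}"]   # DP3: He looked the number up.

    print(particle_orders("look", "up", "the number"))
    print(particle_orders("walk", "up", "the hill"))  # [] - "up" is a preposition here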
OBJLIST: (DSTG): applies to small subclasses of verbs which occur with particular adverb subclasses, although a set of locative object strings is not in the present grammar. WORD LIST: compare, do, handle, head, lie, place, range, rate, tunnel.

OBJLIST: (FORTOVO): Frame: N tV for N to V (OBJ). The computational treatment of forms like It remains for us to make the final decision is to define a small subclass, VSENT4 (= appear, happen, remain, seem, turn out), which can take OBJLIST: (FORTOVO) where applicable, provided the subject of the VSENT4 is the expletive it. Note: to distinguish between FORTOVO and the sequence for N + to V (OBJ) where to V (OBJ) is an adjunct (He is looking for an assistant to aid him in his work), use there as the subject of the FORTOVO: He plans for there to be five people on the committee. I asked for there to be a proctor at the exam. Examples: I prefer for him to go to college. It remains for us to make the final decision. I plan for him to do it. I asked for there to be a proctor at the exam. He is longing for her to ask him. She moved for the meeting to adjourn.

OBJLIST: (NASOBJBE): Frame: N1 tV N2 as N3, where N3 is a predicate of N2. Examples: They served the king as messengers. He entered the army as a private. She interpreted it as a linguist. He ran the race as a sprinter. They treated him as a lackey. We will consider John as our ... (SASOBJBE) NOT OBJLIST: (NASOBJBE): They served the king as young men. (adjunct) He discovered the enzyme as a student. = when he was a student. Note: a number of verbs occur with both the object string and the adjunct sequence, e.g., serve (above). WORD LIST: begin, continue, enter, interpret, run, serve.

OBJLIST: (ND): Frame: N tV N D. Applies to strings in which the adverb cannot be analyzed as an adverbial adjunct. Or, if the verb also occurs with a noun object alone, it occurs in a different sense than with the N + D: They treat them well/badly. * They treated them. Examples: They treated them well. She set it down. He bore the news well. She wears her age well. There is a selectional dependency between the verb and the adverb such that verbs specified as OBJLIST: (ND) can occur only with either locative adverbs and adverbs of motion (here, there, nearby, up, ...) or ...

OBJLIST: (NPN): For a majority of the verbs classified as OBJLIST: (NPN) there is a dependency between the verb and the particular preposition. This dependency helps to distinguish the object string N P N from the sequence noun object plus P N adjunct (e.g., They liberated the city on Sunday).
Many verbs can occur with either the NPN object string or the noun object plus P N adjunct, where the preposition is the same in both cases: They liberated the city from the enemy. (NPN) They liberated the city from motives of political advantage. (N + PN adjunct) Verbs classified as OBJLIST: (NN) enter into the transformation N tV N1 N2 <-> N tV N2 P N1, where N1 = NHUMAN or AGGREGATE. The particular P must be specified for each verb. Examples: One can transform X into Y. I emptied the water into the sink. He fastened the chain to the door. WORD LIST: accelerate (to), attract (to), add (to), apply (to), ask (into, to), associate (with), attribute (to), balance (against, on), beat (into, to), bring (into, to), catalyse (into), charge (to), clear (of), combine (with), correlate (with), demonstrate (to), deprive (of), direct (against, at, to, toward), enter (in), expel (from), give (to), identify (with), limit (to), make (of), obtain (from), pattern (after), present (to, with), slice (from, off), subject (to), take (from, to), turn (against, from, into, on, to), view (with).

OBJLIST: (NPSNWH): The particular preposition must be specified for each verb. The P is restricted in terms of the container verb, not in terms of the contained SNWH. This is evidenced by the fact that the P of NPSNWH does not permute to the end of the SNWH string: John asked me about what he should do. * John asked me what he should do about.

OBJLIST: (NPSVINGO): Frame: N1 tV N2 P N3 Ving (OBJ). As distinct from the object string NPVINGSTG, the N3 of NPSVINGO is not possessive: I asked him about John's having been there. (NPVINGSTG) I asked him about no one having been there. (NPSVINGO) I charge his acquittal to there having been no witnesses. If N3 is a pronoun, it is accusative (WPOS5). Note: to avoid confusion of the object string NPSVINGO with the sequence N P N plus a right adjunct Ving (He kissed Mary near the door opening on to the balcony), use the expletive there as N3: I asked him about there having been no witnesses. He attributes his success to there having been no competitors. He told us about there being no doubt in his mind. The particular preposition(s) must be specified for each verb (WPOS15). Dictionary Entry: ASK. TV: (OBJLIST: .3, ...) .3 = NPSVINGO: .16, ... .16 = PVAL: (ABOUT). WORD LIST: ask (about), attribute (to), base (on, upon), brief (about, on), caution (about), center (on, about, around, upon), charge (to), compare (to, with), contact (about), contrast (to, with), correlate (with), deduce (from), identify (with), limit (to), make (of), question (about), relate (to), tell (about), trace (to).

OBJLIST: (NPVINGO): Frame: N1 tV N2 P Ving (OBJ). The noun object (N2) of tV is understood to be the subject of Ving. The particular preposition(s) must be specified for each verb (WPOS15). Examples: I prevented him from ruining his health. I cautioned him against ruining his health.
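The requirement that "the particular preposition(s) must be specified for each verb" (WPOS15) amounts, computationally, to a PVAL check. A minimal sketch, with entries abridged from the word lists above (the function name and dictionary are our own):

    # WPOS15-style restriction: the preposition of an NPN / NPVINGO object
    # must be among the PVALs listed for the verb.
    PVAL = {"prevent": {"from"}, "caution": {"against", "about"},
            "attribute": {"to"}, "deprive": {"of"}}

    def preposition_ok(verb: str, prep: str) -> bool:
        return prep in PVAL.get(verb, set())

    assert preposition_ok("prevent", "from")       # prevented him from ruining ...
    assert not preposition_ok("prevent", "about")  # * prevented him about ...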
OBJLIST: (NPVINGSTG): Frame: N1 tV N2 P VINGSTG. A verb classified as occurring with the object string NPVINGSTG must be capable of occurring with a sequence N P Vingstg in which the Ving has an overt subject and in which this overt subject is not coreferential with N2: I told him about Mary's leaving. She asked him about writing programs. I attributed my success to changing my plans. Note that VINGSTG here refers to either the object string NSVINGO or the object string VINGOFN. The particular preposition(s) must be specified for each verb (WPOS15). WORD LIST: ask (about), attach (to), attribute (to), base (on, upon), compare (to, with), connect (with), deduce (from), identify (with), link (with), make (of), pattern (after), prepare (for), question (about), relate (to), separate (from), set (on), subject (to), tell (about), trace (to).

OBJLIST: (NSNWH): Frame: N1 tV N2 SNWH. N2 is NHUMAN. Note: avoid the use of what S as the SNWH in the test frame, since what S may be the replacement of a given N in the N N string (e.g., I gave him what he needed). Examples: He told me whether they were coming. They wrote him who was coming. I asked him why he did it. I taught him how to do it. WORD LIST: ask, teach, tell, write.

OBJLIST: (NSTGO): verbs classified as occurring with the object string NSTGO include 1) the pure transitives (He accomplished his mission), including those which drop the N object (He reads books; He reads); 2) verbs which occur with an NPN object where the PN is droppable (He fastened the chain to the door; He fastened the chain) (dropping of PN is not an automatic process of the grammar); 3) verbs which require either a conjoined or plural object (He equated A and B; He correlated the two sets of values) or a collective noun object (It gathers dust); 4) verbs which require reflexive objects (He absented himself); 5) measure verbs (The line measures two inches; It costs five dollars). Note: due to their relatively infrequent occurrence, ... Examples: He analyzed the compound. John met Mary. He amassed a fortune. He equated A and B.

OBJLIST: (NSVINGO): Frame: N1 tV (N2's) Ving (OBJ). Examples: She favors doing it. She favors their doing it. The subject of Ving need not be the same as the subject of the container sentence; e.g., in John described his studying, his = John or, alternatively, his = some other person. Cf. VINGO. Since NSVINGO is more sentence-like in its form than the VINGOFN string, it is helpful to include in the test frame for NSVINGO features which are characteristic of sentences (a frame generator along these lines is sketched below), e.g.: 1) an object after Ving: We discussed writing novels. 2) an adverb after the object: She prefers doing it quickly. 3) a negative element before the Ving: She favors not doing it. Examples: He described (his) studying at night. He decided to accelerate their advertising. The group discussed writing novels. In their program of exercise, they include climbing a mountain. The nurse has limited (her) seeing visitors so frequently. He mentioned (his) seeing Mary. They opposed (their) adjourning early. They proposed sending another letter. He questioned having to arrive at 8 P.M. The doctor has restricted his seeing visitors. He suggested swimming more slowly. I understand his wanting to leave so early. WORD LIST: abolish, accelerate, allow, choose, complicate, describe, determine, discuss, evidence, facilitate, include, favor, limit, mean, mention, notice, oppose, prefer, prevent, propose, question, restrict, suggest.
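A sketch of such a frame generator, under the assumption that plural-subject frames sidestep verb inflection (the function name and frame wording are ours; the three features come from the entry above):

    # Generate the three sentence-like NSVINGO test frames for a candidate verb.
    def nsvingo_frames(verb: str) -> list[str]:
        """Each frame adds one sentence-like feature to the Ving object."""
        return [
            f"They {verb} writing novels.",     # 1) object after Ving
            f"They {verb} doing it quickly.",   # 2) adverb after the object
            f"They {verb} not doing it.",       # 3) negative element before Ving
        ]

    for sentence in nsvingo_frames("discuss"):
        print(sentence)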
OBJLIST: (NULLOBJ): the null object. WORD LIST: accelerate, act, age, appear, care, change, come, compete, compound, continue, decrease, demonstrate, diminish, draw, eat, enter, exist, fail, fish, follow, go, happen, homogenize, know, last, lengthen, live, look, matter, move, occur, point, provide, publish, run, read, relax, rest, result, return, ring, see, sleep, start, study, sweat, take, think, try, wonder, work, write.

OBJLIST: (NULLRECIP): Frame: N1 and N2 tV (P) each other. A verb is classified as occurring with the object string NULLRECIP if, when it occurs with no overt object and with a noun subject which is not singular (i.e., is AGGREGATE, plural or conjoined), it is natural to reconstruct the object each other or P + each other (on at least one reading). Examples: John and Mary met (each other) at school. Your claim and my claim conflict (with each other) (* with me). The couple fought (with each other). The parties conferred (with each other). Bill and I fought (with each other). John and Mary agree (with each other). The groups separated (from each other).

OBJLIST: (OBJBE): Frame: N tV OBJBE. The restriction on number agreement between subject and object (WAGREE2) applies here. The values of OBJBE are: 1) NSTG (noun string): He seemed a happy man. John appeared an idiot. He became president a year ago. She remains a strong woman. 2) ASTG (adjective string): John acted strange. They appear happy to be here. He became ecstatic when I told him. They look happy to be here. We felt satisfied. She seems right for the job. The eggs smell bad. The results might seem surprising. Note: verbs which occur with only a limited set of adjectives (ring true, blush red, etc.) are classified as OBJLIST: (ASTG), not OBJLIST: (OBJBE). 3) DSTG (adverb string): John appeared down and out. Bill felt apart from the rest of us. He seems down and out. They looked well. They seem well. A restriction limiting adverbs to those which occur after be (WPOS1) applies here. Note: verbs which occur with a wider range of adverbs, i.e. which occur with adverbs not restricted to the post-be position, are not included here. 4) PN: The matter appears in dispute. It will remain to his advantage to see them. The cake smells of anisette. Note: verbs classified as occurring with OBJBE: PN, as opposed to those classified as occurring with PN, can occur with a range of P + NSENTP (to his advantage, of value, of interest, of significance) constructions. Therefore, verbs which can occur with this range of constructions should be classified as OBJBE: (PN), although other PN constructions are also possible here. WORD LIST: ASTG: act, appear, become, feel, look, remain, seem; DSTG: appear, feel, look, seem; NSTG: appear, become, remain, seem; PN: appear, remain, seem.

OBJLIST: (OBJECTBE1): applies only to the verb be in all its forms (am, are, be, been, being, is, was, were). The sequences which are treated as objects of be include: 1) Ving (OBJ); 2) passive Ven + (OBJ) (War was never declared). Because of the frequent occurrence of the passive construction in scientific writing, it is more economical to list the passive objects for each verb in the word dictionary than to compute them by a rule of passive omission. The correspondences between active and passive objects used in the preparation of dictionary entries are given in POBJLIST below. 3) OBJBE, i.e., a noun, adjective, adverb or PN string (cf. OBJLIST: (OBJBE)): He is a carpenter. He is happy. He is here. The matter is in dispute. Further objects of be include a that-clause (The trouble is that no one knew; It is not that there was nothing to do) and an infinitive string (To ask the question is to answer it).
OBJLIST: (PN): Frame: N tV P N. Examples: You can rely on him. * You can rely. He stands for justice. (different in sense from: He stands.) Verbs which occur with the object string NPN from which the leftmost N can be dropped (He gives (money) to charity) are also included here. In the case of some verbs, a middle form of the verb takes both NPN and PN objects: One can transform X into Y. X transforms into Y. The particular preposition(s) must be specified for each verb (WPOS15). WORD LIST: account (for), act (on), add (to), agree (on, to), amount (to), answer (for), ask (about, for), associate (with), balance (on), believe (in), care (about, for), change (into, to), compare (to, with), consist (in, of), deal (with), depend (on, upon), differ (from, in, with), divide (into), draw (from, on, to, upon), drive (at), enter (in, upon), focus (on), give (of, to), happen (across, on, upon), identify (with), long (for), look (at, after, for, into, upon), meet (with), reduce (to), run (for), substitute (for), tell (of), transfer (to), wonder (about).

OBJLIST: (PNHOWS): includes those verbs which occur with how S but not with SNWH (* He liked whether it was done). Many of these verbs also occur with P N how S, which is included in this string. Examples: This will complicate how it is to be done. They demonstrated (to us) how the situation was handled. WORD LIST: complicate, correct, define (for), demonstrate (for, to), describe (for, to), expose (to), film, infer, like, mention (to), restrict, review (for), summarize (for), understand.

OBJLIST: (PNN): Frame: N1 tV P N2 N3. Since PNN is a permutation of NPN, any verb specified for one must be specified for the other. PNN, however, usually occurs only when the final N carries a right adjunct (RN): ? Mary gave to John the book. Mary gave to John the book which he needed for his exams. The particular preposition(s) must be specified for each verb (WPOS15). Examples: He gave to her the book which he himself needed. They attribute to Masaccio the introduction of perspective into medieval art. They correlated with speech variation several factors which are usually considered sociological. They have depleted of its riches the soil which we cared for so lovingly. WORD LIST: admit (to), communicate (to), conceal (from), explain (to), hint (to), indicate (to), learn (from), mention (to), prove (to), relate (to), say (to), write (to).
OBJLIST: (PNTHATS): Frame: N tV P N that S. The noun of PN is NHUMAN. The P is from, to or of. The computational treatment of forms like It appeared to John that Mary was here is to define a small subclass, VSENT4 (= appear, happen, remain, seem, turn out), which can take the object string PNTHATS, where appropriate, provided the subject is it. Note: do not classify verbs which occur with the expletive it as subject and which also occur with a sentence string as subject (It occurred to John that he was needed. That he was needed occurred to John) as PNTHATS. Examples: I learned from John that the matter was under discussion. I demonstrated to them that the hypothesis accounted for several disparate facts. It appeared to him that Mary was here. The particular preposition(s) must be specified for each verb (WPOS15). WORD LIST: admit (to), announce (to), assert (to), cry (to), communicate (to), demonstrate (to), disclose (to), explain (to), hint (to), illustrate (to), indicate (to), intimate (to), learn (from), mention (to), motion (to), occur (to), prove (to), remark (to), require (of), reveal (to), say (to), seem (to), suggest (to), write (to).

OBJLIST: (PNTHATSVO): the verb of the embedded sentence is not tensed (cf. OBJLIST: (C1SHOULD)). Verbs which satisfy the frame occur with should V as well as with V. The noun of PN is NHUMAN. The particular preposition(s) must be specified for each verb (WPOS15). Example: They required of John that he attend. WORD LIST: ask (of), demand (of), expect (of), propose (to), require (of), suggest (to).

OBJLIST: (PNVINGSTG): Frame: N1 tV P N2 VINGSTG. Since PNVINGSTG is a permutation of VINGSTGPN, any verb specified for one must be specified for the other. Usually, however, the acceptability of the PNVINGSTG permutation depends on the presence of one or more adjuncts within the VINGSTG: ? He prefers to going out with Mary staying home. He prefers to going out with Mary staying home with someone else. Examples: They limited to certain hours his seeing visitors. They attributed to his wife's business acumen his succeeding where everyone else had failed. I charged to a heavy workload his going home late. The particular preposition(s) must be specified for each verb (WPOS15). Dictionary Entry: ATTRIBUTE. TV: (OBJLIST: .3, ...) .3 = VINGSTGPN: .14, PNVINGSTG: .15, ... WORD LIST: see OBJLIST: (VINGSTGPN).

OBJLIST: (PSNWH): The P of the object string PSNWH is restricted in terms of the container verb, not in terms of the contained SNWH. This is evidenced by the fact that the P of PSNWH does not permute around the SNWH (cf. OBJLIST: (SNWH)): John asked about whether he should go. * John asked whether he should go about. Note: avoid use of what S as the SNWH in the test frame since what S may be the replacement of a given N in P N (e.g., John landed on what he had been looking for). The particular preposition(s) must be specified for each verb (WPOS15). Examples: I asked about whether he would come. I inquired into whether he would come. They pondered over whether he would come. John wondered about why she did it.

OBJLIST: (PSVINGO): Frame: N1 tV P N2 Ving (OBJ). If N2 is a pronoun, it is accusative (WPOS5). Examples: They worried over him drinking so much. He focused on the president flying to Florida in a private plane. We asked about there being no food. He writes about John's absence disturbing Mary. Note: to avoid confusing the object string PSVINGO with the sequence P N plus a right adjunct Ving (He looked at the door opening on to the balcony), use the expletive there as the N2: We asked about there being no food. The particular preposition(s) must be specified for each verb (WPOS15). Dictionary Entry: TV: (OBJLIST: .3, ...) .3 = PSVINGO: .15, ... .15 = PVAL: (ON). WORD LIST: account (for), amount (to), answer (for), approve (of), argue (about), ask (about), begin (with), center (on, about, around, upon), come (to, of, from), care (about, for), compare (to, with), depend (on, upon), end (in, with), explain (about), focus (on), hear (of, about), lie (about), plan (on), point (to), read (about), remark (on, about), remember (about), speak (of, about), talk (of, about), think (about), wonder (about), write (about).
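The VSENT4 device used in the PNTHATS entry above is easily stated as a check on the subject position. A minimal sketch (our own function name; the verb list is from the text):

    # Verbs like "appear" may take PNTHATS / ASSERTION objects only when
    # their subject is expletive "it".
    VSENT4 = {"appear", "happen", "remain", "seem", "turn out"}

    def vsent4_ok(verb: str, subject: str) -> bool:
        """It appeared to him that Mary was here.  * John appeared ... that ..."""
        return verb not in VSENT4 or subject.lower() == "it"

    assert vsent4_ok("appear", "It")
    assert not vsent4_ok("appear", "John")
    assert vsent4_ok("learn", "I")   # non-VSENT4 verbs are unconstrained here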
OBJLIST: (PVINGO): Frame: N1 tV P Ving (OBJ). There is no overt subject of Ving (* He refrained from his pressing the point). The subject of tV (N1) is understood to be the subject of Ving. The particular preposition(s) must be specified for each verb (WPOS15). Examples: I can't keep from smoking. He refrained from pressing the point. She succeeded in passing. She is engaged in writing a novel. He left off seeing her. NOT OBJLIST: (PVINGO): He relies on (our) making an impression. (PVINGSTG) He couldn't account for (their) making a mistake. (PVINGSTG) WORD LIST: admit (to), convert (to), delay (in), engage (in), fail (in), go (without), keep (from), specialize (in).

OBJLIST: (PVINGSTG): In the object string PVINGSTG the left adjunct of Ving (specified in the frame as N2's) is either an overt subject or absent. A verb classified as occurring with PVINGSTG must be capable of occurring with a sequence P Vingstg in which the Ving has an overt subject and in which this overt subject is not coreferential with the subject of the tV. Examples: He asked about their writing programs. They asked about John's reading of the passage.

OBJLIST: (SNWH): Note: a P may occur at the beginning or end of the SNWH string: I wonder to whom he is referring. I wonder whom he is referring to. I don't know from whom he obtained the information. I don't know whom he obtained the information from. This P in SNWH is not to be confused with the P which is dependent on the container verb (cf. OBJLIST: (PN), (PSNWH)). This latter P does not occur at the end of the SNWH string: I wondered about whether to go. Examples: They are discussing whether to leave. I doubt if he can do it. We cannot establish how this process works. Note: do not classify verbs which occur with the expletive it as subject (It doesn't matter whether he comes) as SNWH (see OBJLIST: (NTHATS)). WORD LIST: affect, ascertain, ask, calculate, check, contemplate, choose, concern, consider, control, decide, deduce, denote, discern, discuss, doubt, establish, examine, hear, indicate, influence, investigate, judge, know, learn, matter, measure, mention, mind, note, observe, predict, prove, question, remember, report, reveal, say, see, show, state, tell, verify, wonder, write.

OBJLIST: (SOBJBE): Frame: N1 tV N2 OBJBE. In the object string SOBJBE the OBJBE is the predicate of N2. The machine grammar allows four possible values for OBJBE: noun, adjective, adverb, PN. 1) NSTG (noun string): They considered him their savior. They elected him president. They call him a genius. They termed him a genius. She thought him a good man. The restrictions on number agreement between subject and object (WAGREE2) apply here. 2) ASTG (adjective string), including adjectival Vens and Vings (see VENDADJ and VVERYVING; also OBJLIST: (SVEN)): He considers them foolish. I found it well-designed. We thought him interesting.

Further sequences treated as objects of be (cf. OBJLIST: (OBJECTBE1)) include BEINGO (John is being a fool) and EMBEDDEDQ (The question is: Why did John go?).

POBJLIST: Because of the frequent occurrence of the passive construction in scientific writing, it is more economical to list the passive objects for each verb V in the word dictionary than to compute them by a rule of passive 'omission'. The POBJLIST values of a given verb are listed under the past participle (Ven) form of the verb. The correspondence between active and passive objects used in the preparation of dictionary entries is as follows.
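The correspondence table itself did not survive in our copy; as a hedged stand-in, the sketch below encodes a few plausible active-to-passive pairs. The pairs, the mapping name POBJLIST and the function are illustrative reconstructions only, not the paper's table:

    # Illustrative active->passive object correspondences (assumed, not
    # the paper's actual table).
    POBJLIST = {
        "NSTGO":  "NULLOBJ",  # He declared war.            -> War was declared.
        "NPN":    "PN",       # They attributed X to Y.     -> X was attributed to Y.
        "SOBJBE": "OBJBE",    # They elected him president. -> He was elected president.
        "NTOVO":  "TOVO",     # They asked him to go.       -> He was asked to go.
    }

    def passive_objects(active: set[str]) -> set[str]:
        """Derive the Ven (passive) object list from the active OBJLIST."""
        return {POBJLIST[o] for o in active if o in POBJLIST}

    print(passive_objects({"NSTGO", "NPN"}))  # {'NULLOBJ', 'PN'}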
IV. Object Attributes of the Verb (Continued)

This paper defines the 109 adjective, noun and verb subclasses of the NYU Linguistic String Parser (LSP). The subclasses have been treated here in such a way that they can be used as a guide for classifying new words for the lexicon and as a linguistic reference tool. Each entry below provides a definition of the subclass, a diagnostic frame, sentence examples, and a word list drawn from the lexicon of the computer grammar (ca. 10,000 word entries).

The subclasses are defined in terms of string grammar. In string analysis, a sentence is decomposed into an elementary sentence, or center string, and adjunct strings. In a string, each word class may be preceded or followed by left or right adjunct strings, and the center string as a whole may have adjunct strings which precede or follow the center string or occur at interior parts of the string. A string grammar makes restrictions as to which subclasses can co-occur. The subclass definitions, therefore, are based mainly on these occurrence possibilities (e.g., a count noun is specified as a noun which cannot occur without a preceding article).

More precisely, the entire computer grammar consists of a set of approximately 200 context-free (BNF) definitions, a set of about 250 restrictions, and a word dictionary. The BNF definitions define the center and adjunct strings of the language as well as sentence nominalization (embedded sentence) strings which may occur in subject, object or complement position. In parsing a sentence, once an element of a string (e.g., SUBJECT, VERB, or OBJECT) has been identified in the sentence, restrictions are invoked to test various properties, including the subclasses of the words within this element or within this element and an element previously identified.

When a word is classified for the LSP lexicon it must be assigned to the syntactic classes (N, V, etc.) which appear in the context-free definitions and to the specific subclasses (e.g., count noun) which are tested for by the restrictions. The frames and definitions are a compact statement of these constraints. For reference to the computer grammar, we have used the code names of strings and restrictions, but the text can be read independently of the referenced material. The strings have roughly mnemonic names. An explanation of some of the mnemonics used in the text is included in the reference guide which follows this introduction. The restrictions referred to are of several main types: agreement restrictions (AGREE), noun phrase restrictions (N), position restrictions (POS), quantifier restrictions (Q), selection restrictions (SEL), restrictions on sentence embedding (SN), and WH-string restrictions (WH). The name of each restriction is preceded by a W or D and followed by an integer, e.g.: WAGREE2.

While a subclass is precisely defined by its appearance in the restrictions of the grammar, a person who is classifying words for the lexicon may need additional criteria in order to capture the intent of the subclass. This is particularly true in defining the verb subclasses which specify the object strings with which a verb can occur (the OBJLIST of the text).
Here the frames and restrictions may not suffice to distinguish occurrences of the words as instances of the subclass from other possible occurrences covered by the grammar. For example, it is important to distinguish an object string occurrence of SVINGO in Money kept people working overtime from a non-object-string occurrence of the same word-class sequence, e.g., one consisting of a noun with its adjunct, such as N + a reduced relative VINGO in They fired people working overtime. Of course, some verbs will have ambiguous occurrences, e.g., keep in the first example. It would be incorrect, however, to class fire as occurring with an SVINGO object on the basis of the second example. We have therefore used additional criteria in defining the object strings in order to clarify the intent of particular subclasses. The criteria used are:

(1) Excision. In an occurrence of an element with its adjunct in a sentence, the adjunct can be excised leaving a well-formed sentence unchanged in meaning and selection from the original sentence (except for detail added by the adjunct). Thus, we can test whether a word sequence in a sentence is an object string occurrence by excising the portion which might be an adjunct. If the remaining sentence is either different in grammaticality, meaning or selection from the original sentence, then the sequence as a whole is considered an object string occurrence. For our purposes, if the sequence is an object string occurrence, then the verb with which it occurs must be subclassed for that object string. For example, line, show and carry must be subclassed as occurring with the particle string DP, and walk not, since: They lined up. * They lined. He showed off. He showed. (difference in meaning) He carried on. The point carried. (difference in selection) He walked on. He walked. (no difference in grammaticality, selection or meaning)

(2) Understood reference. If a given noun in one sequence-occurrence is understood as referring to a particular noun N1, and, in a different occurrence, as referring to an N2, the two occurrences must not be considered as instances of the same string. For example, since messengers in (1) refers to they and messengers in (2) refers to boys, the two occurrences of as + N must not be considered as the same string: (1) They served the boys as messengers. (2) They treated the boys as messengers.

(3) Paraphrase. If a semantic contrast can be found in otherwise identical sequences, then these sequences cannot be considered as instances of the same string when subclassing a verb. (The term "sequence" applies to word sequences the structural description of which is under discussion.) For example, the as which is equivalent to 'in the capacity of' in (3) functions as part of the object string ASOBJBE, while the as which is equivalent to 'when' in (4) functions as part of an adjunct which does not restrict the verb: (3) John served as a lieutenant. (4) John changed as a lieutenant. Due to the difficulty of judging the appropriateness of a paraphrase, however, we have used this criterion sparingly.

As we have noted, the frames and definitions precisely reflect the use of the major classes and subclasses in the presently implemented string grammar. However, it should also be noted that this grammar, and the associated lexical categories, have been defined so as to be consistent with a subsequent stage of transformational analysis which is currently being implemented.
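In parsing terms, the keep/fire distinction just discussed reduces to an OBJLIST membership test once the candidate object string has been segmented. A minimal sketch (our own encoding, not LSP code; the two entries follow the discussion above):

    # Once SUBJECT-VERB-OBJECT is segmented, the candidate object string's
    # name is checked against the verb's OBJLIST entry.
    VERBS = {"keep": {"OBJLIST": {"NSTGO", "SVINGO"}},
             "fire": {"OBJLIST": {"NSTGO"}}}

    def object_ok(verb: str, object_string: str) -> bool:
        return object_string in VERBS[verb]["OBJLIST"]

    assert object_ok("keep", "SVINGO")      # Money kept people working overtime.
    assert not object_ok("fire", "SVINGO")  # "working overtime" must be an adjunct:
                                            # They fired people working overtime.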
In some cases, the same string form has several transformational sources; where this affects the dictionary classification, we have noted it.

Something should be said about the form of the dictionary entries as they appear in the computer lexicon. Each word is classified for all its major class occurrences (N, V, etc.) and its subclasses within each major class. The classification is based on the usage of the word in the language as a whole, not its use in a particular text. However, purely colloquial and literary uses have not been covered because of the intended application to scientific texts.

The classifications of the words are arranged in a hierarchical structure: the major classes may have subclasses and the subclasses in turn may have subclasses. For example, the adjective clear, which can occur as the predicate of a sentential subject, is in the subclass ASENT1. The particular types of sentential subjects clear occurs with (WH and THAT embeddings) require that it be classified in the two subclasses AWH and ATHAT of ASENT1. This part of the lexical entry appears as follows: CLEAR ADJ: (ASENT1: (AWH, ATHAT)), or alternatively: CLEAR ADJ: .10 .10 = ASENT1: (AWH, ATHAT). where the particular line number assigned (.10) is arbitrary. Where this type of further subdivision of a subclass is necessary, a sample dictionary entry is provided along with the definitions and frames below. It should be noted that while the entries in the lexicon are by word rather than stem, the word entries based on a particular stem can refer to portions of a basic entry which they share in common, e.g., the object list of a verb (OBJLIST) is specified once for all forms of the verb (tensed verb tV, present participle Ving, past participle Ven and infinitive V).

The notational conventions used in the subclass definitions and frames are as follows: * = an ungrammatical sequence; x (the underlined term) = the class being subclassed in the frame or a particular lexical item used in the frame; x (the double-underlined term) = the class being subclassed in the frame where the frame also contains a particular lexical item; (X) = in a frame, an optional element; (S) = in a definition, a further subdivision of a subclass; T = article; D = adverb; OBJ = a cover term for all the object strings (see the object string reference guide); SN = an embedded sentence of the following types: THATS (that John was here), FORTOVO (for Mary to go), TOVO (to live), SVINGO (them working overtime), C1SHOULD (that John be here), SNWH (whether/why/how ...). It should also be noted that the specified frame which delimits a word is not the only frame in which that word can occur; it serves merely as the test frame when classifying words.
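The line-numbered entry notation just illustrated can be read mechanically. Here is a minimal sketch, assuming the two-line textual form shown above (the regex and the resolved output shape are our own):

    # A reader for the line-numbered entry notation
    # (CLEAR ADJ: .10 / .10 = ASENT1: (AWH, ATHAT)).
    import re

    def resolve(entry: list[str]) -> dict:
        """Expand '.NN' references into the attributes they name."""
        defs, word = {}, {}
        for line in entry:
            m = re.match(r"\.(\d+) = (\w+): \(([^)]*)\)", line)
            if m:
                defs["." + m.group(1)] = (m.group(2), m.group(3).split(", "))
            else:
                head, _, rest = line.partition(": ")
                word[head] = rest
        return {k: {defs[ref][0]: defs[ref][1] for ref in v.split(", ") if ref in defs}
                for k, v in word.items()}

    print(resolve(["CLEAR ADJ: .10", ".10 = ASENT1: (AWH, ATHAT)"]))
    # {'CLEAR ADJ': {'ASENT1': ['AWH', 'ATHAT']}}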
Object string reference guide:

DP1 = particle (e.g., carry on)
DP1PN = DP1 + PN
DP2 = DP + N
DP2PN = DP2 + PN
DP3 = N + DP
DP3PN = DP3 + PN
DP4 = of-permutation of DP3
DP4PN = DP4 + PN
DSTG = adverb string
FORTOVO = for + Subject + to + V + Object
NA = N + Adjective
NASOBJBE = N + as + Object of be
ND = N + Adverb
NN = N (indirect object) + N
NPN = N + PN
NPSNWH = N + P + SNWH (wh-complement)
NPSVINGO = N + P + SVINGO
NPVINGO = N + P + VINGO
NPVINGSTG = N + P + VINGSTG
NSNWH = N + SNWH
NSTGO = object N
NSVINGO = N's + VINGO
NTHATS = N + that + ASSERTION
NTOBE = N + to + be + Object of be
NTOVO = N + to + V (infinitive) + Object
NULLOBJ = null
OBJECTBE = OBJBE + verbal objects of be
PN = prepositional phrase
PNHOWS = PN + how + ASSERTION
PNN = PN + N (inverted NPN string)
PNSNWH = PN + SNWH
PNTHATS = PN + THATS
PNTHATSVO = PN + that + SVO
SVO = Subject + tenseless V + Object
THATS = that + ASSERTION
TOVO = to + tenseless V + Object
VENO = past participle + Object
VINGO = Ving + Object
VINGOFN = (N's) + Ving + of + Object
VINGSTGPN = VINGSTG + PN

I. Adjective Subclasses.

AASP. Frame: N be Adj to V OBJ. An adjective is in AASP if it occurs only with the non-sentential (non-SN) right adjunct to + V:
She is due to arrive at five.
She was right to object.
Adjectives which occur with both non-sentential and sentential right adjuncts are not in AASP (see ASENT1, ASENT3), e.g.:
It is to be assumed that John left.
John is certain to go: John is certain that he will go. John is not certain whether to go. (ASENT1)
John is eager to go: John is eager for Mary to go.
He is anxious to leave. (ASENT3)
WORD LIST: able, fit, free, quick, ready, set, slow.

AINPA. Frame: Adj in the sentence adjunct string PA (P = in or at). An adjective is in subclass AINPA if it occurs in the adjective position in the sentence adjunct string PA, e.g.: in general, at present, in particular (WPOS11). The particular P must be specified for each adjective.
Examples:
In general, we can maintain the following.
We do not, at present, know the answer.
We cannot say, in advance, what tomorrow will bring.
We didn't know what to think about her statement at first.
Dictionary Entry: GENERAL ADJ: (.10), ..., AINPA: (IN).
WORD LIST: advance (in), best (at), first (at), full (in), general (in), last (at), least (at), particular (in), present (at), short (in).

AINRN. Frame: N Adj X (X is not an adjunct or conjunct of Adj). An adjective is in the small subclass AINRN if it can occur as a single-word right adjunct of a noun (WN50):
the people present
The figure above illustrates this point.
The people absent represent the dissenting ...

APREQ. Examples: an additional five people; the following three items.
An additional five people were found.
The following three items were mentioned.
Please make the next several payments on time.
We chose the first few people to welcome him.
The next ten people will constitute the control group.
(The occurrence of superlatives before QN, as in the tallest three boys, is accounted for by a separate statement in WN5; therefore, superlative forms should not be listed as APREQ.)
WORD LIST: above, additional, another, best, bottom, first, good, last, necessary, next, other, own, particular, previous, representative, same, top, usual, very, wrong.

ASCALE. Frame: Q N Adj (Adj is not comparative). An adjective is in ASCALE if it can occur to the right of the measure sequence QN in which N is in subclass NUNIT (inches, feet, pounds, years, etc.) (WQ2), e.g., long in: The line is 10 inches long.
Examples: The line is ten inches long. This is a ten inch long line.
ASCALE includes long, wide, deep, broad, tall, thick, high, old:
He is five years old.
He is a five year old child.
Since both ASCALE and non-ASCALE adjectives can occur in Q N Adj, ...

ASENT1 and ASENT3. An adjective such as certain occurs both with a sentential subject and with a sentential right adjunct:
John is certain that he sold books. (ASENT3)
It is certain that he sold books.
Therefore, such adjectives should be listed as both ASENT1 and ASENT3.

ASENT1 is subdivided according to the type of SN string with which the particular ASENT1s occur; i.e.:
1) ASENT1: (AFORTO)
For us to leave now would be easy. It would be easy for us to leave now.
2) ASENT1: (ASHOULD)
That he return is imperative.
3) ASENT1: (ATHAT)
That they lied is obvious. It is obvious that they lied.
4) ASENT1: (AWH)
Whether he will come is uncertain. It is uncertain whether he will come.
Dictionary Entry: CLEAR ADJ: .10. 10 = ASENT1: (AWH, ATHAT).

ASENT1: (AFORTO) is further subdivided into three classes according to the type of extraction from the embedded sentence which occurs with a particular adjective; viz.:
1) ASENT1: (AFORTO: (OBJEXT)) occurs in N2 be Adj (for N1) to V:
The problem will be easy for John to solve. For John to solve the problem will be easy.
2) ASENT1: (AFORTO: (SUBJEXT)) occurs in N1 be Adj to V OBJ:
John was kind to invite me. (related to: For John to invite me was kind.)
3) ASENT1: (AFORTO: (NOEXT)) occurs with neither type of extraction:
For John to write a letter now would be curious.
(*He is tall that they passed his doorway.)

ASENT3 is subdivided according to the type of SN string within which the particular ASENT3s occur; i.e.:
1) ASENT3: (AFORTO)
I would be happy for you to come.
2) ASENT3: (ASHOULD)
I am insistent that you go alone.

Appendix:
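For readers juggling the many mnemonics, the reference guide above can be treated as a machine-readable table. The small Python sketch below does nothing more than map a few of the mnemonics (partially transcribed from the list above) to their glosses.

# Partial lookup table over the object string reference guide.
OBJECT_STRINGS = {
    "DP1":    "particle (e.g. carry on)",
    "NSTGO":  "object N",
    "NPN":    "N + PN",
    "THATS":  "that + ASSERTION",
    "TOVO":   "to + tenseless V + object",
    "SVINGO": "subject + Ving + object (them working overtime)",
}

def gloss(mnemonic: str) -> str:
    return OBJECT_STRINGS.get(mnemonic.upper(), "unknown object string")

print("SVINGO:", gloss("svingo"))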
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
605
0
null
null
null
null
null
null
null
null
3c240ba09e53b1df87a1fdb171a6f9c1c6c3ebf5
219302766
null
Pattern-Matching Rules for the Recognition of Natural Language Dialogue Expressions
Man-machine dialogue using everyday conversational English.
{ "name": [ "Colby, Kenneth Mark and", "Parkison, Roger C. and", "Faught, Bill" ], "affiliation": [ null, null, null ] }
null
null
null
1974-09-01
0
11
null
null
null
null
null
Bracketing the pattern into shorter segments.
Negations and anaphora.
Matching the pattern with stored patterns having pointers to response functions in memory. If a complete match is not found, a fuzzy match is attempted by deleting elements from the pattern one at a time. If no match is found, the RESPOND module must decide what to do.
Complete and fuzzy matching when the pattern contains two or more segments.
A listing of the simple patterns.

The following expressions are all translated into the same pattern:
(1) WHERE DO YOU WORK?
(2) WHAT SORT OF WORK DO YOU DO?
(3) WHAT IS YOUR OCCUPATION?
In PARRY1 a procedure scans these expressions looking for an information-bearing contentive such as "work", "for a living", etc. When it finds such a contentive along with "you" or "your" in the expression, ...

The recognition module has 4 main steps:
1) Identify the words in the question and convert them to internal synonyms.
2) Break the input into segments at certain bracketing words. (A list of the bracketing terms appears in Fig. 3.)
3) Match each segment (independently) to a stored pattern.
A simple pattern would be (WHAT BE YOU JOB), whereas a complex pattern would be ((WHY BE YOU) (IN HOSPITAL)). Our experience with this method of segmentation shows that ... When more than one simple pattern is detected in the input, a second ...

- BY WHAT?
- DO YOU KNOW ANYTHING ABOUT BOOKIES?
(The response functions provide the information that "he" refers to the "bookie" and that "get even with" is a known idiom.)

Certain words are not re-spelled: 1) they are high-frequency words, and it would be wasteful to repeatedly attempt to re-spell them; 2) they could be re-spelled into a completely unrelated word; 3) they might be part of an idiom and must be kept around until after the idioms are checked.

[Figure: excerpt from the internal synonym dictionary; each surface word is replaced by a canonical synonym, e.g. GAVE -> GIVE, WENT -> GO, SHOT/STAB/STRANGLE/VIOLENCE -> KILL, ITALIAN/SICILIAN/ROMAN/FOREIGNER -> ITALY, HIT MAN/MURDERER/TORPEDO -> KILLER, GODFATHER -> MOVIE, WISE -> SMART, WAY OUT -> SOLUTION, TAKE YOUR OWN LIFE -> SUICIDE.]
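As an illustration of the recognition steps, here is a minimal Python sketch. It is a reconstruction from the description above, not the authors' code; the synonym table, bracketing words and stored patterns are toy stand-ins for the system's much larger tables.

# Toy reconstruction: synonym substitution, segmentation at bracketing
# words, exact match, then fuzzy match by deleting pattern elements
# one at a time.
SYNONYMS = {"gave": "give", "went": "go", "occupation": "job",
            "is": "be", "your": "you"}            # stand-in dictionary
BRACKETS = {"why", "what", "where", "in", "because"}  # stand-in list
STORED = {("what", "be", "you", "job"): "ANSWER_JOB",
          ("why", "be", "you"): "ANSWER_WHY",
          ("in", "hospital"): "ANSWER_HOSPITAL"}

def words(text):
    return [SYNONYMS.get(w, w) for w in text.lower().strip("?!. ").split()]

def segments(ws):
    segs, cur = [], []
    for w in ws:
        if w in BRACKETS and cur:      # start a new segment here
            segs.append(tuple(cur)); cur = []
        cur.append(w)
    if cur:
        segs.append(tuple(cur))
    return segs

def match(seg):
    if seg in STORED:                  # complete match
        return STORED[seg]
    for i in range(len(seg)):          # fuzzy match: drop one element
        reduced = seg[:i] + seg[i + 1:]
        if reduced in STORED:
            return STORED[reduced]
    return None                        # hand off to the RESPOND module

print([match(s) for s in segments(words("What is your occupation?"))])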
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
605
0.018182
null
null
null
null
null
null
null
null
c52a941af78a81a2e1d147ef3848c22cf4a34ecd
219303879
null
Technique: Letters with Variable Values and the Mechanical Inflection of {R}umanian Words
numerous to be considered irregular. The method of storing the several allomorphs of the stem for automatic inflection misses the natural unity of the word. We have constructed a mechanical Morphological Dictionary containing 2058 written Rumanian words, with a synthetic representation of all these phonetic alternations. An algorithm based on this representation generates the inflectional noncompound forms of these words. They are Rumanian nouns, adjectives, and verbs, the main part belonging to the basic word stock [8, 17]. About 45 percent of them present stem alternations.(1) The algorithm whose logic was given in [3] is the background of a set of programs written in the programming language ASSIRIS for the French computer IRIS 50 and its Rumanian counterpart FELIX C-256. The programs were recently run at the Territorial Electronic Calculus Center of Timisoara, verifying the algorithm. The synthetic representation uses G. C. Moisil's notion of letters with variable values [14, 15], which V. Gutu Romalo developed [9]. The setting of our research is Marcus's theory of mathematical linguistics [12, 13], Diaconescu's study of word segmentation and the degree of regularity [5, 6], Domonkos's ... (1) It seems that in Rumanian only 28 percent or even less of the total number of words have these phonetic alternations, but in our dictionary reference is made generally to the most frequently used words, with relative frequency above 0.22% [17].
{ "name": [ "Bocsa, Minerva" ], "affiliation": [ null ] }
null
null
null
1974-12-01
0
0
null
null
null
null
spelling. Nevertheless, the words with nonconstant stem are too numerous to be considered irregular. The method of storing the several allomorphs of the stem for automatic inflection misses the natural unity of the word. The algorithm whose logic was given in [3] is the background of a set of programs written in the programming language ASSIRIS for the French computer IRIS 50 and its Rumanian counterpart FELIX C-256. The programs were recently run at the Territorial Electronic Calculus Center of Timisoara, verifying the algorithm. ... no. 3, Bucuresti, 1959.
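To illustrate the idea of letters with variable values, here is a minimal Python sketch. The encoding and the sample paradigm (masă / mese / masa / mesele, 'table') are mine and merely illustrate the principle; they do not reproduce the paper's actual dictionary representation or the ASSIRIS programs.

# Sketch: a stem stored once, with variable-valued letters. Uppercase
# symbols in the template are variables; each inflectional form
# assigns them concrete values, capturing the a ~ e alternation.
TEMPLATE = ["m", "A", "s", "B"]
FORMS = {
    # form name: ({variable: value}, ending)
    "sg-indef": ({"A": "a", "B": "ă"}, ""),     # masă
    "pl-indef": ({"A": "e", "B": "e"}, ""),     # mese
    "sg-def":   ({"A": "a", "B": "a"}, ""),     # masa
    "pl-def":   ({"A": "e", "B": "e"}, "le"),   # mesele
}

def inflect(form: str) -> str:
    values, ending = FORMS[form]
    stem = "".join(values.get(ch, ch) for ch in TEMPLATE)
    return stem + ending

for f in FORMS:
    print(f, "->", inflect(f))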
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
602
0
null
null
null
null
null
null
null
null
de6cd50742fe3728ea6a041869a9f1dcc2493c19
59849058
null
Understanding by Conceptual Inference
(CAUSE C0055 C0059)
C0066: (CAUSE C0055 C0061)
C0065: (CAUSE C0055 C0063)
C0064: (TS C0063 C0016)
C0063: (*MFEEL* #BILL1 #NEGEMOTION #JOHN1)
C0062: (TIME C0061 C0016)
C0061: (POSCHANGE #MARY1 #JOY)
C0060: (TS C0059 C0016)
C0059: (WANT #BILL1 C0058).
{ "name": [ "Rieger, Chuck" ], "affiliation": [ null ] }
null
null
null
1974-12-01
0
2
null
Any theory of language must also be a theory of inference and memory. It does not appear to be possible to "understand" even the simplest of utterances in a contextually meaningful way in a system in which language fails to interact with a language-free memory and belief system, or in a system which lacks a spontaneous inference reflex. People apply a tremendous amount of cognitive effort to understanding the meaning content of language in context. Most of this effort is of the form of spontaneous conceptual inferences which occur in a language-independent meaning environment. I have developed a theory of how humans process the meaning content of utterances in context. The theory is called Conceptual Memory, and has been implemented by a computer program which is designed to accept as input analyzed Conceptual Dependency (Schank et al.) meaning graphs, to generate many conceptual inferences as automatic responses, then to identify points of contact among those inferences in "inference space". Points of contact establish new pathways through existing memory structures, and hence "knit" each utterance in with its surrounding context.

Sixteen classes of conceptual inference have been identified and implemented, at least at the prototype level. These classes appear to be essential to all higher-level language comprehension processes. Among them are causative/resultative (those which ...). Interactions of conceptual inference with the language processes of (1) word sense promotion in context, and (2) identification of referents to memory tokens are discussed. A theoretically important inference-reference "relaxation cycle" is identified, and its solution discussed. The theory provides the basis of a computationally effective model of language comprehension at a deep conceptual level, and should therefore be of interest to computational linguists, psychologists and computer scientists alike.

1. The Need for a Theory of Conceptual Memory and Inference

Research in natural language over the past twenty years has been focused primarily on processes relating to the analysis of individual sentences (parsing). Most of the early work was devoted to syntax. Recently, however, there has been a considerable thrust in the areas of semantic, and importantly, conceptual analysis (see (M1), (S1) and (C1), for example).
Whereas a syntactic analysis elucidates a sentence's surface syntactic structure, typically by producing some type of phrase-structure parse tree, conceptual analysis elucidates a sentence's meaning (the "picture" it produces), typically via production of an interconnected network of concepts which specifies the interrelationships among the concepts referenced by the words of the sentence. On the one hand, syntactic sentence analysis can more often than not be performed "locally", that is, on single sentences, disregarding any sort of global context; and it is reasonably clear that syntax has generally very little to do with the meaning of the thoughts it expresses. Hence, although syntax is an important link in the understanding chain, it is little more than an abstract system of encoding which does not for the most part relate in any meaningful way to the information it encodes. On the other hand, conceptual sentence analysis, by its very definition, is forced into the realm of general world knowledge; a conceptual analyzer's "syntax" is the set of rules which can produce the range of all "reasonable" events that might occur in the real world. Hence, in order to parse conceptually, the conceptual analyzer must interact with a repository of world knowledge and world knowledge handlers (inferential processes). This need for such an analyzer-accessible world knowledge repository has provided part of the motivation for the development of the following theory of conceptual inference and memory. However, the production of a conceptual network from an isolated sentence is only the first step in the understanding process. After this first step, the real question is: what happens to this conceptual network after it has been produced by the analyzer? That is, if we regard the conceptual analyzer as a specialized component of a larger memory, then the allocation of memory resources in reaction to each sentence follows the pattern: (phase 1) get the sentence into a form which is understandable, then (phase 2) understand it! It is a desire to characterize phase 2 which has served as the primary motivation for developing this theory of memory and inference. In this sense, the theory is intended to be a charting-out of the kinds of processes which must surely occur each time a sentence's conceptual network enters the system. Although it is not intended to be an adequate or verifiable model of how these processes might actually occur in humans, the theory described in this paper has nevertheless been implemented as a computer model under PDP-10 Stanford 1.6 LISP. While the implementation follows as best it can an intuitively correct approach to the various processes described, the main intent of the underlying theory is to propose a set of memory processes which, taken together, could behave in a manner similar to the way a human behaves when he "understands language".

The attentive human mind is a volatile processor. My conjecture is that information simply cannot be put into it in a passive way; there are very primitive inference reflexes in its logical architecture which each input meaning stimulus triggers. I will call these primitive inference reflexes "conceptual inferences", and regard them as one class of subconscious memory process. I say "subconscious" because the concern is with a relatively low-level stratum of "higher-level cognition", particularly insofar as a human applies it to the understanding of language-communicated information.
The eventual goal is to synthesize in an artificial system the rough flow of information which occurs in any normal adult response to a meaningfully-connected sequence of natural language utterances. This of course is a rather ambitious project. In this paper I will discuss some important classes of conceptual inference and their relation to a specific formalism I have developed (R1).

Let me first attempt, by a fairly ludicrous example, to convince you (1) that your mind is more than a simple receptacle for data, and (2) that you often have little control over the thoughts that pop up in response to something you perceive. Read the following sentence, pretending you were in the midst of an absorbing novel:

EARLIER THAT EVENING, MARY SAID SHE HAD KILLED HERSELF.

One of two things probably occurred: either you chose as referent of "herself" some person other than Mary (in which case everything works out fine), or (as many people seem to do) you first identified "herself" as a reference to Mary. In this case, something undoubtedly seemed awry: you realized either that your choice of referent was erroneous, that the sentence was part of some unspecified "weird" context, or that there was simply an out-and-out contradiction. Of course, all three interpretations are unusual in some sense because of a "patently obvious" contradiction in the picture this utterance elicits. The sentence is syntactically and semantically impeccable; only when we "think about it" does the big fog horn upstairs alert us to the implicit contradiction:

MARY SPEAKS AT TIME T
   (enablement inference)
MARY ALIVE AT TIME T
MARY NOT ALIVE AT TIME T
   (state-duration inference)
MARY CEASES BEING ALIVE AT TIME T-d
   (resultative inference)
MARY KILLS HERSELF AT TIME T-d

Here is the argument: before reading the sentence, you probably had no suspicion that what you were about to read contained an implicit contradiction. Yet you probably discovered that contradiction effortlessly! Could there have been any a priori "goal direction" to the three simple inferences above? My conclusion is that there could not have been. If we view the mind as a multi-dimensional "inference space", then each incoming thought produces a spherical burst of activity about the point where it lands in this space (the place where the conceptual network representing it is stored). The horizon of this sphere consists of an advancing wavefront of inferences: spontaneous probes which are sent out from the point. Most will lose momentum and eventually atrophy; but a few will conjoin with inferences on the horizons of other points' spheres. The sum of these "points of contact" represents the integration of the thought into the existing fabric of the memory, in that each point of contact establishes a new pathway between the new thought and existing knowledge (or perhaps among several existing pieces of knowledge). This to me is a pleasing memory paradigm, and there is a tempting analogy to be drawn with neurons and actual physical wavefronts as proposed years ago by researchers such as John Eccles (E1). The drawing of this analogy is, however, left for the pleasure of you, the reader. This killing example was of course more pedagogical than serious, since it is a loaded utterance involving rather black and white, almost trivial inferences.
But it suggests a powerful low-level mechanics for general language comprehension. Later, I will refer you to an example which shows how the implemented model, called MEMORY and described in (R1), reacts to the more interesting example MARY KISSED JOHN BECAUSE HE HIT BILL, perceived in a particular context. It does so in a way that integrates the thought into the framework of that context, and which results in a "causal chain expansion" involving six probabilistic inferences.
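Before turning to the model proper, a minimal sketch may make the killing example concrete. The predicates and the three toy rules below are illustrative assumptions, not the MEMORY program's representation; the point is only that a contradiction surfaces from undirected forward inference.

# Toy inference chain for "Earlier that evening, Mary said she had
# killed herself": speaking enables being alive at T; killing oneself
# at T-d results in ceasing to be alive, hence not alive at T.
facts = [("SPEAK", "mary", "T"), ("KILL", "mary", "mary", "T-d")]
derived = []

for f in facts:
    if f[0] == "SPEAK":                        # enabling inference
        derived.append(("ALIVE", f[1], "T", True))
    if f[0] == "KILL" and f[1] == f[2]:        # resultative inference
        derived.append(("CEASE-ALIVE", f[1], f[3]))

for f in list(derived):
    if f[0] == "CEASE-ALIVE":                  # state-duration inference
        derived.append(("ALIVE", f[1], "T", False))

polarities = {f[3] for f in derived if f[0] == "ALIVE"}
print("contradiction detected:", polarities == {True, False})  # True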
Central to this theory are sixteen classes of spontaneous conceptual inferences. These classes are abstract enough to be divorced from any particular meaning representation formalism. However, since they were developed concurrently with a larger model of conceptual memory (R1) which is functionally a part of a language comprehension system involving a conceptual analyzer and generator (MARGIE (S3)), it will help make the following presentation more concrete if we first have a brief look at the operation and goals of the conceptual memory, in the context of the complete language comprehension system. The memory adopts Schank et al.'s theory (S1, S2) of Conceptual Dependency (CD) as its basis for representation. CD is a theory of meaning representation which posits the existence of a small number of primitive actions (eleven are used by the conceptual memory), a number of primitive states, and a small set of connectives (links) which can join the actions and states together into conceptual graphs (networks). (Typical links join actors to their actions and actions to the states they cause.) Each primitive action has a case framework which defines conceptual slots which must be filled whenever the act appears in a conceptual graph. There are in addition TIME, LOCation and INSTrumental links, and these, as are all conceptual cases, obligatory, even if they must be inferentially filled in by the conceptual memory (CM). Assuming the conceptual analyzer (see (R2)) has constructed, in consultation with the CM, a conceptual graph of the sort typified by Figure 1, the first step for the CM is to begin "integrating" it into some internal memory structure which is more amenable to the kinds of active inference manipulations the CM wants to perform. This initial integration occurs in three stages. Thus, for instance, an internal memory token with no features is simply "something" if it must be expressed by language, whereas the token illustrated in Figure 2 would represent part of our knowledge about Bill's friend Mary Smith, a female human who ... Reference identification is the first stage of the initial integration of the graph into internal memory structures.

Each inference atom reports, among other things: 3. a structure which gathers together several pointers to concepts, tokens and other structures, and which is the substance of the new inference; 4. a default "significance factor", which is a rough, ad hoc measure of the inference's probable relative significance (this is used only if a more sophisticated process, to be described, fails); 5. a REASONS list, which is a list of all other structures in the CM which were tested by the discrimination net leading up to this inference atom. Every dependency structure has a REASONS list recording how the structure arose, and the REASONS list plays a vital role in the generation of certain types of inference. The inference evaluator performs matching between each new inference as it arises and existing memory dependency structures.
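The bookkeeping attached to each dependency structure (a STRENGTH, a significance factor, a REASONS list recording the structures it arose from) suggests a simple record type. The Python sketch below is a guessed-at convenient shape for such records, not the actual PDP-10 LISP data layout.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Dependency:
    """A memory dependency structure: a predicate with arguments,
    a likelihood (STRENGTH), a significance factor, and a REASONS
    list pointing at the structures it was inferred from."""
    predicate: str
    args: Tuple[str, ...]
    strength: float = 1.0
    significance: float = 1.0
    reasons: List["Dependency"] = field(default_factory=list)

hit = Dependency("PROPEL", ("john", "hand", "bill"))
hurt = Dependency("NEGCHANGE", ("bill", "PSTATE"),
                  strength=0.9, significance=0.8, reasons=[hit])
print(hurt.predicate, hurt.strength, len(hurt.reasons))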
Because "fuzziness" in the matching process implies access to a vast number of heuristics (to illustrate: would it be more like our friend the lawyer or our friend the carpenter to own a radial arm saw?), the evaluator delegates most of the matching responsibility to programs, again organized by conceptual predicates, called normality molecules ("N-molecules"). N-molecules, which will be discussed more later, can apply detailed heuristics to ferret out fuzzy confirmations and contradictions. As I will describe, N-molecules also implement one class of conceptual inference. Confirmations and contradictions discovered by the evaluator are noted on special lists which serve as sources for possible subsequent responses by the CM. In addition, confirmations lead to invocation of the structure merger, which physically replaces the two matching structures by one new aggregate structure, and thereby knits together two lines of inference. As events go, this is one of the most exciting in the CM.

[Figure: the NEGCHANGE inference molecule, a LISP program used by the current system. Its inference atoms include NEGCHANGE1 (people often want to better themselves after some NEGCHANGE), NEGCHANGE2 (a person gets happy when an enemy suffers a NEGCHANGE), NEGCHANGE3 (people don't like others who hurt them), and NEGCHANGE4 (if X damages Y's property, then X might feel anger toward Y).]

Inference cutoff occurs when the product of an inference's STRENGTH (likelihood) and its significance factor falls below a threshold (0.25). This ultimately restricts the radius of each sphere in the inference space; in the current model, the threshold is set low to allow considerable expansion.

[Figure 5: the inference monitor.]

It is phenomenological that most of the human language experience focuses on actions, their intended and/or resulting states, and the causal ... In the following descriptions of these 16 classes, keep in mind that all types of inference are applicable to every subcomponent of every utterance, and that the CM is essentially a parallel simulation. Also bear in mind that the inference evaluator is constantly performing matching operations on each new inference in order to detect interesting interactions between inference spheres. It should also be emphasized that conceptual inferences are probabilistic and predictive in nature.
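A minimal sketch of the cutoff rule, reusing the Dependency record from the previous sketch; the 0.25 threshold is the one quoted above, while the toy NEGCHANGE molecule merely paraphrases one of the figure's inference atoms.

CUTOFF = 0.25   # threshold quoted in the text

def expand(queue, molecules):
    """One breadth-first pass of the monitor: each structure is handed
    to the inference molecule for its predicate; offspring survive only
    while STRENGTH * significance stays above the cutoff."""
    nxt = []
    for s in queue:
        for child in molecules.get(s.predicate, lambda _: [])(s):
            if child.strength * child.significance >= CUTOFF:
                nxt.append(child)      # the inference sphere keeps growing
            # otherwise this probe atrophies here
    return nxt

def negchange_molecule(s):
    # e.g. "people often want to better themselves after a NEGCHANGE"
    return [Dependency("WANT", (s.args[0], "POSCHANGE"),
                       strength=0.95 * s.strength, significance=0.6,
                       reasons=[s])]

queue = [Dependency("NEGCHANGE", ("bill", "PSTATE"), strength=0.9)]
queue = expand(queue, {"NEGCHANGE": negchange_molecule})
print([d.predicate for d in queue])   # ['WANT'] (0.855 * 0.6 >= 0.25)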
In making inferences in apparently wasteful quantities, the CM is not seeking one result or truth. Rather, inferential expansion is an endeavor which broadens each piece of information into its surrounding spectrum, to fill out the information-rich situation to which the information-lean utterance might refer. The CM's gropings will resemble more closely the solution of a jigsaw puzzle than the more goal-directed solution of a crossword puzzle. Only the main ideas behind each inference class are given here; see (R1) for a more comprehensive treatment.

SPECIFICATION INFERENCES. The CM must be able to identify and attempt to fill in each missing slot of an incoming conceptual graph.
EXAMPLES:
**John was driving home from work. He hit Bill's cat. (inference) It was a car which John propelled into the cat.
**John bought a chalk line. (inference) It was probably from a hardware store that John bought the chalk line.
Our use of language presupposes a tremendous underlying knowledge about the world. Because of this, even in, say, the most explicit technical writing, certain assumptions are made by the writer (speaker) about the comprehender's knowledge: that he can fill in the plethora of detail surrounding each thought. In the CM, this corresponds to filling in all the missing conceptual slots in a graph. The utility of such a process is twofold. First, CM failures to specify a missing concept can serve as a source of requests for more information (or goals to seek out that information by CM actions, if the CM is controlling a robot). Second, by predictively completing the graph by application of general pattern knowledge of the modeled world, novel relations among specific concepts and tokens will arise, and these can lead to potentially significant discoveries by other inferences. To illustrate, a very common missing slot is the instrumental case. We generally leave it to the imaginative powers of the hearer to surmise the probable instrumental action by which some action occurred. ... That she may have been premature in the specification (and had later to undo it) is of secondary importance to the phenomenon that she did so spontaneously. (In the CM, specification inferences, as all inferences, are implemented in the form of structured programs which realize discrimination nets whose terminal nodes are concepts and tokens, rather than inferences as in general inference molecules. These specification procedures are called specifier molecules ("S-molecules"), and are quite similar to inference molecules. S-molecules are quick and to the point, quite flexible, and have as much "aesthetic potential" as even the most elegant declarative structures. A life-size procedure for this very narrow process of specifying the missing object of a PROPEL action would obviously require many more tests for related contexts ("John was racing down the hill on his bike. He hit Bill."
) But independent of the fidelity with which any given S-molecule executes its task, there is a very important claim buried both here and in the other inferential procedures in the CM. It is that there are certain ...

CAUSATIVE AND RESULTATIVE INFERENCES. If an action is perceived, its probable resulting states should be inferred (RESULTATIVE). If a state is perceived, the general nature of its probable causing action (or a specific action, if possible) should be inferred (CAUSATIVE).
**Mary hit Pete with a rock. ...
Causal expansion failures serve as queries for more information. Causal expansion successes, on the other hand, result in important intervening actions and states which draw out ("touch") surrounding context and serve as the basis for inferences in other categories. Appendix A contains the computer printout from MEMORY, tracing a causal expansion for "Mary kissed John because he hit Bill" in a particular context which makes the explanation plausible.

CLASS 4: MOTIVATIONAL INFERENCES. The desires (intentions) of an actor can frequently be inferred by analyzing the states (RESULTATIVE inferences) which result from an action he executes. These WANT-STATE patterns are essential to understanding and should be made in abundance.
**John pointed out to Mary that she hadn't done her chores. ... immediate causality of the action.
In the CM, candidates for MOTIVATIONAL inferences are the RESULTATIVE inferences the CM can produce from an action A: for each RESULTATIVE inference Ri which the CM could make from A, it conjectures that perhaps the actor of A desired Ri. Since the generation of a MOTIVATIONAL inference is dependent upon the results of another class of inference (in general, the actor could have desired things causally removed by several inferences from the immediate results of his action), the MOTIVATIONAL inference process is implemented by a special procedure, POSTSCAN, which is invoked between "passes" of the main breadth-first monitor. These passes will be discussed more later. Once generated, each MOTIVATIONAL inference will generally lead backward, via CAUSATIVE inferences, into an entire causal chain which led up to the action. This chain will frequently connect in interesting ways with chains working forward from other actions.

5.4 CLASS 5: ENABLING INFERENCES. Every action has a set of enabling conditions: conditions which must be met for the action to begin or proceed. The CM needs a rich knowledge of these conditions (states), and should infer suitable ones to surround each perceived action.
**John saw Mary yesterday. (inference) John and Mary were in the same general location sometime yesterday.
**Mary told Pete that John was at the store. (inference) Mary knew that John was at the store.
The example at the beginning of the paper contained a contradiction which could be discovered only by making a very simple enabling inference about the action of speaking (any action, for that matter), namely that the speaker must be alive at the time he speaks.

ACTION PREDICTION INFERENCES. Whenever some WANT STATE of a potential actor is known, predictions about possible actions the actor might perform to achieve the state should be attempted. These predictions will provide potent potential points of contact for subsequently perceived actions.
EXAMPLES:
**John wants some nails. (inference) John might attempt to acquire some nails.
**Mary is furious at Rita. (inference) Mary might do something to hurt Rita.
DISCUSSION: Action prediction inferences serve the inverse role of MOTIVATIONAL inferences, in that they work forward from a known WANT STATE pattern into predictions about future actions which could produce the desired state. Just as a MOTIVATIONAL inference relies upon RESULTATIVE inferences, an ACTION PREDICTION inference relies upon CAUSATIVE inferences which can be generated from the state the potential actor desires. Because it is often impossible to anticipate the ef... Thus it is through MOTIVATIONAL, ACTION PREDICTION and ENABLING inferences that the CM can model (predict) the problem-solving behavior of each actor. Predicted actions which match up with subsequently perceived conceptual input serve as a very real measure of the CM's success at piecing together connected discourse and stories. I suspect in addition that ACTION PREDICTION inferences will play a key role in the eventual solutions of the "contextual guidance of inference" problem. Levy (L1) has some interesting beginning thoughts on this topic. ... he is near Mary, and a MOTIVATIONAL inference is that he wants to be near Mary. At this point an ENABLEMENT PREDICTION inference can be made to represent the general class of interactions John might have in mind. This will be of particular significance if, for instance, the CM knows already that John had something to tell her, since then the inferred action pattern would match quite well the action of verbal communication, in which the state of spatial proximity plays a key enabling role.

Control over some physical object P is usually desired by a potential actor because he is engaged in an algorithm in which P plays a role. The CM should attempt to infer a probable action from its knowledge of P's normal function.
EXAMPLES:
**Mary wants the book. She cursed the man in front of her. ...
**Mary saw that Baby Billy was running out into the street. (inference) Mary will pick Billy off the ground (INTERVENTION). She ran after him ...
Closely related to the other enabling inferences, these forms attempt to apply knowledge about enablement relations to infer the cause of an action's failure (in the case of MISSING ENABLEMENT), or to predict a WANT NOT-STATE which can lead, by action prediction inference, to possible actions of intervention on the part of the WANTer. In the second example above, Mary (and the CM) first must realize (via RESULTATIVE inferences) the potentially undesirable consequences of Billy's running action (i.e., a possible NEGCHANGE for Billy). From this, the CM can retrace, locate the running action which could lead to such a NEGCHANGE, collect its enabling states, then conjecture that Mary might desire to annul one or more of them. Among them, for instance, would be that Billy's feet be in intermittent PHYSCONT with the ground. From the (WANT (NOT (PHYSCONT FEET GROUND))) structure, a subsequent ACTION PREDICTION inference can arise, predicting that Mary might put an end to (PHYSCONT FEET GROUND).
This will in turn require her to be located near Billy, and that prediction will match the RESULTATIVE inference made from her directed running (the next utterance input), knitting the two thoughts together.

Modeling the knowledge of potential actors is fundamentally difficult. Yet it is essential, since most all intention/prediction-related inferences must be based in part on guesses about what knowledge each actor has available to him at various times. The CM currently models others' knowledge by "introspecting" on its own: assuming another person P has access to the same kinds of information as the CM, P might be expected to know and infer much of what the CM itself would. In fact, all inferences must rely upon default assumptions about normality, since most of the CM's knowledge (and presumably a human's) exists in the form of general patterns, rather than specific relations among specific concepts and tokens. The next class of inference implements my belief that patterns, just as inferences, should be realized in the CM procedurally. A successful N-molecule assessment results in the creation of the assessed information as a permanent, explicit memory structure whose STRENGTH is the assessed compatibility. This structure is the normative inference. One is quickly awed by his own ability to make (usually quite accurate) commonsense conjectures such as these, and the process seems usually to be quite sensitive to features of the entities involved in the conjecture. It is my feeling that important insights can be gained via a more thorough investigation of the "normative inference" process in humans. Another role of N-molecules is mentioned in (R1) with respect to the inference-reference cycle I will describe shortly. Fig. 9 shows the substance of a prototype N-molecule for assessing dependency structures of the form (OWN P X) (person P owns object X):

Is P a member of a pure communal society, or is it an infant? If so, very unlikely that P owns X; otherwise, ...

Most interesting states in the world are transient. The CM must have the ability to make specific predictions about the expected (fuzzy) duration of an arbitrary state, so that information in the CM can be kept up to date.
**John handed Mary the orange peel. (tomorrow) Is Mary still holding the orange peel? (inference) Almost certainly not.
**Rita ate lunch a half hour ago. Is she hungry yet? (inference) Unlikely.
Time features of states relate in critical ways to the likelihood those states will be true at some given time. The thought of a scenario wherein the CM is informed that Mary is holding an orange peel, then 50 years later uses that information in the generation of some other inference, is a bit unsettling!
The CM must simply possess a low-level function whose job it is to predict normal durations of states, based on the particulars of the states, and to use that information in marking as "terminated" those states whose likelihood has diminished below some threshold. My conjecture is that a human notices and updates the temporal truth of a state only when he is about to use it in some cognitive activity: that most of the transient knowledge in our heads is out of date until we again attempt to use it in, say, some inference. Accordingly, before using any state information, the CM first filters it through the STATE DURATION inference process to arrive at an updated estimate of the state's likelihood as a function of its known starting time (its TS feature, in CD notation). The implementation of this process in the CM is as follows: an (NDUR S ?) structure is constructed for the state S whose duration is to be predicted, and this is passed to the NDUR specifier molecule. The NDUR S-molecule applies discrimination tests on features of the objects involved in S. Terminal nodes in the net are duration concepts (typically fuzzy ones), such as #ORDERHOUR, #ORDERYEAR. If a terminal node can be successfully reached, thus locating such a concept D, the property CHARV (characteristic time-function) is retrieved from D's property list. CHARV is a step function of STRENGTH vs. the amount of time some state has been in existence (Fig. 10). From this function, a STRENGTH is computed for S and becomes S's predicted likelihood. If the STRENGTH turns out to be sufficiently low, a (TF S ?NOW) structure is predictively generated to make S's low likelihood explicit.

Many inferences can be based solely on commonly observed or learned associations, rather than upon "logical" relations such as causation, motivation, and so forth. In a rough way, we can compare these inferences to the phenomenon of visual imagery which constructs a "picture" of a thought's surrounding environment. Such inferences should be made in abundance.

Based on the way a thought is communicated (especially the often telling presence or absence of information), inferences can be made about the speaker's reasons for speaking.
EXAMPLES:
**Don't eat green gronks. (inference) Other kinds of gronks are probably edible.
**Mary threw out the rotten part of the fig. (inference) She threw it out because it was rotten.
**John was unable to get an aspirin. (inference) John wanted to get an aspirin.
**Rita liked the chair, but it was green. (inference) The chair's color is a negative feature to Rita (or the speaker).
I have included this class only to represent the largely unexplored domain of inferences drawn from the way a thought is phrased. The CM will eventually need an explicit model of conversation, and this model will incorporate inferences from this class. Typical of such inferences are those which translate the inclusion of referentially superfluous features of an object into an implied causality relation (the fig example), those which infer desire from failure (the aspirin example), those which infer features of an ordinary X from features of special kinds of X (the gronk example), and so forth.
These issues will lead to a more goal-directed model than I am currently exploring.

Summary of the Inference Component. I have now sketched 16 inference classes which, I conjecture, lie at the core of the human inference reflex. The central hypothesis is that a human language comprehender performs more subconscious computation on meaning structures than any other theory of language comprehension has yet acknowledged. When the current CM is turned loose, it will often generate upwards of 100 inferences from a fairly banal stimulus such as "John ..." ... central to any restricted domain which involves volitional actors. It is a current challenge to find such a restricted, yet interesting, domain to which these ideas can be transplanted and applied in slightly more goal-directed environments.

In case (c), where a set of candidates can be located, T receives the set of features lying in the intersection of all candidates' occurrence sets (this will be at least the descriptive set). In either case, the CM then has an internal token to work with, allowing the conceptual graph in which references to it occur to be tentatively integrated, and eventually returns to its quiescent state. One byproduct of the inferencing is that the occurrence set of each memory object involved ...

[Figure: descriptive set for "The big red dog who ate the bird".]

... fully to exactly one). Thus, when the inference reflex has ceased, the CM reapplies the reference intersection algorithms to each unidentified token, to seek out any inference-clarified references. Successful identifications at this point result in the merging (by the structure merger mentioned earlier) of the temporary token's occurrence set with the identified token's. Another byproduct of integrating the conceptual patterns from each input is that many related concepts and tokens implicitly involved in the situation are activated, or "touched". This can be put to use in two ways. First, implicitly touched concepts can clarify what might otherwise be an utterly opaque subsequent reference. If, for instance, someone says (outside of a particular context): "The nurses were nice", you will probably inquire "What nurses?" If, on the other hand, someone says: "John was run over by a milk truck. When he woke up the nurses were nice", you will experience neither doubt about the referents of "the nurses", nor surprise at their mention. I presume that a subconscious filling-out of the situation "John was run over by a milk truck" implicitly activates an entire set of conceptually relevant concepts, "precharging" ideas of hospitals ... underlying the nurse example. It is more often than not the "gestalt" meaning context of an utterance which restricts the kinds of meaningful associations a human makes. In contrast to the nurse example above, most people would agree that the reference to "the nurses" in the following situation is a bit peculiar:

In the dark of the night, John had wallowed through the knee-deep mud to the north wall of the deserted animal hospital. The nurses were nice.
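The candidate-intersection step just described reduces to simple set filtering. In the Python sketch below, tokens are flattened to feature sets; this is my simplification of the occurrence-set machinery, with hypothetical token names.

# Reference by intersection: keep the memory tokens whose feature
# sets contain the descriptive set of the incoming noun phrase.
TOKENS = {
    "C0713": {("ISA", "#DOG"), ("SIZE", "BIG"), ("COLOR", "RED"),
              ("ATE", "#BIRD1")},
    "C0841": {("ISA", "#DOG"), ("SIZE", "SMALL")},
    "C0902": {("ISA", "#CAT"), ("COLOR", "RED")},
}

def candidates(descriptive_set):
    return [t for t, feats in TOKENS.items() if descriptive_set <= feats]

# "The big red dog who ate the bird"
desc = {("ISA", "#DOG"), ("SIZE", "BIG"), ("COLOR", "RED"),
        ("ATE", "#BIRD1")}
print(candidates(desc))   # exactly one candidate: the reference resolves

Per the text, when zero or several candidates survive, the CM would instead create a temporary token carrying the features common to all candidates and retry the identification after the inference reflex has run.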
It does not appear to be possible to "understand" even the simplest of utterances in a contextually meaningful way in a system in which language fails to interact with a language-free belief system, or in a system which lacks a spontaneous inference reflex. One very important theoretical issue concerns exactly how much "inference energy" is expended before the fact (prediction, expectation) versus how much is expended after the fact, to clear up specific problems of how the utterance fits the context. My belief is that there is a great deal of exploratory, essentially undirected inferencing which is frequently overlooked, and which cannot be repressed because it is the language-related manifestation of the much broader motivational structure of the brain. Rather, ...

Mary's placing her lips in contact with John was probably caused by Mary feeling a positive emotion toward John. (causative) Mary's lips were in contact with John. (C0024, C0032)

This is the partially integrated memory structure, after references have been established. No reference ambiguity is assumed to exist for this example. C0035 is the resulting memory structure for this utterance. We suppress all but this structure on the starting inference queue. (We will be seeing about one ...) Here, because Mary was feeling a negative emotion toward Bill at the time when Bill underwent a small NEGCHANGE, the prediction can be made that Mary may have experienced a degree of joy. ... Notice on the REASONS and OFFSPRING sets the results of other inferencing which was not discussed above. Looking, propagation and strength factors for each modifying structure ... Figure 4 illustrates the small implemented ... The STATE DURATION inference thus acts as a cleansing filter on state information which is fed to various other inference processes.

Many "associative" inferences can be made to produce new features of an object (or aspects of a situation) from known features. If something wags its tail, it is probably an animal of some sort; if it bites the mailman's leg, it is probably a dog; if it has a gray beard and speaks, it is probably an old man; if it honks in a distinctive way, it is probably some sort of vehicle; etc. These classes are inherently unstructured, so I will say no more about them here, except that they frequently contribute features which help clear up reference ambiguities and initial reference failures.

... listening to an annoyingly slow speaker), and this is tantalizing evidence that something like a proto-sentence generator is thrashing about upstairs.
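The STATE DURATION filter described earlier (the NDUR specifier molecule and the CHARV step function) can be sketched as follows. The duration constants and step shapes are placeholders with the form the text describes; the actual curves of Fig. 10 are not reproduced.

# Sketch of the STATE DURATION filter: map a state to a fuzzy
# duration concept, then read a STRENGTH off that concept's
# characteristic time-function (a step function of elapsed time).
CHARV = {   # elapsed-hours thresholds -> STRENGTH (placeholder shapes)
    "#ORDERHOUR": [(1, 1.0), (3, 0.5), (float("inf"), 0.05)],
    "#ORDERYEAR": [(24 * 365, 1.0), (float("inf"), 0.3)],
}

def duration_concept(state):
    # stand-in for the NDUR specifier molecule's discrimination net
    return "#ORDERHOUR" if state[0] == "HOLDING" else "#ORDERYEAR"

def strength(state, elapsed_hours):
    for limit, s in CHARV[duration_concept(state)]:
        if elapsed_hours <= limit:
            return s

print(strength(("HOLDING", "mary", "peel"), 0.5))      # 1.0
print(strength(("HOLDING", "mary", "peel"), 24 * 30))  # 0.05: terminated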
null
Figure 1: Conceptual dependency representation of the sentence "Mary kissed John because he hit Bill."

The second and third stages are (2) the isolation of subgraphs which will form the beginning inference queue (the input to the spontaneous inference component), and (3) [...]. After the initial integration, the inference component is applied simultaneously to each memory structure ("point in inference space") on the starting inference queue.

The control structure which implements the CM inference reflex is a breadth-first monitor whose queue at any moment is a list of pointers to dependency structures which have arisen by inference from the beginning structures isolated during the initial integration. It is the inference monitor's task to examine each dependency structure on the queue in turn, isolate its predicate, prepare its arguments in a standard format, collect several time aspects from the structure's occurrence set, then call the inference molecule associated with the predicate, passing along the arguments and time information (a sketch of this loop appears at the end of this passage).

Figure 2: A memory token for Bill's friend Mary Smith, with occurrence set
(ISA # #PERSON) (SEX # #FEMALE) (NAME # MARY) (SURNAME # SMITH) (OWNS # C0713) (RESIDENCE # C0846) (TSTART # C0654) [born in 1948].

All inferential knowledge in the CM is contained in inference molecules, which lie in one-one correspondence with conceptual predicates. [Each molecule gathers all the potential inferences (of the 16 theo]retical types to be described) which can be made from the dependency structure. Each potential inference within the inference molecule is called an inference atom.

The contribution of an inference atom which has been found applicable to the dependency structure reports eight pieces of information to a component of the monitor called the structure generator, whose job it is to embody each new inference in a memory structure. These eight pieces of information are the following:

1. a unique mnemonic which indicates to which of the 16 theoretical classes the new inference belongs (this mnemonic is associated with the new structure only temporarily on the inference queue, for subsequent control purposes)
2. the "reference name" of the generating inference atom (each atom has a unique name which is associated with the new memory structure for control purposes)
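Here is the monitor loop sketched in Common Lisp. The representation is an assumption made for illustration: a dependency structure is flattened to a list whose head is its conceptual predicate, and time aspects are omitted.

;; A sketch of the breadth-first inference monitor loop (illustrative only).
(defvar *inference-molecules* (make-hash-table)
  "One inference molecule per conceptual predicate: a function taking the
structure's arguments and returning a list of newly inferred structures.")

(defun run-inference-monitor (queue)
  "Examine each structure on QUEUE in turn and call the molecule for its
predicate; new inferences are appended to the rear of the queue, giving
the breadth-first expansion described in the text."
  (loop until (null queue)
        do (let* ((structure (pop queue))
                  (molecule (gethash (first structure) *inference-molecules*)))
             (when molecule
               (setf queue (append queue
                                   (funcall molecule (rest structure))))))))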
null
Main paper:

a simple example: The attentive human mind is a volatile processor. My conjecture is that information simply cannot be put into it in a passive way; there are very primitive inference reflexes in its logical architecture which each input meaning stimulus triggers. I will call these primitive inference reflexes "conceptual inferences", and regard them as one class of subconscious memory process. I say "subconscious" because the concern is with a relatively low-level stratum of "higher-level cognition", particularly insofar as a human applies it to the understanding of language-communicated information.
The eventual goal is to synthesize in an artificial system the rough flow of information which occurs in any normal adult response to a meaningfully-connected sequence of natural language utterances. This of course is a rather ambitious project. In this paper I will discuss some important classes of conceptual inference and their relation to a specific formalism I have developed (R1).

Let me first attempt, by a fairly ludicrous example, to convince you (1) that your mind is more than a simple receptacle for data, and (2) that you often have little control over the thoughts that pop up in response to something you perceive. Read the following sentence, pretending you were in the midst of an absorbing novel:

EARLIER THAT EVENING, MARY SAID SHE HAD KILLED HERSELF.

One of two things probably occurred: either you chose as referent of "herself" some person other than Mary (in which case everything works out fine), or (as many people seem to do) you first identified "herself" as a reference to Mary. In this case, something undoubtedly seemed awry: you realized either that your choice of referent was erroneous, that the sentence was part of some unspecified "weird" context, or that there was simply an out-and-out contradiction. Of course, all three interpretations are unusual in some sense because of a "patently obvious" contradiction in the picture this utterance elicits. The sentence is syntactically and semantically impeccable; only when we "think about it" does the big fog horn upstairs alert us to the implicit contradiction:

MARY SPEAK AT TIME T ==(enablement inference)==> MARY ALIVE AT TIME T
MARY KILLS HERSELF AT TIME T-d ==(resultative inference)==> MARY CEASES BEING ALIVE AT TIME T-d ==(state-duration inference)==> MARY NOT ALIVE AT TIME T

Here is the argument: before reading the sentence, you probably had no suspicion that what you were about to read contained an implicit contradiction. Yet you probably discovered that contradiction effortlessly! Could there have been any a priori "goal direction" to the three simple inferences above? My conclusion is that there could not have been. If we view the mind as a multi-dimensional "inference space", then each incoming thought produces a spherical burst of activity about the point where it lands in this space (the place where the conceptual network representing it is stored). The horizon of this sphere consists of an advancing wavefront of inferences: spontaneous probes which are sent out from the point. Most will lose momentum and eventually atrophy; but a few will conjoin with inferences on the horizons of other points' spheres. The sum of these "points of contact" represents the integration of the thought into the existing fabric of the memory, in that each point of contact establishes a new pathway between the new thought and existing knowledge (or perhaps among several existing pieces of knowledge). This to me is a pleasing memory paradigm, and there is a tempting analogy to be drawn with neurons and actual physical wavefronts as proposed years ago by researchers such as John Eccles (E1). The drawing of this analogy is, however, left for the pleasure of you, the reader. This killing example was of course more pedagogical than serious, since it is a loaded utterance involving rather black and white, almost trivial inferences.
But it suggests a powerful low-level mechanics for general language comprehension. Later, I will refer you to an example which shows how the implemented model, called MEMORY and described in (R1), reacts to the more interesting example MARY KISSED JOHN BECAUSE HE HIT BILL, which is perceived in a particular context. It does so in a way that integrates the thought into the framework of that context and which results in a "causal chain expansion" involving six probabilistic inferences.

background: Central to this theory are sixteen classes of spontaneous conceptual inferences. These classes are abstract enough to be divorced from any particular meaning representation formalism. However, since they were developed concurrently with a larger model of conceptual memory (R1), which is functionally a part of a language comprehension system involving a conceptual analyzer and generator (MARGIE (S3)), it will help make the following presentation more concrete if we first have a brief look at the operation and goals of the conceptual memory, in the context of the complete language comprehension system.

The memory adopts Schank et al.'s theory (S1, S2) of Conceptual Dependency (CD) as its basis for representation. CD is a theory of meaning representation which posits the existence of a small number of primitive actions (eleven are used by the conceptual memory), a number of primitive states, and a small set of connectives (links) which can join the actions and states together into conceptual graphs (networks). [...] Each primitive action has a case framework which defines conceptual slots which must be filled whenever the act appears in a conceptual graph. There are in addition TIME, LOCation and INSTrumental links, and these, as are all conceptual cases, are obligatory, even if they must be inferentially filled in by the conceptual memory (CM).

Assuming the conceptual analyzer (see (R2)) has constructed, in consultation with the CM, a conceptual graph of the sort typified by Figure 1, the first step for the CM is to begin "integrating" it into some internal memory structure which is more amenable to the kinds of active inference manipulations the CM wants to perform. This initial integration occurs in three stages. Thus, for instance, an internal memory token with no features is simply "something" if it must be expressed by language, whereas the token illustrated in Figure 2 would represent part of our knowledge about Bill's friend Mary Smith, a female human who [was born in 1948]. Reference identification is the first stage of the initial integration of the graph into internal memory structures.
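For concreteness, here is one plausible list rendering of a CD graph like Figure 1: two conceptualizations joined by a causal link. The notation, the token names, and the choice of PROPEL's object are guesses made for illustration; this is not the program's actual internal format.

;; Hypothetical list encoding of "Mary kissed John because he hit Bill".
(defparameter *figure-1-graph*
  '(cause
     ;; "he hit Bill": John propels some object X (perhaps his fist) to Bill
     (propel (actor john1) (object x1) (to bill1) (time t1))
     ;; "Mary kissed John": Mary's lips in physical contact with John
     (physcont (part lips-of-mary1) (val john1) (time t2))))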
[3. the new dependency structure itself, a graph knitting to]gether several pointers to concepts, tokens and other structures, which is the substance of the new inference
4. a default "significance factor", which is a rough, ad hoc measure of the inference's probable relative significance (this is used only if a more sophisticated process, to be described, fails)
5. a REASONS list, which is a list of all other structures in the CM which were tested by the discrimination net leading up to this inference atom. Every dependency structure has a REASONS list recording how the structure arose, and the REASONS list plays a vital role in the generation of certain types of inference.
[...]

[The inference evaluator performs matching] between each new inference as it arises and existing memory dependency structures. Because "fuzziness" in the matching process implies access to a vast number of heuristics (to illustrate: would it be more like our friend the lawyer or our friend the carpenter to own a radial arm saw?), the evaluator delegates most of the matching responsibility to programs, again organized by conceptual predicate, called normality molecules ("N-molecules"). N-molecules, which will be discussed more later, can apply detailed heuristics to ferret out fuzzy confirmations and contradictions. As I will describe, N-molecules also implement one class of conceptual inference. Confirmations and contradictions discovered by the evaluator are noted on special lists which serve as sources for possible subsequent responses by the CM. In addition, confirmations lead to invocation of the structure merger, which physically replaces the two matching structures by one new aggregate structure, and thereby knits together two lines of inference. As events go, this is one of the most exciting in the CM.

Inference cutoff occurs when the product of an inference's STRENGTH (likelihood) and its significance factor falls below a threshold (0.25). This ultimately restricts the radius of each sphere in the inference space; in the current model, the threshold is set low to allow considerable expansion.

[Figure 3: An inference molecule used by the current program: the LISP listing of the NEGCHANGE inference molecule. Its atoms include NEGCHANGE1 ("people often want to better themselves after some NEGCHANGE"), NEGCHANGE2 ("a person gets happy when an enemy suffers a NEGCHANGE"), NEGCHANGE3 ("people don't like others who hurt them"), and NEGCHANGE4 ("if X damages Y's property, then X might feel anger toward Y").]

[Figure 5: The inference monitor (diagram).]
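The cutoff test is simple enough to state in code. The 0.25 threshold comes from the text; the function and parameter names are assumptions for illustration.

;; Sketch of the inference cutoff test.
(defparameter *cutoff-threshold* 0.25)

(defun keep-inference-p (strength significance)
  "An inference survives only while its STRENGTH (likelihood, 0..1) times
its significance factor remains at or above the threshold; this bounds
the radius of each sphere in inference space."
  (>= (* strength significance) *cutoff-threshold*))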
It is phenomenological that most of the human language experience focuses on actions, their intended and/or resulting states, and the causal [relations which link them]. In the following descriptions of these 16 classes, keep in mind that all types of inference are applicable to every subcomponent of every utterance, and that the CM is essentially a parallel simulation. Also bear in mind that the inference evaluator is constantly performing matching operations on each new inference in order to detect interesting interactions between inference spheres. It should also be emphasized that conceptual inferences are probabilistic and predictive in nature, and that, by making them in apparently wasteful quantities, the CM is not seeking one result or truth. Rather, inferential expansion is an endeavor which broadens each piece of information into its surrounding spectrum, to fill out the information-rich situation to which the information-lean utterance might refer. The CM's gropings will resemble more closely the solution of a jigsaw puzzle than the more goal-directed solution of a crossword puzzle.

[Figure: the inference queue, before and after cutoff inferences are removed.]

[I give only the intuition] behind each inference class. See (R1) for a more comprehensive treatment.

The CM must be able to identify and attempt to fill in each missing slot of an incoming conceptual graph.
EXAMPLES:
**John was driving home from work. He hit Bill's cat. (inference) It was a car which John propelled into the cat.
**John bought a chalk line. (inference) It was probably from a hardware store that John bought the chalk line.

Our use of language presupposes a tremendous underlying knowledge about the world. Because of this, even in, say, the most explicit technical writing, certain assumptions are made by the writer (speaker) about the comprehender's knowledge: that he can fill in the plethora of detail surrounding each thought. In the CM, this corresponds to filling in all the missing conceptual slots in a graph. The utility of such a process is twofold. First, CM failures to specify a missing concept can serve as a source of requests for more information (or of goals to seek out that information by CM actions, if the CM is controlling a robot). Second, by predictively completing the graph through application of general pattern knowledge of the modeled world, novel relations among specific concepts and tokens will arise, and these can lead to potentially significant discoveries by other inferences.

To illustrate, a very common missing slot is the instrumental case. We generally leave it to the imaginative powers of the hearer to surmise the probable instrumental action by which some action occurred. That she may have been premature in the specification (and had later to undo it) is of secondary importance to the phenomenon that she did [specify it]
so spontaneously. (In the CM, specification inferences, like all inferences, are implemented in the form of structured programs which realize discrimination nets whose terminal nodes are concepts and tokens, rather than inferences as in general inference molecules. These specification procedures are called specifier molecules ("S-molecules"), and are quite similar to inference molecules.) [Such procedures] are quick and to the point, quite flexible, and have as much "aesthetic potential" as even the most elegant declarative structures. [A] life-size procedure for this very narrow process of specifying the missing object of a PROPEL action would obviously require many more tests for related contexts ("John was racing down the hill on his bike. He hit Bill."). But independent of the fidelity with which any given S-molecule executes its task, there is a very important claim buried both here and in the other inferential procedures in the CM. It is that there are cer[...]

If an action is perceived, its probable resulting states should be inferred (RESULTATIVE). If a state is perceived, the general nature of its probable causing action (or a specific action, if possible) should be inferred (CAUSATIVE).
**Mary hit Pete with a rock. [...]

[Causal expansion failures can serve as] queries for more information. Causal expansion successes, on the other hand, result in important intervening actions and states which draw out ("touch") surrounding context and serve as the basis for inferences in other categories. Appendix A contains the computer printout from MEMORY, tracing a causal expansion for "Mary kissed John because he hit Bill" in a particular context which makes the explanation plausible.

CLASS 4: MOTIVATIONAL INFERENCES
The desires (intentions) of an actor can frequently be inferred by analyzing the states (RESULTATIVE inferences) which result from an action he executes. These WANT-STATE patterns are essential to understanding and should be made in abundance.
**John pointed out to Mary that she hadn't done her chores. [...]

[An actor's motivation often lies beyond the] immediate causality of the action. In the CM, candidates for MOTIVATIONAL inferences are the RESULTATIVE inferences the CM can produce from an action A: for each RESULTATIVE inference Ri which the CM could make from A, it conjectures that perhaps the actor of A desired Ri. Since the generation of a MOTIVATIONAL inference is dependent upon the results of another class of inference (in general, the actor could have desired things causally removed by several inferences from the immediate results of his action), the MOTIVATIONAL inference process is implemented by a special procedure, POSTSCAN, which is invoked between "passes" of the main breadth-first monitor (a sketch of this step follows below). These passes will be discussed more later. Once generated, each MOTIVATIONAL inference will generally lead backward, via CAUSATIVE inferences, into an entire causal chain which led up to the action. This chain will frequently connect in interesting ways with chains working forward from other actions.
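A minimal sketch of the POSTSCAN idea follows. The representation (an action as a list with the actor in second position) and the function name are assumptions for illustration, not the program's definitions.

;; Between monitor passes, each RESULTATIVE inference Ri drawn from an
;; action A yields the MOTIVATIONAL conjecture that A's actor wanted Ri.
(defun postscan-motivational (action resultative-inferences)
  "Return (WANT actor Ri) conjectures, one per resultative inference."
  (let ((actor (second action)))
    (mapcar (lambda (ri) (list 'want actor ri))
            resultative-inferences)))

;; E.g. (postscan-motivational
;;        '(propel john1 rock1 pete1)
;;        '((negchange pete1) (physcont rock1 pete1)))
;; => ((WANT JOHN1 (NEGCHANGE PETE1)) (WANT JOHN1 (PHYSCONT ROCK1 PETE1)))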
5.4 CLASS 5: ENABLING INFERENCES
Every action has a set of enabling conditions: conditions which must be met for the action to begin or proceed. The CM needs a rich knowledge of these conditions (states), and should infer suitable ones to surround each perceived action.
**John saw Mary yesterday. (inference) John and Mary were in the same general location sometime yesterday.
**Mary told Pete that John was at the store. (inference) Mary knew that John was at the store.

The example at the beginning of the paper contained a contradiction which could be discovered only by making a very simple enabling inference about the action of speaking (any action, for that matter), namely that the [actor be alive at the time ...]

Whenever some WANT STATE of a potential actor is known, predictions about possible actions the actor might perform to achieve the state should be attempted. These predictions will provide potent potential points of contact for subsequently perceived actions.
EXAMPLES:
**John wants some nails. (inference) John might attempt to acquire some nails.
**Mary is furious at Rita. (inference) Mary might do something to hurt Rita.
DISCUSSION:
Action prediction inferences serve the inverse role of MOTIVATIONAL inferences, in that they work forward from a known WANT STATE pattern into predictions about future actions which could produce the desired state. Just as a MOTIVATIONAL inference relies upon RESULTATIVE inferences, an ACTION PREDICTION inference relies upon CAUSATIVE inferences which can be generated from the state the potential actor desires. Because it is often impossible to anticipate the ef[fects ...] Thus it is through MOTIVATIONAL, ACTION PREDICTION and ENABLING inferences that the CM can model (predict) the problem-solving behavior of each actor. Predicted actions which match up with subsequently perceived conceptual input serve as a very real measure of the CM's success at piecing together connected discourse and stories. I suspect in addition that ACTION PREDICTION inferences will play a key role in the eventual solution of the "contextual guidance of inference" problem. Levy (L1) has some interesting beginning thoughts on this topic.

[To illustrate: a RESULTATIVE inference from John's movement toward Mary is that] he is near Mary, and a MOTIVATIONAL inference is that he wants to be near Mary. At this point an ENABLEMENT PREDICTION inference can be made to represent the general class of interactions John might have in mind. This will be of particular significance if, for instance, the CM knows already that John had something to tell her, since then the inferred action pattern would match quite well the action of verbal communication, in which the state of spatial proximity plays a key enabling role.

Control over some physical object P is usually desired by a potential actor because he is engaged in an algorithm in which P plays a role. The CM should attempt to infer a probable action from its knowledge of P's normal function.
EXAMPLES:
**Mary wants the book. She cursed the man in front of her. [...]
**Mary saw that Baby Billy was running out into the street.
(inference) Mary will pick Billy off the ground (INTERVENTION). She ran after him [...]

Closely related to the other enabling inferences, these forms attempt to apply knowledge about enablement relations to infer the cause of an action's failure (in the case of MISSING ENABLEMENT), or to predict a WANT NOT-STATE which can lead, by action prediction inference, to possible actions of intervention on the part of the WANTer. In the second example above, Mary (and the CM) first must realize (via RESULTATIVE inferences) the potentially undesirable consequences of Billy's running action (i.e., possible NEGCHANGE for Billy). From this, the CM can retrace, locate the running action which could lead to such a NEGCHANGE, collect its enabling states, then conjecture that Mary might desire to annul one or more of them. Among them, for instance, would be that Billy's feet be in intermittent PHYSCONT with the ground. From the (WANT (NOT (PHYSCONT FEET GROUND))) structure, a subsequent ACTION PREDICTION inference can arise, predicting that Mary might put an end to (PHYSCONT FEET GROUND). This will in turn require her to be located near Billy, and that prediction will match the RESULTATIVE inference made from her directed running (the next utterance input), knitting the two thoughts together.

Modeling the knowledge of potential actors is fundamentally difficult. Yet it is essential, since most intention/prediction-related inferences must be based in part on guesses about what knowledge each actor has available to him at various times. The CM currently models others' knowledge by "introspecting" on its own: assuming another person P has access to the same kinds of information as the CM, P might be expected [to know what the CM can infer ...]. In fact, all inferences must rely upon default assumptions about normality, since most of the CM's knowledge (and presumably a human's) exists in the form of general patterns, rather than specific relations among specific concepts and tokens. The next class of inference implements my belief that patterns, just as inferences, should be realized in the CM [procedurally ...]. A successful N-molecule assessment results in the creation of the assessed information as a permanent, explicit memory structure whose STRENGTH is the assessed compatibility. This structure is the normative inference.

One is quickly awed by his own ability to rate (usually quite accurately) commonsense conjectures such as these, and the process seems usually to be quite sensitive to features of the entities involved in the conjecture. It is my feeling that important insights can be gained via a more thorough investigation of the "normative inference" process in humans. Another role of N-molecules is mentioned in (R1) with respect to the inference-reference cycle I will describe shortly. [Figure 9] shows the substance of a prototype N-molecule for assessing dependency structures of the form (OWN P X) (person P owns object X): is P a member of a pure communal society, or is it an infant? If so, it is very unlikely that P owns X; otherwise [...]
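A sketch of that prototype N-molecule, following the two discrimination questions just quoted, might look as follows. The feature names and the numeric compatibilities returned are illustrative guesses only, not values from the program.

;; Sketch of an N-molecule for assessing (OWN P X).
(defun assess-own (p-features x-features)
  "Rough compatibility (a STRENGTH in 0..1) for the conjecture that P owns X."
  (declare (ignore x-features))                     ; a fuller version would test X too
  (cond ((member 'pure-communal-society p-features) 0.05)
        ((member 'infant p-features) 0.05)          ; an infant: very unlikely to own X
        (t 0.6)))                                   ; otherwise, plausible by default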
are transient. The CM must have the ability to make specific predictions about the expected (fuzzy) duration of an arbitrary state, so that information in the CM can be kept up to date.
**John handed Mary the orange peel. (tomorrow) Is Mary still holding the orange peel? (inference) Almost certainly not.
**Rita ate lunch a half hour ago. Is she hungry yet? (inference) Unlikely.

Time features of states relate in critical ways to the likelihood that those states will be true at some given time. The thought of a scenario wherein the CM is informed that Mary is holding an orange peel, then 50 years later uses that information in the generation of some other inference, is a bit unsettling! The CM must simply possess a low-level function whose job it is to predict normal durations of states based on the particulars of the states, and to use that information in marking as "terminated" those states whose likelihood has diminished below some threshold. My conjecture is that a human notices and updates the temporal truth of a state only when he is about to use it in some cognitive activity, and that most of the transient knowledge in our heads is out of date until we again attempt to use it in, say, some inference. Accordingly, before using any state information, the CM first filters it through the STATE-DURATION inference process to arrive at an updated estimate of the state's likelihood as a function of its known starting time (its TS feature, in CD notation).

The implementation of this process in the CM is as follows: an (NDUR S ?) structure is constructed for the state S whose duration is to be predicted, and this is passed to the NDUR specifier molecule. The NDUR S-molecule applies discrimination tests to features of the objects involved in S. Terminal nodes in the net are duration concepts (typically fuzzy ones), such as #ORDERHOUR and #ORDERYEAR. If a terminal node can be successfully reached, thus locating such a concept D, the property CHARF (characteristic time-function) is retrieved from D's property list. CHARF is a step function of STRENGTH versus the amount of time some state has been in existence (Fig. 10). From this function a STRENGTH is computed for S and becomes S's predicted likelihood. If the STRENGTH turns out to be sufficiently low, a (TF S NOW) structure is predictively generated [to make S's low likelihood explicit].
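The STATE-DURATION filter can be sketched as follows. A CHARF is modeled here as a step function of STRENGTH versus time elapsed since the state's TS; the breakpoints and names below are illustrative assumptions, not the program's.

;; Sketch of the STATE-DURATION filter.
(defparameter *charf-orderhour*
  '((0 . 0.95) (3600 . 0.5) (86400 . 0.05))
  "Step function (seconds-elapsed . strength) for states whose normal
duration is on the order of an hour, e.g. holding an orange peel.")

(defun state-strength (charf elapsed-seconds)
  "Predicted likelihood that a state still holds after ELAPSED-SECONDS."
  (let ((strength (cdr (first charf))))
    (dolist (step charf strength)
      (when (<= (car step) elapsed-seconds)
        (setf strength (cdr step))))))

;; (state-strength *charf-orderhour* 7200) => 0.5: after two hours the
;; orange-peel state is already doubtful, and a day later nearly dead.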
Many inferences can be based solely on commonly observed or learned associations, rather than upon "logical" relations such as causation, motivation, and so forth. In a rough way, we can compare these inferences to the phenomenon of visual imagery which constructs a "picture" of a thought's surrounding environment. Such inferences should be made in abundance.

Based on the way a thought is communicated (especially the often telling presence or absence of information), inferences can be made about the speaker's reasons for speaking.
EXAMPLES:
**Don't eat green gronks. (inference) Other kinds of gronks are probably edible.
**Mary threw out the rotten part of the fig. (inference) She threw it out because it was rotten.
**John was unable to get an aspirin. (inference) John wanted to get an aspirin.
**Rita liked the chair, but it was green. (inference) The chair's color is a negative feature to Rita (or the speaker).

I have included this class only to represent the largely unexplored domain of inferences drawn from the way a thought is phrased. The CM will eventually need an explicit model of conversation, and this model will incorporate inferences from this class. Typical of such inferences are those which translate the inclusion of referentially superfluous features of an object into an implied causality relation (the fig example), those which infer desire from failure (the aspirin example), those which infer features of an ordinary X from features of special kinds of X (the gronk example), and so forth.
abstract: Any theory of language must also be a theory of inference and memory. It does not appear to be possible to "understand" even the simplest of utterances in a contextually meaningful way in a system in which language fails to interact with a language-free memory and belief system, or in a system which lacks a spontaneous inference reflex. People apply a tremendous amount of cognitive effort to understanding the meaning content of language in context. Most of this effort is of the form of spontaneous conceptual inferences which occur in a language-independent meaning environment. I have developed a theory of how humans process the meaning content of utterances in context. The theory is called Conceptual Memory, and has been implemented by a computer program which is designed to accept as input analyzed Conceptual Dependency (Schank et al.) meaning graphs, to generate many conceptual inferences as automatic responses, then to identify points of contact among those inferences in "inference space". Points of contact establish new pathways through existing memory structures, and hence "knit" each utterance in with its surrounding context. Sixteen classes of conceptual inference have been identified and implemented, at least at the prototype level. These classes appear to be essential to all higher-level language comprehension processes. Among them are causative/resultative inferences (those which [...]). Interactions of conceptual inference with the language processes of (1) word sense promotion in context, and (2) identification of referents to memory tokens are discussed. A theoretically important inference-reference "relaxation cycle" is identified, and its solution discussed. The theory provides the basis of a computationally effective model of language comprehension at a deep conceptual level, and should therefore be of interest to computational linguists, psychologists and computer scientists alike.

1. The Need for a Theory of Conceptual Memory and Inference

Research in natural language over the past twenty years has been focussed primarily on processes relating to the analysis of individual sentences (parsing). Most of the early work was devoted to syntax.
Recently, however, there has been a considerable thrust in the areas of semantic, and importantly, conceptual analysis (see (S2), (M1), (S1) and (C1) for example). Whereas a syntactic analysis elucidates a sentence's surface syntactic structure, typically by producing some type of phrase-structure parse tree, conceptual analysis elucidates a sentence's meaning (the "picture" it produces), typically via production of an interconnected network of concepts which specifies the interrelationships among the concepts referenced by the words of the sentence. On the one hand, syntactic sentence analysis can more often than not be performed "locally", that is, on single sentences, disregarding any sort of global context; and it is reasonably clear that syntax has generally very little to do with the meaning of the thoughts it expresses. Hence, although syntax is an important link in the understanding chain, it is little more than an abstract system of encoding which does not for the most part relate in any meaningful way to the information it encodes. On the other hand, conceptual sentence analysis, by its very definition, is forced into the realm of general world knowledge; a conceptual analyzer's "syntax" is the set of rules which can produce the range of all "reasonable" events that might occur in the real world. Hence, in order to parse conceptually, the conceptual analyzer must interact with a repository of world knowledge and world knowledge handlers (inferential processes). This need for such an analyzer-accessible world knowledge repository has provided part of the motivation for the development of the following theory of conceptual inference and memory. However, the production of a conceptual network from an isolated sentence is only the first step in the understanding process. After this first step, the real question is: what happens to this conceptual network after it has been produced by the analyzer? That is, if we regard the conceptual analyzer as a specialized component of a larger memory, then the allocation of memory resources in reaction to each sentence follows the pattern: (phase 1) get the sentence into a form which is understandable, then (phase 2) understand it! It is a desire to characterize phase 2 which has served as the primary motivation for developing this theory of memory and inference. In this sense, the theory is intended to be a charting-out of the kinds of processes which must surely occur each time a sentence's conceptual network enters the system. Although it is not intended to be an adequate or verifiable model of how these processes might actually occur in humans, the theory described in this paper has nevertheless been implemented as a computer model under PDP-10 Stanford 1.6 LISP. While the implementation follows as best it can an intuitively correct approach to the various processes described, the main intent of the underlying theory is to propose a set of memory processes which, taken together, could behave in a manner similar to the way a human behaves when he "understands language".

Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
602
0.003322
null
null
null
null
null
null
null
null
bc3ec542651a8246e1f8ee4052dd137c744ba128
219303034
null
String Transformations in the {R}equest System
The REQUEST System is an experimental natural language query system based on a large transformational grammar of English. In the original implementation of the system, the process of computing the underlying structures of input queries involved a sequence of three steps: (1) preprocessing (including dictionary lookup), (2) surface phrase structure parsing, and (3) transformational parsing. This scheme has since been modified to permit transformational operations not only on the full trees available after completion of surface parsing, but also on the strings of lexical trees which are the output of the preprocessing phase. Transformational rules of this latter type, which are invoked prior to surface parsing, are known as string transformations.
{ "name": [ "Plath, Warren J." ], "affiliation": [ null ] }
null
null
null
1974-12-01
2
0
null
null
null
null
null
Since they must be defined in the absence of such structural markers as the location of clause boundaries, string transformations are necessarily relatively local in scope. Despite this inherent limitation, they have so far proved to be an extremely useful and surprisingly versatile addition to the REQUEST System. Applications to date have included homograph resolution, analysis of classifier constructions, idiom handling, and the suppression of large numbers of unwanted surface parses. While by no means a panacea for transformational parsing, the use of string transformations in REQUEST has permitted relatively rapid and painless extension of the English subset in a number of important areas without corresponding adverse impact on the size of the lexicon, the complexity of the surface grammar, and the number of surface parses produced. [6] The version currently being used in REQUEST is the result of significant revisions and extensions by M. Pivovonsky, who (with the ai[d of ...]

[... a noun phrase consists of] either a NOUN, or a NOUN and an S1 (the relative clause construction). Each NOUN dominates an INDEX node which is specified as a constant [...]

(1) To what companies did XYZ sell oil?
(2) a. What was the city which ABC's headquarters was located in in 1969?
    b. What was the city in which ABC's headquarters was located in 1969?

Starting in late 1971, tests began on an inverse transformational grammar whose generative counterpart had been developed with the aid of [...]

Thus, the structural pattern in Figure 2 indicates that the rule CSYCLSFR requires that the preprocessed string be partitionable into the following six-segment sequence: (1) an arbitrary initial segment (possibly null), designated (X . 1); (2) an occurrence of the definite article THE; and [...] ORZ, which range over sets of (feature value, feature name) pairs. [...]

[Figure: fragments of the rule's pattern and condition, including (INDEX . 4), (CITY . 3), (STATE . 3) and the condition (EQUAL ORX (QUOTE (+ (NODENAMEOF 3)))).]

(CSBLOCK BLOCK OB ONE)
Structural Pattern: [...] followed by an optional comma, followed by a state name (INDEX (+ CONST *** + STATE)), where the actual city name is a single tree (W . 2) and the [...]. Such a situation would always arise in processing such inputs as "the City of New York", effectively resolving the ambiguity of the [...]. Excessive strength, in the sense of marking some stri[ngs ...]

(ORDFORM STRING OB ALL)
Structural Pattern: ((X . 1) ((VADJ . 2) (+ CARD)) (ORD . 3) (X . 4))
Condition: NIL
Structural Change: ((DELETE 3))
Feature Change: ((DELETE 2 (CARD)) (INSERT 2 ((+ ORD))))

Figure 5: The String Transformation "Ordinal Formation"

[ORD entries are introduced during] the lexical lookup phase. ORDFORM simply finds each instance in the preprocessed string where a (VADJ (+ CARD)) immediately precedes an ORD, deletes the ORD tree, and changes the feature on the VADJ from (+ CARD) to (+ ORD), thereby identifying that item as an ordinal numeral rather than a cardinal.
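The effect of ORDFORM can be sketched over a flattened token string. Each lexical tree is simplified here to a list headed by its category symbol; the real rule operates on full trees, so this Common Lisp fragment is illustrative only.

;; Sketch of ORDFORM: delete each ORD that immediately follows a cardinal
;; VADJ and remark that VADJ as ordinal, mirroring Figure 5's structural
;; and feature changes.
(defun ordform (tokens)
  (cond ((null tokens) nil)
        ((and (rest tokens)
              (equal (first tokens) '(vadj + card))
              (eq (first (second tokens)) 'ord))
         (cons '(vadj + ord) (ordform (cddr tokens))))
        (t (cons (first tokens) (ordform (rest tokens))))))

;; Example, roughly "the 3 rd largest ...":
;; (ordform '((the) (vadj + card) (ord) (vadj + supl) (noun)))
;; => ((THE) (VADJ + ORD) (VADJ + SUPL) (NOUN))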
[... and (7) a] final arbitrary string of trees (X . 7). The structural change includes a replacement and two deletions. In the case of (12c), the overall effect of this structural change is [...] semantic class [...]

The pattern elements following the Kleene star expression specify that it must be followed by: (i) another instance of a proper noun of the appropriate class (this will be the initial instance if the null value of the Kleene star expression is the only one that matches); (ii) an optional comma; (iii) an instance of either of the coordinating conjunctions "and" or "or" (represented internally as ORR, since OR is already used to signal the presence of a disjunctive pattern element to the rule-processing routine); (iv) the final instance of a semantically compatible proper noun; and (v) the usual end variable.

[Figure 7: The rule "City, State, Year, Company Conjunction", whose pattern matches sequences of proper nouns bearing any one of the features + CITY, + STATE, + YEAR, or + CO, and whose feature change marks the resulting index + ANDSET or + ORSET according to the conjunction found, inserting (- SG) on the covering NOUN.]

The structural change specifies (1) that the terminal elements of all but the rightmost conjunct (which are collectively [...]) whereupon the interpretive component produced what appeared to be an appropriate answer: in the case of (14), an earnings table with 18 entries [...]

[... quantities like] earnings, auto production, and rainfall, which are inherently additive and are measured on a cumulative basis, and quantities like employment, assets and temperature, which are measured on an instantaneous basis. [...]

[...]tions suggests that there may be some value in viewing the facility in terms of such relationships. However, the rule writer is entirely free to ignore linguistic considerations of this sort and define any of a wide range of tree manipulations as string transformations. Accordingly, the string transformation facility can, with some justification, simply be viewed as a con[venient ...] (list of trees 2) ... (list of trees n)), where the arguments of the OR [...]

[... returning] the conjoined names of all companies satisfying that description.
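To make the effect concrete, here is a rough before/after picture of what a conjunction rule of this kind does to the preprocessed string. The flattened token format, the third company name, and the exact output shape are assumptions for illustration; only the + ANDSET and (- SG) markings come from the rule text.

;; Hypothetical flattened view of the preprocessed string for
;; "... ABC, XYZ and QRS ..." (ABC and XYZ are company names from the
;; paper's own examples; QRS is invented):
'((index (+ co) abc) (comma) (index (+ co) xyz) (and) (index (+ co) qrs))
;; After the conjunction transformation: a single plural NOUN whose INDEX
;; is marked + ANDSET and which collects the three conjoined constants.
'((noun (- sg) (index (+ andset) (abc xyz qrs))))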
-----------) ) 1 ( X 7 ) ( NOT ( A N A L Y S I S 1 T ( QUOTE ( ( ( X I ) ( ( I N D E X ( O R X ) ) ) ( ( C O M M A ) )1( 1 2 3 4 5 6 7 )( 1 0 0 0 0 ( 2 6 ) 7 ) ( (COND ( 4( ( I N S E R T 9 ( ( + A N D S E T ) ) I ( I N S E T 8 ( ( 7 SG))) 1 1 ( 5( 19 ( ( + O~S E T ) ) ) 1 -----.l -l -. I I I C e . I L U L . ---. C C I . L I I C I I I I I I I I( (GENAFCNJ S T R I N G 08 A L L ) ( ( X . 1 ) ( * ( ( & N O E X ( O R X ( + C I T Y + S T A T E + YEAR + C O ) ) ) ( W . 2 ) ) (GENRF 3) (COMMA 4 ) ) ((INDEX ( O R X ( + CITY + STATE + YEAR + C O ) ) ) ( W . 2 ) ) ( G E N A F . 3 ) ( O R ( ( C O M M A 4 ) ) N I L ) ( O R ( ( A N D 5 ) ) ( ( O R R 6 , ) ) ) ( ( ( N O U N 10) ( + SG)) ( ( ( I N D E X 11) f O R X ) I ( W r 7 ) ) (GENAF 8 ) ( X . 9 ) (NOT ( A N A L Y S I S 1 T ( QUOf E ( ( ( X I ) ( ( I N~E X ( O R X ) ) ) ((GENAF) 1 ( ( C O M M A ) ) 1 1 ( 1 . 2 3 4 5 6 7 8 u ) ( 1 0 0 0 0 0 ( 2 ' 7 ) 8 9 ) I ( C O N D ( 5 ( (fNSERT 11 ( ( + A N D S E T ) ) ) ( I N S E R T U O ( ( 0 SG))) ( 6 ( I N S E R T I 1 ( ( + O R S E T ) ) ) 1 ) 1 ) r rc -------. . . . --------T -d . I -' ---. . ----I ( ( P P C O N J STRING 08 A L L ) ( t x . 1 ) ( * ( O R ( ( P R E P ( W rn 2 ) ) ) ( ( P R E P O F ( W 2))) 1 ( ( I N D E X ( O R X ( + C I T Y + STATE + YEAR + C O ) ) ) ( W 3 ) ) (COMMA 1 4) ) ( OR ( ( P R E P ( W 2 ) ) ) ( ( P R E P O F ( W . 2 ) ) ) ( ( I N D E X ( O R X ( + C I T Y + STATE + YFAP + C O ) ) ) ( W e 3)) ( O R ( ( C Q M N X 4 ) ) N I L ) ( O R ( ( A N D 5 ) ) { ( O R R 6 ) ) ) ( OR ( ( P R E P ( W m 7 ) ) ) ((PREPOF ( W rn 7 ) ) ) 1 (((NOUN 10) ( + S G ) ) ((fINDEX . 1 3 ) I O R X ) ) ( W 8)) 1 ( X . 9 ) 1 (AND ( NOT (.ANALY'S I s 1 f ( QUOTE ( ! ( X I 1 ( O R ( ' ( ( P B E P ) ) ) ( ( t P R € . P O F ) ) ) 1 ( ( I N D E X ( O R X ) ) ) ( ( C O M M A 1 1 1 ) 1 ) ( C O M P A R E L I S T I T E M 2 7 ) ( 1 2 3 4 5 6 7 8 9 ) ( 1 0 0 , O 0 0 7 ( 3 8 ) 91 ( ( C O N 0 ( 5 ( ( I N S E R T I1 ( ( + A N D S E T I ) ) ( I N S E R T 10 ( 1 -SG))L*) ( 6 ( I N S E R T 11 ( ( + O R S E T ) \ ) 1 ) ) ) ( { R T I M E D S T S T R I N G 00 ALL) ( ( X * 1 ) ((PROPNOM . 2 ) (NOUN ( I N D E X ( + Y E A R ) ) ) ) (NMNL ( + P E R I O D I C ) )T ( QUOTE ( (OR ( ( ( P R E P ) ) ( ( I N D E X ( ( + Y E A R I ) ) ) ( ( X ) ) ( O R ( ( ( P R E P ) ) ) ( ( ( P R E P O F ) ) ) 1 ( ( I N D EX 4 ( + C O ) 1.1 1 ( ( P R E P ) 1 ( ( I N D E X ( ( + Y E A R ) ) ) ) ( ( X I ) 1 ) 1 ) 1 ) (CON0 ( 5 3 1 ( T T ) ) ) ( (REPLACE ( 2 4) 4) 1 N I L 1 --------------.--------.-( ( L T I M E D S T S T R~N G 00 A L L ) ( ( X rn 1 ) ( O R ( ( O R ( ( I N D E X ( + C O ) ) GENAF ((NMNL 2 ) ( + P E R I O D I C ) ) 1 ( ( O R ( T H E ) N I L ) ( ( N h N L . 2 ) ( + PERIODIC)) ( O R ( ( OR ( P R E P ) ( P R E P O F ) ) ( ( N P R O P N C I M . 4 ) (NOUN ( I N D E X ( + CO)))) 1 NIL: I ) 1 , 9 ( ( O R ( ( I N D E X ( + C O ) ' ) GENAF ((NMNL . 3) ( + P E R f O D I C ) ) 1 ( (OR ( T H E ) N I L ) ((NMNL rn 3 ) ( + P E R I O D I C ) ) ( O R ( ( O R (PREP) ( P R E P O F ) ( I N D E X ( + C O ) ) 1 N I L I 1 1 ( * COMMA (OR ( ( I N D E X ( + C O ) ) GENAF (NMNL ( + P E R I O D I C ) ) ) -( O R ( T H E ) N I L ) (NMNL ( + P E R I O D I C ) ) ( OR ( (OR ( P R E P ) ( P R E P O F ) ) ( I N D E X ( + C O ) ) 1 N I L 1 1 'COMMA ( O R ( ( T N D E X ( + CO)) GENAF ((NMNL 2 ) ( + PERIODIC)) ( ( O R ( T H E ) N I L ) ((NMNL . 2) ( + PERIODIC)) I OR ( ( O R ( P R E P ) ( P R E P O F ) ) f (.NPROPNOM . 4f (NOUN ( I N D E X ( + C O ) 1 ) ) N I L ) ) ) ) ) ( * COMWA ( O R ( ( I N D E X ( + C 0 ) ) GENAF ( ( N M N L rn 3) ( + P E R I O D I C ! 
) ( ( O R ( T H E ) N I L ) ((NMNL. . 3) ( + PbERIODIC) 1 ( OR ( ( O R ( P R E P ) ( P R E P O F ) ) ( I N D E X ( + a C O ) ) N I L 1 1 1 ) ( Q R ( ( C O M M A . 5 ) ) N I L ) AND ( OR ( ( I N D E X ( + C O ) ) GENAF ( N~N L ( + P E R I O D I C ) ) ( P R E P . 6 ) ((PROPNOM . 7 ) (NOUN I -I N D E X ( + Y E A R ) ) 1 ) 1 ( ( 0 .~ ITHE) N I L ) (NMNL ( + PERIODIC))T (QUOTE 1 (OR ( ( ( X I ) ( ( I N D E X ( ( + C O ) ) ) ) ( ( G E N A F ) ) ) ( ( ( X ) ) ( ( T H E ) ) ) t 1 ( X ) ) ((NMNL ( ( + PERIODIC)))) ( O R I (OR ( ( ( P R E P ) ) ) ( ( ( P R E P O F ) ) ) 1((INDEX t ( + CQI))) ) N I L ( ( C O M M A ) ) 1 ) 1 ) ) 1 (COND ( 5-3 1 ( 7 T I ) ) ( (COND ( 4 (RFPLACE ( 4 6 7 ) 4 ) ) ( T (REPLACE ( 2 6 7 ) 2 1 1 1 1 N I L ---~. .Y~-------------I L . I . L . L I .~---~~.(( LGtNDTSTS T R I N G 00 A L L ) ( X , 1 ) ( O R ( T H E ) N I L ) ( O R (OR ( ( I N D E X ( + YEAR)) ( ( N M N~ . 2) ( + P E R J O D I C ) ) 1 ( ( ( N M N L 2 ) ( + PE$IODIC)) ( O R (PREP ( I N D E X ( + Y E A R ) ) ) W -I L ) ) 1 1 ( O R ( ( I N D E X ( + Y E A R ) 1 ( ( N M N L 3) ( + P E R I O D I C ) ) 1 ( ( ( N M N L , 3) ( + P E R I O D I C ) ) (OR (PREP ( I N D E X ( + Y E A R ) ) I N I L ) ( *T t QUO1 E ( (OR ( ( ( X I ) ( ( T H E ) ) ) ( ( ( X I ) ( I N M N L ( ( + P E R I O D I C ) ) ) ) (OR ( ( + Y E A R ) ) ) ) N I L ( ( C O M M A ) ) ( ((lo)( ( I N D E X ( ( + Y E A R ) ) ) ) 1 ) 1 ) 1 ) ( C O N D ( 4 3 ) ( T T I ) ) ((REPLACE ( 2 5 6 ) 2)) N I L 1 ((CARDNOUN STRING 0 8 ALL) ( I X . 1 ) ( ( ( V A D J . 2) ( + C A R D ) ) ( W -5 ) ) ( OR ( ( A 3)) ( ( V A U X 3 ) ) ( ( C O M M A 3)) ((DAUX m 3 ) ) ( ( P R E P m 3)) ((PUNCT 3)) ( ( T H E -3 ) ) ( ( V , 3)) ( ( ( V A D J . 3) ( + C A R D ) ) ) ( ( V A D J . 3) P R E P ) ( ( V P A R T 3)) ( X 4 ) ( NOTAND ( A N A L Y S I S 3 N I L (QUQTE t ( (~) ) ) ) ) (ANALYSIS 4 T ( QUOTE ( ( ( I N D E X ( ( -CONST) 1 ) ( ( X ) ) 1 ) 1 ) 1 ( (REPLACE ( t ( N P 1 ( ( N O M I ((NOUN ( ( + S G ) ( -H U M A N ) ) ) ( ( I N D E X ( ( + CONSTI ( + C A R D ) ) ) 5 ) 1 1 1 2 ) ) NIL' 1 ( ( A B T A P P R X S T R I N G 06 ALL) ( ( X " 1 ) ( ( P R E P 2) ABOUT) ( O R I(IVAD3 e 3) ( + CARD))) ( (EQUAL 3) ) ( ( W H . 4 ) SOMF ( O R . ( L A R G E ) ( M A N Y ) (MUCH)) ( X * 5 ) 1 ( C O N 0 ( 4 (NULL 1) 1 T 7 -1 1 (-(R'EPLACE (. ( ( ' A D v ) ( ( V ( ( + A D J ) ) 1 ( ( A P P R O X I ) 1 2 1 ) N I L 1 ---....--; L ---c -. . I c I --------( ( C O M P N Q F R S T R I N G OB ALL) ( ( X . 1 )( ( ( T H E ) ) ( ( X I) ) 1 ) 1 (COND ( (NULL 4 ) 8 ) T T I ) ) ( (COND ( 4 f C O N D ( 7 (REPLACE ( ((WHADJ) ( ( A D V ( ( + E X T I ) ) ( ( V ( ( + A D 3 1 + Q U A N T ) ) ) ( ( W H ) ( ( S O M E ) ) 1 1 ( ( V ( ( + ADJ) ( + QUANT) ( + P O L ) ) ) ( I Y A N Y ) ) 1 1 3 ) 1 ( 8 ( R E P L A C E ( ( { W H A D J ) ( ( A D V ( ( + E X T I ) ) ( ( V I ( + A D 3 1 f + QUANTJ)) I (WH) ( ( S O M E ) ) 1 1 ( ( V ( ( + A 0 3 1 + QUANf) ( + P O L ) ) ) ( ( M U C H ) ) 1 1 3 ) ) ) ) ( T (REPLACE ( ( ( O N O M ) ( / A D V ( ( + E X T I ) ) ( ( V( ( + A O J ) ( + Q U A N T ) ) ) ( ( W H ) ( ( S O M E ) ) 1 f (NOM) ( ( N O U N ( ( 0 HUMAN) ( + SG)]) ( ( V ( ( + ADJ) ( + QUANT) ( + P O L ) ) ) ( ( M U C H ) ) 9 1 1 1 1 3 ) 1 ) (CONO ( (AND 4 (NOT 5 ) ) (DELETE 4 ) (DELETE 2) ) N I L 1-----rr.lrC--rrrr-lrII--.--rrcrrrrrrrrrr--.---*-rr--( ( S P R P P R E V S T R I N G 00 A L L ) ( ( X . l l ( O R (((PREP 2 ) ( W rn 5 ) ) ) ( ( ( P R E P O F 3) ( W rn 5 ) ) ) ) ( X . 4 ) 1 (NOT ( A N A L Y S I S 4 NXL ( QUOTE t ( O R ( ( ( B A U X ) ) ) ( ( (COMMA) 1 ) ( ( ( O A U X ) ) )
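The index-list notation above can be read as a mapping from matched segments to output positions: the first list numbers the segments the pattern has carved out of the string, and the second gives, position by position, a 0 (delete the segment), a bare number (keep it), or a parenthesized list (splice those segments together at that position). The following sketch is illustrative only, not the REQUEST implementation:

def apply_structural_change(segments, change):
    # segments: the pattern match, as a list of token lists, numbered from 1.
    # change: e.g. [1, 0, 0, 0, 0, [2, 6], 7].
    out = []
    for spec in change:
        if spec == 0:                 # deletion: this segment disappears
            continue
        if isinstance(spec, list):    # adjunction: concatenate the named segments
            merged = []
            for i in spec:
                merged.extend(segments[i - 1])
            out.append(merged)
        else:                         # identity: keep the segment as-is
            out.append(segments[spec - 1])
    return out

# Toy run on conjoined proper nouns, "... Boston , and New York ...";
# segment 5 (the unmatched "or" alternative) is empty in this match.
segs = [["earlier"], ["Boston"], [","], ["and"], [], ["New", "York"], ["later"]]
print(apply_structural_change(segs, [1, 0, 0, 0, 0, [2, 6], 7]))
# [['earlier'], ['Boston', 'New', 'York'], ['later']]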
Main paper: 2. early experience with the parser: Starting in late 1971, tests began on an inverse transformational grammar whose generative counterpart had been developed with the aid of [...]. Thus, the structural pattern in Figure 2 indicates that the rule CSTYCLSFR requires that the preprocessed string be partitionable into the following six-segment sequence: (1) an arbitrary initial segment (possibly null) designated (X . 1), (2) an occurrence of the definite article THE, [...]. [The disjunction operators] ORX and ORZ range over sets of (feature value, feature name) pairs. [...] (CSBLOCK BLOCK OB ONE): its structural pattern [matches a city name] followed by an optional comma, followed by a state name (INDEX (+ CONST *** + STATE)), where the actual city name is a single tree (W . 2) and the [state name a single tree (W . 4)]. Such a situation would always arise in processing such inputs as "the City of New York", effectively resolving the ambiguity of the [phrase]. Excessive strength, in the sense of marking some stra[...].

Figure 5: The string transformation "Ordinal Formation". (ORDFORM STRING OB ALL). Structural pattern: ((X . 1) ((VADJ . 2) (+ CARD)) (ORD . 3) (X . 4)). Condition: NIL. Structural change: ((DELETE 3)). Feature change: ((DELETE 2 (CARD)) (INSERT 2 ((+ ORD)))). [ORDFORM applies after] the lexical lookup phase. ORDFORM simply finds each instance in the preprocessed string where a (VADJ (+ CARD)) immediately precedes an ORD, deletes the ORD tree, and changes the feature on the VADJ from (+ CARD) to (+ ORD), thereby identifying that item as an ordinal numeral rather than a cardinal. [A companion treatment covers forms like] RANK [followed by a numeral] 1 THROUGH 8, [likewise marked] (+ ORD).

[The discussion of the rule "City, State, Year, Company Conjunction" (Figure 7) recurs here.] The pattern elements following the Kleene star expression specify that the initial proper noun must be followed by: another instance of a proper noun of the appropriate class, an optional comma, an instance of either of the coordinating conjunctions "and" or "or" (internally ORR), the final instance of a semantically compatible proper noun, and the usual end variable (X . 7). The structural change, (1 2 3 4 5 6 7) onto (1 0 0 0 0 (2 6) 7), specifies that the terminal elements of all but the rightmost conjunct are [adjoined to the rightmost one]; the feature change inserts (+ ANDSET) or (+ ORSET) on the resulting index, together with (- SG) on the noun, according to which conjunction was matched. [When the rules had applied,] the interpretive component produced what appeared to be an appropriate answer: in the case of (14), an earnings table with 18 entries. [A distinction is drawn between] quantities like earnings, auto production, and rainfall, which are inherently additive and are measured on a cumulative basis, and quantities like employment, assets, and temperature, which are measured on an instantaneous basis. This range of applications suggests that there may be some value in viewing the facility in terms of such relationships. However, the rule writer is entirely free to ignore linguistic considerations of this sort and define any of a wide range of tree manipulations as string transformations. Accordingly, the string transformation facility can, with some justification, simply be viewed as a convenient computational device.

[A further block of rule listings follows in the source, duplicating the appendix material and too badly garbled to reconstruct; the recoverable rule names are CITYSTAT, GENAFCNJ, PPCONJ, RTIMEDST, LTIMEDST, LGENDTST, CARDNOUN, ABTAPPRX, COMPNQFR, and SPRPPREV.]

Since they must be defined in the absence of such structural markers as the location of clause boundaries, string transformations are necessarily relatively local in scope. Despite this inherent limitation, they have so far proved to be an extremely useful and surprisingly versatile addition to the REQUEST System. Applications to date have included homograph resolution, analysis of classifier constructions, idiom handling, and the suppression of large numbers of unwanted surface parses.
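A sketch of the Ordinal Formation pass of Figure 5, assuming tokens are (category, feature set, word) triples; the representation is illustrative, not the REQUEST data structure:

def ordform(tokens):
    out = []
    i = 0
    while i < len(tokens):
        cat, feats, word = tokens[i]
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if cat == "VADJ" and "+CARD" in feats and nxt and nxt[0] == "ORD":
            feats = (feats - {"+CARD"}) | {"+ORD"}   # the feature change
            i += 1                                   # delete the ORD tree
        out.append((cat, feats, word))
        i += 1
    return out

print(ordform([("X", set(), "the"), ("VADJ", {"+CARD"}, "3"), ("ORD", set(), "rd")]))
# [('X', set(), 'the'), ('VADJ', {'+ORD'}, '3')]  -> "3rd" is now an ordinal numeral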
While by no means a panacea for transformational parsing, the use of string transformations in REQUEST has permitted relatively rapid and painless extension of the English subset in a number of important areas without corresponding adverse impact on the size of the lexicon, the complexity of the surface grammar, and the number of surface parses produced. The version currently being used in REQUEST is the result of significant revisions and extensions by M. Pivovonsky, who (with the aid [...]) [...]. [Each noun phrase dominates] either a NOUN, or a NOUN and an S1 (the relative clause construction). Each NOUN dominates an INDEX node which is specified as a constant [...]. To what companies did XYZ sell oil? (2) a. What was the city which ABC's headquarters was located in in 1969? b. What was the city in which ABC's headquarters was located in 1969? Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
602
0
null
null
null
null
null
null
null
null
47c464c7d589680bdd4cd0cdf6a3ab28b15b8cbe
219307922
null
Verbalization and Translation by Machine
Also active during more than half of this period were Masayoshi Shibatani and Linda Oobek. Associated during shorter periods of time were
{ "name": [ "Chafe, Wallace L." ], "affiliation": [ null ] }
null
null
null
1974-12-01
0
0
null
null
null
null
gram[mar] of the target language [...]. Verbalization and translation: In an earlier report [we proposed] that there are two dimensions of high quality translation, which we termed naturalness and fidelity. Naturalness is achieved when the target language verbalization adheres to all the constraints of that language; the output will then sound "natural". We are led, then, to the general picture of translation which is shown in Figure 1. The two vertical columns represent the two verbalizations which are involved: on the left the source language verbalization and on the right the target verbalization. The other major component of the translation procedure is the translation component. It is equivalent to a verbalization in the target language. The processes which make up this verbalization appear, to the extent that they are algorithmic, to be those which express target language constraints and, to the extent that they are creative, [to mirror] the processes that went into a particular verbalization. The translation component is a verbalization, though one of a special sort, and there again a detailed understanding of verbalization processes is necessary. This report, then, will be most concerned with the nature of verbalization. We will also devote considerable space to the nature of that special sort of verbalization which is translation. We will have the least to say about parsing. Examples will be cited from English and Japanese.

For about the last nine months of the project we were concerned with the development of an interactive computer program that would implement the verbalization processes we hypothesized. Although the program remained primitive, the intention was that it would gradually achieve increased sophistication in its ability to simulate verbalization, translation, and parsing. As it presently simulates the processes of verbalization, it begins with an item that represents the initial holistic idea which the speaker or writer of a text wishes to communicate. It then asks the user, seated at a teletype, to make the series of creative choices that are necessary in the production of the final text. As it simulates translation it should likewise be able to apply the algorithmic processes of the target language automatically, and also to apply certain creative processes on its own by looking at the source language verbalization to see what creative choices were made there. Whenever it is not able to make a creative choice, the program asks the user to do so. We find that this kind of machine-user interaction provides a valuable research technique.
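A minimal sketch of the division of labor just described: algorithmic processes run automatically, while creative choices are referred to the user. The callbacks and chunk names are illustrative, not the project's actual program (which the report calls VAT below):

def verbalize(initial_chunk, ask_user, syntactic_processes):
    # ask_user(chunk) returns smaller chunks chosen creatively by the user,
    # or [] when the chunk can be expressed directly.
    agenda = [initial_chunk]
    emitted = []
    while agenda:
        chunk = agenda.pop(0)
        parts = ask_user(chunk)            # creative choice: subconceptualize
        if not parts:
            emitted.append(chunk)          # directly expressible
            continue
        for process in syntactic_processes:
            parts = process(parts)         # algorithmic, applied automatically
        agenda = parts + agenda
    return emitted

choices = {"CC-1001": ["CC-1002", "CC-1003"]}
order_reason_first = lambda parts: list(reversed(parts))
print(verbalize("CC-1001", lambda c: choices.get(c, []), [order_reason_first]))
# ['CC-1003', 'CC-1002']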
Taking as our ultimate goal the eventual elimination of the user from the translation program altogether, we start with a situation in which the user intervenes at many points. As we learn more we can gradually give the machine more to do and the user less. This technique can be followed not only in verbalization, but also in parsing. Whether the user will eventually disappear from the picture altogether is uncertain. However that may be, the goal of a program in which the contribution of the user is significantly diminished in relation to that of the machine seems workable. Short of the final goal of eliminating the user altogether, an intermediate goal identifiable as "human-aided" machine translation can more easily be foreseen. Here the machine will do the many things for which it is suited, but a human brain will be introduced at those points where the machine has reached its limits. This intermediate goal has, we believe, significant practical as well as theoretical value.

[We label] each chunk, as well as the smaller chunks into which it will be analyzed, with the prefix CC (for "conceptual chunk") followed by a four-[digit number]. It is useful to think of the content of each chunk, each circle in Figure 2, as if it were a mountainous landscape, with the most salient aspects standing out in bold relief and the less salient appearing as only minor hills. All other things being equal, the more salient some aspect of the total content is, the more likely the speaker is to express it when he subconceptualizes. He is not likely to make exactly the same subconceptual breakdown each time he communicates the same initial chunk, partly because he may judge different things to be salient in different contexts. We use a different notation to represent each of the various stages in the verbalization process. At the outset, in this example, the initial chunk CC-1001 was all that was present. This initial representation, before any verbalization processes had been applied, was simply: 2) CC-1001. After the subconceptualization specified in 1) was applied, [...]. Subconceptualization processes are thus rewrite rules, which replace one stage in a verbalization with a subsequent stage. VAT will now apply an algorithmic or, as we say, "syntactic" process triggered by the presence of CJ-REASON in 3). [Given the intended text "... yesterday",] we want the two sentences to be expressed with CC-1003 preceding CC-1002. Thus VAT will automatically change the representation in 3) to the following: [...]. This kind of representation, in which no predicate is shown above the two CCs, indicates that they (or their eventual verbalizations) are to occur in the final text in the order shown, with CC-1003 preceding CC-1002. In Japanese the corresponding syntactic process will typically lead to the attachment of CJ-"KARA" at the end of the second sentence.
Thus if a representation like that in 3) were produced in a Japanese verbalization VAT would automatically change it to: [...]. The quotation marks around ["KARA"] indicate that this is an item which will actually appear as a word in the text. Subconceptualization proceeds interactively in the following fashion:

12) V: WHAT VAT TASK DO YOU WANT PERFORMED? U: VERBALIZE CC-1001 (VAT creates the following representation:) V: HOW IS CC-1001 SUBCONCEPTUALIZED? (VAT creates first the following representation:) CJ-REASON CC-1002 CC-1003 (and immediately applies a stored syntactic algorithm that changes it to:) [...] V: HOW IS CC-1003 SUBCONCEPTUALIZED? etc.

In this fashion a subconceptual hierarchy of any degree of complexity can be constructed and expressed. The organization of a text may not be entirely hierarchical, however. Not only does a speaker break down larger chunks into smaller chunks, larger "concepts" into subconcepts; one chunk may also remind him of another, so that the organization which results may be in part concatenative. We have been viewing concatenation in terms of excursions away from the main hierarchy, and have been calling such excursions digressions. In some discourse, however, there is no necessary constraint that the main hierarchy be returned to, and the result may be a rambling text in which digression is added to digression. In a more tightly organized text digressions are more likely to appear as parenthetical remarks: brief sidepaths which quickly return to the main hierarchy. [VAT] asks, for the initial CC, whether it has an initial summary (one expressed at the beginning of the text). If the answer is yes it asks first for subconceptualization of the summary, and moves on to ask about the body of the text only after the summary has been completely verbalized. At the end of the text it asks whether there is a final summary. [Much here depends on the] genre to which the discourse belongs. It would appear that there is a continuum ranging from maximally stereotyped to maximally creative discourse. Most stereotyped are those forms of discourse, such as rituals, in which the speaker has very little choice as to what he is going to say or how he is going to say it. With such discourse the "grammar" of the genre provides many of the answers.

An example of these procedures as applied to a real text can be based on the following United Press report taken, slightly condensed, from the San Francisco Chronicle of May 16, 1974: 13) 1. An 11-year-old boy using a new "super-glue" [...] In this text: V: WHAT IS THE GENRE? U: [...]. VAT will now assume that the text is a typical news report which begins with a summary. That is, the two CCs are to be expressed with the "yielder" preceding the "yielded", and they are to be connected with a comma followed by the word "AND". This is not the only way in which YIELD can be realized, but for the sake of the example we may regard it as such.
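A sketch of the two syntactic treatments of CJ-REASON described above, with a stage of the verbalization modeled as a tuple. The orderings are illustrative readings of the prose (the representations 4) through 6) are garbled in the scan):

def reason_english(rep):
    pred, cc_result, cc_reason = rep
    assert pred == "CJ-REASON"
    return [cc_reason, cc_result]      # no predicate above the two CCs;
                                       # the reason sentence simply comes first

def reason_japanese(rep):
    pred, cc_result, cc_reason = rep
    assert pred == "CJ-REASON"
    # the reason clause comes second and "KARA" is attached at its end,
    # per the prose about the second sentence
    return [cc_result, cc_reason + ' CJ-"KARA"']

rep = ("CJ-REASON", "CC-1002", "CC-1003")
print(reason_english(rep))    # ['CC-1003', 'CC-1002']
print(reason_japanese(rep))   # ['CC-1002', 'CC-1003 CJ-"KARA"']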
VAT will now proceed to ask about the subconceptualization of the earliest CC in 19):

20) V: HOW IS CC-1002 SUBCONCEPTUALIZED IN THE SUMMARY?

The user has answered that CC-1002 is broken down into two CCs, CC-1004 ("building a model airplane") and CC-1005 ("An 11-year-old boy using a new "super-glue" accidently glued his eye shut"). They are related by FRAME, a temporal relation in which the first [occupies a time period that includes the second]. [...] But since CC-1004 is not followed by one of these boundaries, attention is next focused on CC-1005:

23) V: HOW IS CC-1005 SUBCONCEPTUALIZED IN THE SUMMARY? VAT creates the following representation: 24) [...] CC-1004 [...] FRAME CC-1006 CC-1007 CJ-", AND" CC-1003 [...]

The user has said that CC-1006 ("an 11-year-old boy using a new "super-glue"") occupies a time period which includes CC-1007 ("accidently glued his eye shut"). So far we would expect this second instance of FRAME to be expressed by prefixing the word "WHILE" to CC-1006, as was done in 22). Let us suppose, however, that FRAME actually triggers a more complex algorithm which says in effect that one "WHILE" in a sentence is enough, and that a second instance of FRAME will lead to [a different realization]. When that has been done, it will say: [...] by the speaker with the notation: [...] SUBCONCEPTUALIZED? U: YIELD (CC-1002, CC-1003) [...] 28) U: UC-"GIVE". Such a statement is to be read "CC-1053 is categorized as an instance of the category UC-"GIVE"". It should be noted that the English word "GIVE" is not the name of this category; rather any particular CC which is so categorized can be communicated with the word "GIVE". In other words, the decision described in 27) allows us to use "GIVE" as a name for CC-1053.

The way in which a speaker decides that a particular CC can be categorized as an instance of some UC is of course a fundamental [question; categorization seems best treated as a matter of] degree, and not as an all-or-nothing decision. If the degree to which a particular CC is an instance of some UC is very high, if the CC is highly codable, then the use of the word provided by the UC will succeed quite well in conveying the content which the speaker has in mind. If, on the other hand, the content of the CC is [not highly codable, ...]. Language usually involves taking one PI (the "topic") as a starting [point]. At present it asks first: [...]. It is now time for the following exchange:

30) V: CAN CC-1053 BE CATEGORIZED? [...]

The user says that the decision has been to categorize this CC as an instance of the category UC-"GIVE". VAT then looks into the lexicon and, on the basis of the last line in 29), replaces 32) with: [...]. Two other considerations are relevant at this point. [...] A second consideration at this point is to establish which PI is the subject or topic, the PI on which the speaker intends the addressee's attention to be focused and concerning which something will be asserted. Again the easy way out is for VAT to ask the user: 37) V: WH[AT IS THE SUBJECT?]
U: PI-1234. The question in 37) is [whether this PI has been established in an] appropriate way prior to the uttering of the present sentence (Chafe 1974). Here again we have a case where the easiest course for VAT at this preliminary stage of its development is to ask the user. Let us assume first that the answer to 40) has been yes, in which case English is likely to lexicalize PI-1234 with a pronoun. This is not always the case; sometimes a PI that is given will not be pronominalized. The principal criterion here seems to be whether pronominalization will produce ambiguity, and ultimately VAT will need to decide whether ambiguity will result. For now, however, we proceed on the assumption that a PI which is given will automatically be pronominalized. The procedure we are currently using for pronominalization in English asks first: 41) V: IS PI-1234 THE SPEAKER? [...]. Otherwise it must find the sex of this referent: 45) V: IS PI-1234 MALE OR FEMALE? and lexicalize it as EN-"HE" or EN-"SHE" accordingly. If the answer to 42) was a number greater than one, VAT must decide between "WE" and "THEY", the pronouns which are explicitly plural. Essentially it must ask: 46) V: IS THE SPEAKER A MEMBER OF PI-1234? If yes, it will produce the lexicalization EN-"WE". [...] If yes, the user gives the name and VAT lexicalizes PI-1234 as [a proper name. If the referent] is countable, [VAT will] also in this case ask about its cardinality, as in 42) above. If the answer is a number greater than one, VAT will create a representation like EN-"TEACHER" / "PLURAL" [with cardinality] greater than one. The outcome will thus be either NN-"TEACHER" [preceded by] EN-"A" [or its plural, ...], that is, "a teacher" [...].

[There will often be] contextual grounds on which VAT will be able to answer a question of [this kind. Consider] UC-"LIFT". We will want to say that when X lifts Y, it entails [...]: 51) CC-A C> UC-"LIFT" E> CC-A F> VB-"LIFT" (PI-B/AGT, PI-C/PAT) [...]. [The first line under E>] gives the case frame, saying that there will be a clause containing the verb "LIFT" accompanied by an agent (PI-B) and a patient (PI-C). [...] The second line under E> says that it is alternatively possible to subconceptualize CC-A in a certain way, which amounts to [...] having the use of something, which we will call HAVE-USE. Simple HAVE, as in 52), is meant to be nonspecific as to which of these varieties of having is involved, as may be accounted for with the following two statements: CC-A C> UC-HAVE-OWN E> CC-A C> UC-HAVE; CC-A C> UC-HAVE-USE E> CC-A C> UC-HAVE. One example of a transfer is the kind which is categorizable with UC-"GIVE", whose lexical entry can be given as follows: 54) CC-A C> UC-"GIVE" E> CC-A F> VB-"GIVE" (PI-B/AGT, ?PI-C/BEN, PI-D/PAT); CC-A C> UC-TRANSFER; PI-D = PI-B; PI-E = PI-C; PI-F = PI-D. That is, a CC which has been categorized as an instance of UC-"GIVE" has the case frame shown in the first line under E>.
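The pronominalization questions 41) through 46) amount to a small decision procedure. A sketch follows; the predicate names are stand-ins for VAT's questions to the user, and the addressee ("YOU") and proper-name paths are omitted here:

def pronominalize(pi, is_speaker, cardinality, sex, speaker_in):
    if is_speaker(pi):                   # question 41)
        return "I"
    n = cardinality(pi)                  # question 42)
    if n > 1:
        return "WE" if speaker_in(pi) else "THEY"   # question 46)
    s = sex(pi)                          # question 45) MALE OR FEMALE
    if s == "male":
        return "HE"
    if s == "female":
        return "SHE"
    return "IT"

print(pronominalize(
    "PI-1234",
    is_speaker=lambda pi: False,
    cardinality=lambda pi: 1,
    sex=lambda pi: "female",
    speaker_in=lambda pi: False,
))  # SHE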
The question mark before the beneficiary indicates that it is optional; one can say "Roger gave a book" without mentioning a beneficiary. [In] the "GIVE" entry, [PI-D of the TRANSFER definition is equivalent to PI-B] (the giver); PI-E of the TRANSFER entry is equivalent to PI-C of the "GIVE" entry (the givee), and PI-F of the TRANSFER entry is equivalent to PI-D of the "GIVE" entry (the given). Besides buying and selling, another typical transaction is renting. The English word rent is ambiguous, and we will illustrate here the entry for what we call UC-"RENT-2", which is renting out: 57) CC-A C> UC-"RENT-2" E> CC-A F> VB-"RENT" (PI-B/AGT, ?PI-C/BEN, ?PI-D/MSR, PI-E/PAT); CC-A C> UC-TRANSACTION; PI-F = PI-B; PI-G = PI-C; [...] PI-I = PI-E; CC-B = CC-F; CC-C = CC-G; CC-F C> UC-TRANSFER; CC-B = CC-H; CC-C = CC-I; CC-G C> UC-TRANSFER; CC-B = CC-J; CC-C = CC-K; PI-D C> UC-MEDIUM-OF-EXCHANGE; CC-H C> UC-HAVE-OWN; CC-I C> UC-HAVE-OWN; CC-J C> UC-HAVE-USE; CC-K C> UC-HAVE-USE. The first line under E> gives the case frame, which includes two obligatory cases, an agent and a patient ("Bill rented (out) his lawnmower"), and an optional beneficiary and measure (MSR) ("Bill [...]"), and it is necessary to state the equivalences between the PIs in 57) and those in 56). Below these PI equivalences it is also stated that CC-B of the TRANSACTION definition (the transfer of money) is equivalent to CC-F of the "RENT-2" definition, while CC-C of the TRANSACTION definition (the transfer of the object) is equivalent [to CC-G].

It was mentioned that the lexical entry for Japanese UC-"KAS-" is the same as that for English UC-"LEND", as in 55), except that the Japanese entry lacks the last line of 55) in which it is stipulated that lending cannot be a transaction. It can now be seen that UC-"KAS-" is [noncommittal on this point]. 59) PI-A C> UC-MEDIUM-OF-EXCHANGE E> PI-A C> UC-"MONEY". A more complex example involves the categorization of a PI as an instance of UC-"BEAGLE". In this case we know that the PI is also categorizable as an instance of UC-"DOG", that we may expect that it will have a tail (although some dogs do not), that it will bark, and that it will chase cats: 60) PI-A C> UC-"BEAGLE" E> PI-A C> UC-"DOG"; E: VB-HAVE-AS-PART (PI-A, PI-B); PI-B C> UC-"TAIL"; E: VB-BARK (PI-A); E: VB-CHASE (PI-A, PI-C); PI-C C> UC-"CAT".
A f t e r a sentence l i k e !'I bought a bicycle yesterday'' has bezn produced, t h i s line will t h e r e f o r e t r i g g e r a readjustment proces:: which c r e a t e s t h e statement: 70) 8P-IDENTIFI'4:E (UC-"PHM!IE",. lal'-1'~68) (with whatever number it is a p p r o p r~n t e to assign to thia PI 1. As a cchxsequuence, if PI-$468 occurs i n a subsequbnt sentence it will be l e x i c a l i e e d with t h e definite a r t i c l e , as i n "The f r b e i s extra The general n a t u r e of t h e $ r a n s l a t i n n procedure was o a t l i n e d in s e c t i o n I, and d i a~r a m e d in F i g u r e 1. 1'0 summarize aeain, '/AT will s t a r t with a t e x t in the e o u r c e language, will re con st;^-uct t h e v e r b a l i z a t i o n processes which produced t h a t t e x t , and w i l l t h e n i t s e l f produce a. p a r a l l e L l . v'erbalization in t h e t a r g e t language a r e added by changing the verb in t h e first sen'tence from u t t a 'sold' t o k a s i t a ' r e n t e d ' or 'leht L e t us f i r s t revi3w the manner in whic'h 'J1i'I' will r e c o n s t r u c -t t h e o r i g i n a l verbalization of t h e Japanese t e x t . Since our eventual pwsing' corn!~onent w i l l follow a kind of-' ' a n a l y s~s by s y n t h e s i s "procedure; we will a l s o be s u g g e s t i n g -h e r e t h e s t e p s of t h e p a r s i n g The next two questions are:19. V: WIIAT IS THZ A G~T ? 21. V: WIIAT 'IS T33 Z'ATIENlaVAT now has t h e . following represe~tation (cf. 36) above:VB-" UR-' 1 / l l~~~~l f PI-2001t1GT PI-2003'l PAT CJ-". It cc-2002 c g ~~K:~IIAII t I -I tVAT next asks-:whereupon f o r Japanese. it creques t h e structure:It 11 CJ- CC-2002CJ-"K@IA"tr t i CJ-. VMis.now a t a p o i n t where it can l e x i c a l i g e PI-2001 and PI-2003. Beginpling w i t h P I -2~0 1 , it might ask f i r~t :25. V: IS PI-2001 GIVEN?In f a c t , however, we assume t h a t t h e s p e a k e r (and addressee) are latnmtically given, so that VAT contains a general entailment t o the e f f e c t t h a t :Since The translation, then, begins with the same question t h a t beg:in t h e v e r b a l i z a t t o n ' i n Japanese:V: WHAT VAT Tr.rSK DO YOU WATIJT L''t3tZFORMLUilThe answer given in line 2 above wtas V!IRBALIBE GC-2001. The English t r a n s l a t i o n must use i t s own four V:ITAAT IS THE GzNRE'2 V: CAN C C -1 0 0 1 BE CI~T3~GOKIZkZD?We assume t h a t English would n o t i n this c a s e use t h e word because, b u t simply juxtapose t h e t w o sentences, as in example 8) in s e c t i o n 11. Thus *the represenr;aT;lon now is:Lines 9-13 of t h e Japanese v e r b a l i z a t i o n have a d i r e c t correspondence:V:CAN CC-1003 DE CBTEGOHIZ CDS U: NO V: HOW IS CC-1003 C~ITEGORIZEU?At this point the Japanese was UH-. That i s , t h e c a t e g o r i z a t i o n was in terms of the Japanese category UG-"UR-". The p r e s e n t eTample w 8 s chosen because t h e answer t o the last q u e s t i o n above -Gan be found in interlingua. L a t e r we w i l l consider a case where it cannot. A t this polnt V K Y a n s w e r s i t s own q u e s t V: S T I -1 L C EXPLICIT t V: kJHkT IS THE AGENT? 
V: WHAT IS THE PATIENT? [...] The representation now is: [...]. The next exchange is: [...] which creates the representation: [...]. With the lexicalization of PI-1001 the procedure is different in English, since this item cannot simply be deleted as in the Japanese. We follow the questions illustrated [earlier]. Thus the representation now is: [...]. The Japanese answer was REIZOOKO; VAT will now look in interlingua to see whether that item is there, and we assume that it will be. [...] The answer depends on the context, but let us assume that it is yes. Now the representation is: [...]

[Figure 4 diagrams what happens at a point] where a CC or PI needs to be categorized. Following arrow 1, we look across to the source language verbalization to find that the corresponding CC or PI was categorized in a certain way, let us say as an instance of category A. We look next at interlingua (arrow 2). If A were there, we would take the [target language category paired with it]. [...] Suppose that we find two entries in the target language lexicon, [B and C, whose entailments we may call X and Y; we must then look in the source for something that] will allow us to choose between X and Y. (Again there are challenging problems in searching the source language text for the answer.) Let us now assume that we find something in the source language text that is compatible with X but not with Y. We are then able to choose B as the correct target language category. We introduce that category into the target language verbalization via arrow 6 and proceed. In those cases where the choice between X and Y (and hence between B and C) cannot be made, where the source lan[guage text does not settle the question, the choice must be referred to the user].
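A sketch of the category-transfer step just described, with the arrow numbers following Figure 4 as summarized above. The table contents are invented, and interlingua is modeled crudely as a direct pairing of categories:

INTERLINGUA = {'UC-"UR-"': 'UC-"SELL"'}   # shared, directly paired concepts

def transfer_category(source_cat, entailment_search):
    if source_cat in INTERLINGUA:          # arrow 2: found in interlingua
        return INTERLINGUA[source_cat]     # arrow 6: into the target side
    return entailment_search(source_cat)   # otherwise: search the lexicon
                                           # by entailments, as sketched below

print(transfer_category('UC-"UR-"', entailment_search=lambda c: None))
# UC-"SELL"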
Furtherinore, CG-1003 "is said t o be a t r a n s a c t i o n , and c e r t a i n equivalences are s t a t e d between the RENT-2 d e f i n i t i o n and t h e 'THMLACTI h def ini Lion.VAT must ther*efore a s s i g n t h e s e p a r t i c u l -w P I and dC numbers w i t h l n t h e d e f i n i t i o n of u~-TRA.NSAGTION' which was givenx as example 56) in zs-1901 s> CJ-CHANGE (CC-1903 (CC- , .':C-1904 CG71g03 F> ' W-HAVE ' (PI-1901 PI-1902 cc-1904 F> VB-HAVE ((PI-oo1-, I~I-1902) That i s , t h e f i r s t 'transfer involves a. chanae from ZS-1903 t o 3C-1904. In CC-1903 the rentee (1'1-1301) has t h e money , and in C C -1 9 4 the renter (PI-10~1) has it. The second trxnsfer is repre- What VAT wants to find o u t , t h e n , is whetlzer t h e s e t h i n a s that. In 73), however; we have made things easy .by supnlying a context is categorized in the Japanese as an i-nctance of UJ-"AITriYOC, DA" which means something like "be nee'dedl'. L e t us assume t h a t t h e Jap-anese lexicon c~n t a i n s an entry f o r this categorytwhich incrludes the following:saction V I85) CC-A C> U~' -l l B I T l j Y O O DL E> CC-A P> VB-"fIITIJYGO DAtt (pI-B+BEN, PI-z~PA'P) c -E> v B -~~~, I Q (PI-B,' l;c-D) CC-D P> V13-IIAVE (PI-8, PT-C). .Ihe case frameb immediate3y under t h e E> identifies 1'1-B as t h e beneficiary, t h e pe.rson. who needs something, while t h e t h i n g needed .is l a b e l e d T P I X . The second link under t h e E> says t h a t an a l t e r - The first line says 'that PI-B w a n t s CC-C.The second line says that PI- will have to be takeb into account during the implementation o f machine t r a n s l a t i o n along t h e l i n e s suggested above. Two of these examples will, l i k e those i n t h e l a s t sect?or,, involve t h e c h o i c e of a category in the t a r g e t \language rhen t h a t , chbice i s not d i r e c t l y provided by i n t e r l i n g u a . One has to do with t h e t r a n s l a t i o n o f Japanese osieru. into English; the othGr, the t r a n s l a t i o n of English @ve i n t o Japanese. A third example w i l l illustrate t h e 15ind of probkem t h a t arises at t h e st age of subconceptualizat i o n qnd sentence formation.The xach of these ,extunples contains the phrase: 93) Kookyo-ga doko n i a r u k a o s i e t e which is t r a n s l a t e d in t h r e e d i f f e~e n t ways, determined by t h e context in 90): show where t h e I m p e r i a l Palace is in 91): t e l l where t h e l I m p e~7 i a l Palace is i n 92): t e a c h where t h e 1 m~e r i a . l Pal .m isThe difference is l o c a l i z e d in t h e t r a n s l a t i o n o f o s i e t e , a p a r t ic i p i a l !form of t h e verb osieru. This verb may "be transla-ked into The Japanese category UC-"OdIE-If -1s well as t h e English c a t egories UC-"SiIOW", UC-"TELL" 9 and UC-','TZACH" a r e a l l included within t h e more abstract cl:tegory UJ-CONWNICBTION, which can be defined as follows: But how i s it, f o iexomplc, t h a t -t h e context in 90) r e s t r i c t s the t r a n s l t i t i o n of "OJIiS-" to ":JI1OW"'IThe it c l e a r through t h e p h r a s e translated "when we r~;ot t h e~e " th t we werenl)t a t t h e lrnp 1~i a L P a l a c e at the time o f t h e c~~~r n z m~c :~t l v e a c t .A n o t h e r ~e n e r n l principle says t h~t v l d~~a Z a t t e n t i o n c m beI idirected o n l y at t h n~s within v i s u~1 rY:lnpe. 
Thus lJ$-"oiiOf~ is I n -V I E - 1 ( 1 3 PI -C )That is, Ud-','dKIIHS-" is, th'e c < ) t e c o r y chor,Bn if the bcneficiqry o f t h e giving is s o c i v i l l y c l o s e to-the s p e a k e r , c l o s e r t o t h e s n e a k~r t h a n athe aqent ofl t h e g i v i n g , and the agent i n n o t s o c i a l l y h i q h e r t h a n t h e henef'ic-Ta~y. In t r a n s l a t i n t < t e x t s where s u c h i n f o r n X t i o n i s r e l e v a n t , 71iT will b i t l l e r h3ve to s t o r e a n:-~twork o : s o c i a l r e l a t i o n 5 linkin(; a l l t h e r e l e v a n t I n d l v l d~~l s , a network which n a v -;n c ? r t be ,I'eYlvnble frorn t h~! t v t , pr ~t will ~' I v ( ? In o t h e r woqds, btho -e n t n i l q e n t~ o f UO-"KUUUI-I2-" Rre t h e name ao t h o s e o f U3-"KUIId-" except that the agent o f t h e ~i v i n q & socially 'higherthnn t h e beneficiiwy. The l a s t verb th::t we' w i l l conslder h e r e 1s s~s i a { ; e r u :~, T~-CL~:~uL-'20-Al--21~LL-i (PI -3)7 ) V: HOU I : ] CC-1001 SWC,NGEPTGhLIi;ED IN Ti12 dU;iMjiliY? U-YISLI, (CC-10~2, 3C-1003)t h a t % d o e s s c l r~l e t h i r i~ which c a i l s e s a. chmir;'e of st;:.,te f r g m Y be in^ i l l one. l o c a t s o n to H b e i n g i-n a n o t h e r l o c 2 t i o n , and f u r -b h e m~r e t h a t t h e new loc-2tlrrn is shove t h e o f d l o c a t i o n . The 1 e x i z :~l entry f o r U2-"LIII'T", i n s o f -l r a s it c n y~t~~r c s t;i:lr; much Infurnn-t;~.on, i s written as f o l l o w s :
null
Main paper: V: IS THE [...] EXPLICIT?: The next two questions are: 19. V: WHAT IS THE AGENT? 21. V: WHAT IS THE PATIENT? VAT now has the following representation (cf. 36) above): VB-"UR-" (PI-2001/AGT, PI-2003/PAT) [...] CC-2002 CJ-"KARA" [...]. VAT next asks: [...] whereupon for Japanese it creates the structure: [...] CC-2002 CJ-"KARA" [...]. VAT is now at a point where it can lexicalize PI-2001 and PI-2003. Beginning with PI-2001, it might ask first: 25. V: IS PI-2001 GIVEN? In fact, however, we assume that the speaker (and addressee) are automatically given, so that VAT contains a general entailment to the effect that: [...]. The translation, then, begins with the same question that began the verbalization in Japanese: V: WHAT VAT TASK DO YOU WANT PERFORMED? The answer given in line 2 above was VERBALIZE CC-2001. The English translation must use its own four[-digit numbers ...]: V: WHAT IS THE GENRE? V: CAN CC-1001 BE CATEGORIZED? We assume that English would not in this case use the word because, but simply juxtapose the two sentences, as in example 8) in section II. Thus the representation now is: [...]. Lines 9-13 of the Japanese verbalization have a direct correspondence: V: CAN CC-1003 BE CATEGORIZED? U: NO V: HOW IS CC-1003 CATEGORIZED? At this point the Japanese was UR-. That is, the categorization was in terms of the Japanese category UC-"UR-". The present example was chosen because the answer to the last question above can be found in interlingua. Later we will consider a case where it cannot. At this point VAT answers its own quest[ion]: V: IS THE [...] EXPLICIT? V: WHAT IS THE AGENT?
t o t r a n s l a t e t h e s e two sentences i n t o English: We are now in tho upper r i g h t o f Pi~;ure/rC, and we follow arrow 1 t o find t h a t the corresponding Gc; in the ~' a p a n e s e verbalization wag 75) CCAA J> U~~~" I Q~? :74) I rented" V - E> C F> n-ltKASdH (PI-B~AGT, ? i 2 1 -C~B L I J , IJI-DTPAT) GC-A C> UC-TIfiUJ;<FEl? PI'D = 1'1-B r* iT-F = . L I-b - = PI-j.)CC-B = CC-E CC-C = CC-F Substituting four d i g i t numbers f o r t h e %varinbles, we8 obtain: Following a r r o w 4, w e carr-y these entailments across t o the English lexicon and s e a r c h for entries whose entailments a r e comp a t i b l e with 7 6 ) . Compatibility means t h a t these entries w i l l cont a i n what is in what i s 76), but may also contain more. Let us s a y t h a t we f i n d two such entries, one f o r t h e category UC-"LENDt1, which was given. in 55) above, and one f o r UC-"HENT-2", which was given in 57).76) CC-2003 C> U3-1tF&b-'1 I> CS-2003 F> V t L ( -2 0 A , ?L I -~~o ; , VThe next s t e p is t o i s o l -7 t e thf: differences between UC-"LEND"and UC-l'RjQFJ~-2" ' U c -"~i $ f~f f , as mentioned, differs f r o m 75) in containing an a d d i t i o n a l f i n a l line:7 8 ) CC-A -C> UC-TRIiNSACTIOIlThat is, CC-A cannot be categorized as a -tran&action. UC-"i%NT-2", Idhat all this says is t h a t t h e cntegorization o f X2-1003 as an i n s t a n c e 0 1 UC-"KENT-2" involves a number of thin~s. F i r s t , t h e r e must be a person who does h e r e n t i n @ out ( 0 1 , a nerson who receives the rented o b j e c t (PI-l9Ol), t h e money t h a t is p a i d in r e n t (PI-1902) , and t h e rented object i t s e l f (PI-1003). Furtherinore, CG-1003 "is said t o be a t r a n s a c t i o n , and c e r t a i n equivalences are s t a t e d between the RENT-2 d e f i n i t i o n and t h e 'THMLACTI h def ini Lion.VAT must ther*efore a s s i g n t h e s e p a r t i c u l -w P I and dC numbers w i t h l n t h e d e f i n i t i o n of u~-TRA.NSAGTION' which was givenx as example 56) in zs-1901 s> CJ-CHANGE (CC-1903 (CC- , .':C-1904 CG71g03 F> ' W-HAVE ' (PI-1901 PI-1902 cc-1904 F> VB-HAVE ((PI-oo1-, I~I-1902) That i s , t h e f i r s t 'transfer involves a. chanae from ZS-1903 t o 3C-1904. In CC-1903 the rentee (1'1-1301) has t h e money , and in C C -1 9 4 the renter (PI-10~1) has it. The second trxnsfer is repre- What VAT wants to find o u t , t h e n , is whetlzer t h e s e t h i n a s that. In 73), however; we have made things easy .by supnlying a context is categorized in the Japanese as an i-nctance of UJ-"AITriYOC, DA" which means something like "be nee'dedl'. L e t us assume t h a t t h e Jap-anese lexicon c~n t a i n s an entry f o r this categorytwhich incrludes the following:saction V I85) CC-A C> U~' -l l B I T l j Y O O DL E> CC-A P> VB-"fIITIJYGO DAtt (pI-B+BEN, PI-z~PA'P) c -E> v B -~~~, I Q (PI-B,' l;c-D) CC-D P> V13-IIAVE (PI-8, PT-C). .Ihe case frameb immediate3y under t h e E> identifies 1'1-B as t h e beneficiary, t h e pe.rson. who needs something, while t h e t h i n g needed .is l a b e l e d T P I X . The second link under t h e E> says t h a t an a l t e r - The first line says 'that PI-B w a n t s CC-C.The second line says that PI- will have to be takeb into account during the implementation o f machine t r a n s l a t i o n along t h e l i n e s suggested above. 
Two of these examples will, l i k e those i n t h e l a s t sect?or,, involve t h e c h o i c e of a category in the t a r g e t \language rhen t h a t , chbice i s not d i r e c t l y provided by i n t e r l i n g u a . One has to do with t h e t r a n s l a t i o n o f Japanese osieru. into English; the othGr, the t r a n s l a t i o n of English @ve i n t o Japanese. A third example w i l l illustrate t h e 15ind of probkem t h a t arises at t h e st age of subconceptualizat i o n qnd sentence formation.The xach of these ,extunples contains the phrase: 93) Kookyo-ga doko n i a r u k a o s i e t e which is t r a n s l a t e d in t h r e e d i f f e~e n t ways, determined by t h e context in 90): show where t h e I m p e r i a l Palace is in 91): t e l l where t h e l I m p e~7 i a l Palace is i n 92): t e a c h where t h e 1 m~e r i a . l Pal .m isThe difference is l o c a l i z e d in t h e t r a n s l a t i o n o f o s i e t e , a p a r t ic i p i a l !form of t h e verb osieru. This verb may "be transla-ked into The Japanese category UC-"OdIE-If -1s well as t h e English c a t egories UC-"SiIOW", UC-"TELL" 9 and UC-','TZACH" a r e a l l included within t h e more abstract cl:tegory UJ-CONWNICBTION, which can be defined as follows: But how i s it, f o iexomplc, t h a t -t h e context in 90) r e s t r i c t s the t r a n s l t i t i o n of "OJIiS-" to ":JI1OW"'IThe it c l e a r through t h e p h r a s e translated "when we r~;ot t h e~e " th t we werenl)t a t t h e lrnp 1~i a L P a l a c e at the time o f t h e c~~~r n z m~c :~t l v e a c t .A n o t h e r ~e n e r n l principle says t h~t v l d~~a Z a t t e n t i o n c m beI idirected o n l y at t h n~s within v i s u~1 rY:lnpe. Thus lJ$-"oiiOf~ is I n -V I E - 1 ( 1 3 PI -C )That is, Ud-','dKIIHS-" is, th'e c < ) t e c o r y chor,Bn if the bcneficiqry o f t h e giving is s o c i v i l l y c l o s e to-the s p e a k e r , c l o s e r t o t h e s n e a k~r t h a n athe aqent ofl t h e g i v i n g , and the agent i n n o t s o c i a l l y h i q h e r t h a n t h e henef'ic-Ta~y. In t r a n s l a t i n t < t e x t s where s u c h i n f o r n X t i o n i s r e l e v a n t , 71iT will b i t l l e r h3ve to s t o r e a n:-~twork o : s o c i a l r e l a t i o n 5 linkin(; a l l t h e r e l e v a n t I n d l v l d~~l s , a network which n a v -;n c ? r t be ,I'eYlvnble frorn t h~! t v t , pr ~t will ~' I v ( ? In o t h e r woqds, btho -e n t n i l q e n t~ o f UO-"KUUUI-I2-" Rre t h e name ao t h o s e o f U3-"KUIId-" except that the agent o f t h e ~i v i n q & socially 'higherthnn t h e beneficiiwy. The l a s t verb th::t we' w i l l conslder h e r e 1s s~s i a { ; e r u :~, T~-CL~:~uL-'20-Al--21~LL-i (PI -3)7 ) V: HOU I : ] CC-1001 SWC,NGEPTGhLIi;ED IN Ti12 dU;iMjiliY? U-YISLI, (CC-10~2, 3C-1003)t h a t % d o e s s c l r~l e t h i r i~ which c a i l s e s a. chmir;'e of st;:.,te f r g m Y be in^ i l l one. l o c a t s o n to H b e i n g i-n a n o t h e r l o c 2 t i o n , and f u r -b h e m~r e t h a t t h e new loc-2tlrrn is shove t h e o f d l o c a t i o n . The 1 e x i z :~l entry f o r U2-"LIII'T", i n s o f -l r a s it c n y~t~~r c s t;i:lr; much Infurnn-t;~.on, i s written as f o l l o w s : : gram=* 01 t i~c t:)rIyet L a n r r 1~~3 f r e . 
Verbalization and translation. We suggested in an earlier report that there are two dimensions of high quality translation, which we termed naturalness and fidelity. Naturalness is achieved when the target language verbalization adheres to all the constraints of that language; the output will then sound "natural". We are led, then, to the general picture of translation which is shown in Figure 1. The two vertical columns represent the two verbalizations which are involved: on the left the source language verbalization and on the right the target verbalization. The other major component of the translation procedure is the translation component. It is equivalent to a verbalization in the target language. The processes which make up a verbalization are, to the extent that they are algorithmic, those which express target language constraints and, to the extent that they are creative, the processes that went into a particular verbalization. The translation component is a verbalization, though one of a special sort, and there again a detailed understanding of verbalization processes is necessary. This report, then, will be most concerned with the nature of verbalization. We will also devote considerable space to the nature of that special sort of verbalization which is translation. We will have the least to say about parsing. Examples will be cited from English and Japanese. For about the last nine months of the project we were concerned with the development of an interactive computer program that would implement the verbalization processes we hypothesized. Although the program remained primitive, the intention was that it would gradually achieve increased sophistication in its ability to simulate verbalization, translation, and parsing. As it presently simulates the processes of verbalization, it begins with an item that represents the initial holistic idea which the speaker or writer of a text wishes to communicate. It then asks the user, seated at a teletype, to make the series of creative choices that are necessary in the production of the final text. As it simulates translation it should likewise be able to apply the algorithmic processes of the target language automatically, and also to apply certain creative processes on its own by looking at the source language verbalization to see what creative choices were made there. Whenever it is not able to make a creative choice, the program asks the user to do so. We find that this kind of machine-user interaction provides a valuable research technique.
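The division of labor just described, with algorithmic processes applied automatically and creative choices referred to the user at the terminal, can be pictured as a simple driver loop. The sketch below is in modern Python and is not VAT's actual code; the rule and step representations, and the hard-wired answer in the toy run, are assumptions made for illustration.

# Hypothetical driver loop for the machine-user division of labor:
# algorithmic (syntactic) rules run automatically; creative choices are
# referred to the user. Rules are (test, apply) pairs; creative steps
# are (test, question, record) triples.
def verbalize(rep, algorithmic, creative, ask=input):
    changed = True
    while changed:
        changed = False
        for test, apply_rule in algorithmic:          # machine's share
            if test(rep):
                rep, changed = apply_rule(rep), True
        for test, question, record in creative:       # user's share
            if test(rep):
                rep, changed = record(rep, ask(question(rep))), True
    return rep

# Toy run: one algorithmic rule that orders a REASON pair, one creative
# question; the user's answer is hard-wired here instead of typed.
algo = [(lambda r: r == ("REASON", "CC-1002", "CC-1003"),
         lambda r: ("CC-1003", "CC-1002"))]
crea = [(lambda r: r == "CC-1001",
         lambda r: "HOW IS CC-1001 SUBCONCEPTUALIZED?",
         lambda r, a: ("REASON", "CC-1002", "CC-1003"))]
print(verbalize("CC-1001", algo, crea, ask=lambda q: "REASON (CC-1002, CC-1003)"))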
Taking as our ultimate goal the eventual elimination of the user from the translation program altogether, we start with a situation in which the user intervenes at many points. As we learn more we can gradually give the machine more to do and the user less. This technique can be followed not only in verbalization, but also in parsing. Whether the user will eventually disappear from the picture altogether is uncertain. However that may be, the goal of a program in which the contribution of the user is significantly diminished in relation to that of the machine seems workable. Short of the final goal of eliminating the user altogether, an intermediate goal identifiable as "human-aided" machine translation can more easily be foreseen. Here the machine will do the many things for which it is suited, but a human brain will be introduced at those points where the machine has reached its limits. This intermediate goal has, we believe, significant practical as well as theoretical value. We label the initial chunk, as well as the smaller chunks into which it will be analyzed, with the prefix CC (for "conceptual chunk") followed by a four-digit number. It is useful to think of the content of each chunk--each circle in Figure 2--as if it were a mountainous landscape, with the most salient aspects standing out in bold relief and the less salient appearing as only minor hills. All other things being equal, the more salient some aspect of the total content is, the more likely the speaker is to express it when he subconceptualizes. He is not likely to make exactly the same subconceptual breakdown each time he communicates the same initial chunk, partly because he may judge different things to be salient in different contexts. We use a different notation to represent each of the various stages in the verbalization process. At the outset, in this example, the initial chunk CC-1001 was all that was present. This initial representation, before any verbalization processes had been applied, was simply:

2) CC-1001

After the subconceptualization specified in 1) was applied, the representation in 3) resulted. Subconceptualization processes are thus rewrite rules, which replace one stage in a verbalization with a subsequent stage. VAT will now apply an algorithmic or, as we say, syntactic process triggered by the presence of CJ-REASON in 3). Since the reason clause ends in "yesterday", we want the two sentences to be expressed with CC-1003 preceding CC-1002. Thus VAT will automatically change the representation in 3) to the following:

4) CC-1003  CC-1002

This kind of representation, in which no predicate is shown above the two CCs, indicates that they (or their eventual verbalizations) are to occur in the final text in the order shown, with CC-1003 preceding CC-1002. In Japanese the corresponding syntactic process will typically lead to the attachment of CJ-"KARA" at the end of the second sentence.
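The two language-particular realizations of the REASON relation just described can be condensed into one small function. A minimal sketch, assuming the pair ordering described above; the function name and the string-based representation are hypothetical, not VAT's code.

# Hypothetical condensation of the REASON realizations described above:
# in English the two chunks are reordered so that CC-1003 comes first;
# in Japanese the order of 3) is kept and "KARA" ends the 2nd sentence.
def order_reason_pair(cc_1002, cc_1003, language):
    if language == "english":
        return [cc_1003, cc_1002]
    if language == "japanese":
        return [cc_1002, cc_1003 + ' CJ-"KARA"']
    raise ValueError(language)

print(order_reason_pair("CC-1002", "CC-1003", "english"))
print(order_reason_pair("CC-1002", "CC-1003", "japanese"))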
Thus if a representation like that in 3) were produced in a Japanese verbalization, VAT would automatically change it by attaching CJ-"KARA". The quotation marks around "KARA" indicate that this is an item which will actually appear as a word in the text. Subconceptualization proceeds interactively in the following fashion:

12) V: WHAT VAT TASK DO YOU WANT PERFORMED?
    U: VERBALIZE CC-1001
    (VAT creates the following representation:)
    V: HOW IS CC-1001 SUBCONCEPTUALIZED?
    (VAT creates first the following representation:)
    CJ-REASON CC-1002 CC-1003
    (and immediately applies a stored syntactic algorithm that changes it)
    V: HOW IS CC-1003 SUBCONCEPTUALIZED?
    etc.

In this fashion a subconceptual hierarchy of any degree of complexity can be constructed and expressed. The organization of a text may not be entirely hierarchical, however. Not only does a speaker break down larger chunks into smaller chunks--larger "concepts" into subconcepts; one chunk may also remind him of another, so that the organization which results may be in part concatenative. We have been viewing concatenation in terms of excursions away from the main hierarchy, and have been calling such excursions digressions. In some discourse, however, there is no necessary constraint that the main hierarchy be returned to, and the result may be a rambling text in which digression is added to digression. In a more tightly organized text digressions are more likely to appear as parenthetical remarks: brief sidepaths which quickly return to the main hierarchy. At the outset VAT asks, for the initial CC, whether it has an initial summary (one expressed at the beginning of the text). If the answer is yes, it asks first for subconceptualization of the summary, and moves on to ask about the body of the text only after the summary has been completely verbalized. At the end of the text it asks whether there is a final summary. Another question concerns the genre to which the discourse belongs. It would appear that there is a continuum ranging from maximally stereotyped to maximally creative discourse. Most stereotyped are those forms of discourse, such as rituals, in which the speaker has very little choice as to what he is going to say or how he is going to say it. With such discourse the "grammar" of the genre provides many of the answers. An example of these procedures as applied to a real text can be based on the following United Press report, taken, slightly condensed, from the San Francisco Chronicle of May 16, 1974:

13) 1. An 11-year-old boy using a new "super-glue" ...

In this text:

V: WHAT IS THE GENRE?
U: NEWS REPORT

VAT will now assume that the text is a typical news report which begins with a summary. That is, the two CCs are to be expressed with the "yielder" preceding the "yielded", and they are to be connected with a comma followed by the word "AND". This is not the only way in which YIELD can be realized, but for the sake of the example we may regard it as such.
VAT will now proceed to ask about the subconceptualization of the earliest CC in 19):

20) V: HOW IS CC-1002 SUBCONCEPTUALIZED IN THE SUMMARY?

The user has answered that CC-1002 is broken down into two CCs, CC-1004 ("building a model airplane") and CC-1005 ("An 11-year-old boy using a new 'super-glue' accidently glued his eye shut"). They are related by FRAME, a temporal relation in which the first occupies a time period that includes the second. But since CC-1004 is not followed by one of these boundaries, attention is next focused on CC-1005:

23) V: HOW IS CC-1005 SUBCONCEPTUALIZED IN THE SUMMARY?

VAT creates the following representation:

24) CJ-"WHILE" CC-1004   CJ-FRAME CC-1006 CC-1007   CJ-", AND" CC-1003

The user has said that CC-1006 ("an 11-year-old boy using a new 'super-glue'") occupies a time period which includes CC-1007 ("accidently glued his eye shut"). So far we would expect this second instance of FRAME to be expressed by prefixing the word "WHILE" to CC-1006, as was done in 22). Let us suppose, however, that FRAME actually triggers a more complex algorithm which says in effect that one "WHILE" in a sentence is enough, and that a second instance of FRAME will lead to a different realization. When that has been done, it will say that the subconceptualization is specified by the speaker with the notation:

U: YIELD (CC-1002, CC-1003)

A categorization decision is recorded as, for example:

27) CC-1053 C> UC-"GIVE"

Such a statement is to be read "CC-1053 is categorized as an instance of the category UC-'GIVE'". It should be noted that the English word "GIVE" is not the name of this category; rather, any particular CC which is so categorized can be communicated with the word "GIVE". In other words, the decision described in 27) allows us to use "GIVE" as a name for CC-1053. The way in which a speaker decides that a particular CC can be categorized as an instance of some UC is of course a fundamental question. Categorization must be seen as a matter of degree, and not as an all-or-nothing decision. If the degree to which a particular CC is an instance of some UC is very high--if the CC is highly codable--then the use of the word provided by the UC will succeed quite well in conveying the content which the speaker has in mind. If, on the other hand, the content of the CC is of low codability, [the continuation is illegible in the scan]. Sentence formation in a language usually involves taking one PI (the "topic") as a starting point. At present VAT asks first whether the CC can be categorized. It is now time for the following exchange:

30) V: CAN CC-1053 BE CATEGORIZED?

The user says that the decision has been to categorize this CC as an instance of the category UC-"GIVE". VAT then looks into the lexicon and, on the basis of the last line in 29), replaces 32) with the corresponding case frame. Two other considerations are relevant at this point [the first is illegible in the scan]. A second consideration at this point is to establish which PI is the subject or topic, the PI on which the speaker intends the addressee's attention to be focused and concerning which something will be asserted. Again the easy way out is for VAT to ask the user:

37) V: WHAT IS THE SUBJECT?
U: PI-1234

The question in 37) is answered appropriately if the PI has been introduced in some appropriate way prior to the uttering of the present sentence (Chafe 1974). Here again we have a case where the easiest course for VAT, at this preliminary stage of its development, is to ask the user whether the PI is given. Let us assume first that the answer to 40) has been yes, in which case English is likely to lexicalize PI-1234 with a pronoun. This is not always the case; sometimes a PI that is given will not be pronominalized. The principal criterion here seems to be whether pronominalization will produce ambiguity, and ultimately VAT will need to decide whether ambiguity will result. For now, however, we proceed on the assumption that a PI which is given will automatically be pronominalized. The procedure we are currently using for pronominalization in English asks first:

41) V: IS PI-1234 THE SPEAKER?

If the PI is neither the speaker nor the addressee, and its cardinality is one, it must find the sex of this referent:

45) V: IS PI-1234 MALE OR FEMALE?

and lexicalize it as PN-"HE" or PN-"SHE" accordingly. If the answer to 42) was a number greater than one, VAT must decide between "WE" and "THEY", the pronouns which are explicitly plural. Essentially it must ask:

46) V: IS THE SPEAKER A MEMBER OF PI-1234?

If yes, it will produce the lexicalization PN-"WE"; otherwise PN-"THEY". If the PI is not given, VAT asks whether it has a name. If yes, the user gives the name and VAT lexicalizes PI-1234 as that name. If the PI is countable, VAT will also in this case ask about its cardinality, as in 42) above. If the answer is a number greater than one, VAT will create a representation like NN-"TEACHER"/"PLURAL" for a cardinality greater than one. The outcome will thus be either NN-"TEACHER"/"PLURAL" or ART-"A"/"SINGULAR"/NN-"TEACHER", that is, "a teacher". There may be contextual grounds on which VAT will be able to answer a question of this kind automatically. We turn now to the lexical entry for UC-"LIFT". We will want to say that when X lifts Y, certain things are entailed:

51) CC-A C> UC-"LIFT" E>
      CC-A F> VB-"LIFT" (PI-B AGT, PI-C PAT)

The first line under E> gives the case frame, saying that there will be a clause containing the verb "LIFT" accompanied by an agent (PI-B) and a patient (PI-C). Further lines, partly illegible in the scan, include:

      CC-A S> CJ-CHANGE (CC-D, CC-E)
      CC-D F> VB-ACT (PI-B)
      CC-E S> CJ-CONJOIN (...)

The second line under E> says that it is alternatively possible to subconceptualize CC-A in a certain way, which amounts to its being a change brought about by PI-B's action. Having something may mean owning it, or it may mean merely having the use of something, which we will call HAVE-USE. Simple HAVE, as in 52), is meant to be nonspecific as to which of these varieties of having is involved, as may be accounted for with the following two statements:

53) CC-A C> UC-HAVE-OWN E> CC-A C> UC-HAVE
    CC-A C> UC-HAVE-USE E> CC-A C> UC-HAVE

One example of a transfer is the kind which is categorizable with UC-"GIVE", whose lexical entry can be given as follows:

54) CC-A C> UC-"GIVE" E>
      CC-A F> VB-"GIVE" (PI-B AGT, ?PI-C BEN, PI-D PAT)
      CC-A C> UC-TRANSFER
      PI-D = PI-B
      PI-F = PI-C
      PI-E = PI-D

That is, a CC which has been categorized as an instance of UC-"GIVE" has the case frame shown in the first line under E>.
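The question sequence for English pronominalization reads as a small decision tree. Below is a minimal sketch in modern Python, not VAT's code; the answer-gathering dialogue is condensed into plain function arguments, and the argument names are assumptions.

# Hypothetical condensation of the pronominalization questions above:
# plural referents are decided by speaker membership; singular ones by
# speaker/addressee status and then by sex.
def english_pronoun(is_speaker, is_addressee, cardinality,
                    speaker_included=False, sex=None):
    if cardinality > 1:
        return "WE" if speaker_included else "THEY"   # explicitly plural pronouns
    if is_speaker:
        return "I"
    if is_addressee:
        return "YOU"
    return "HE" if sex == "male" else "SHE"           # question 45) above

print(english_pronoun(False, False, 1, sex="female"))          # -> SHE
print(english_pronoun(False, False, 3, speaker_included=True)) # -> WE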
The question mark before the beneficiary indicates that it is optional; one can say "Roger gave a book" without mentioning a beneficiary. In 54), PI-D of the TRANSFER entry is equivalent to PI-B of the "GIVE" entry (the giver); PI-F of the TRANSFER entry is equivalent to PI-C of the "GIVE" entry (the givee); and PI-E of the TRANSFER entry is equivalent to PI-D of the "GIVE" entry (the given). Besides buying and selling, another typical transaction is renting. The English word rent is ambiguous, and we will illustrate here the entry for what we call UC-"RENT-2", which is renting out:

57) CC-A C> UC-"RENT-2" E>
      CC-A F> VB-"RENT" (PI-B AGT, ?PI-C BEN, ?PI-D MSR, PI-E PAT)
      CC-A C> UC-TRANSACTION
      PI-F = PI-B
      PI-G = PI-C
      PI-H = PI-D
      PI-I = PI-E
      CC-B = CC-F
      CC-C = CC-G
      CC-F C> UC-TRANSFER
      CC-G C> UC-TRANSFER
      PI-D C> UC-MEDIUM-OF-EXCHANGE
      CC-H C> UC-HAVE-OWN
      CC-I C> UC-HAVE-OWN
      CC-J C> UC-HAVE-USE
      CC-K C> UC-HAVE-USE

(the exact variable letters are only partly legible in the scan). The first line under E> gives the case frame, which includes two obligatory cases, an agent and a patient ("Bill rented (out) his lawnmower"), and an optional beneficiary and measure (MSR). The entry says that CC-A is a transaction, and it is necessary to state the equivalences between the PIs in 57) and those in 56). Below these PI equivalences it is also stated that CC-B of the TRANSACTION definition (the transfer of money) is equivalent to CC-F of the "RENT-2" definition, while CC-C of the TRANSACTION definition (the transfer of the object) is equivalent to CC-G. It was mentioned that the lexical entry for Japanese UC-"KAS-" is the same as that for English UC-"LEND", as in 55), except that the Japanese entry lacks the last line of 55) in which it is stipulated that lending cannot be a transaction. It can now be seen that UC-"KAS-" is thereby compatible with both English categories.

59) PI-A C> UC-MEDIUM-OF-EXCHANGE E> PI-A C> UC-"MONEY"

A more complex example involves the categorization of a PI as an instance of UC-"BEAGLE". In this case we know that the PI is also categorizable as an instance of UC-"DOG", that we may expect that it will have a tail (although some dogs do not), that it will bark, and that it will chase cats:

60) PI-A C> UC-"BEAGLE" E>
      PI-A C> UC-"DOG"
      E: VB-HAVE-AS-PART (PI-A, PI-B)
         PI-B C> UC-"TAIL"
      E: VB-BARK (PI-A)
      E: VB-CHASE (PI-A, PI-C)
         PI-C C> UC-"CAT"

It may be that E: should be expressed as a probability; that is, that there is a continuous range over which we may expect something to be entailed, with necessary entailment being one extreme. If any or all of these PIs occur in the next sentence, they will be pronominalized, and it will not be necessary for VAT to know what particular instance it is (the particular PI number here is illegible in the scan). When, during a later sentence, VAT comes to the question of categorizing such a PI as an instance of UC-"BICYCLE", it may be noted that the second line under E>, which deals with the categorization of PI-B, is a statement like that in 66) above.
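The BEAGLE entry, with its suggestion that entailments might carry probabilities rather than being all-or-nothing, suggests a simple data shape. A minimal sketch in modern Python; the numeric probabilities are invented for illustration, and the dictionary layout is an assumption, not VAT's lexicon format.

# Hypothetical data shape for a lexical entry whose entailments carry
# probabilities (1.0 = necessary entailment); the numbers are invented.
BEAGLE = {
    "category": 'UC-"BEAGLE"',
    "entails": [
        ('PI-A C> UC-"DOG"',                  1.00),
        ('VB-HAVE-AS-PART (PI-A, PI-B:TAIL)', 0.95),  # some dogs lack tails
        ('VB-BARK (PI-A)',                    0.90),
        ('VB-CHASE (PI-A, PI-C:CAT)',         0.70),
    ],
}

def expected(entry, threshold=0.8):
    """Return the entailments we may expect above a given probability."""
    return [e for e, p in entry["entails"] if p >= threshold]

print(expected(BEAGLE))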
After a sentence like "I bought a bicycle yesterday" has been produced, this line will therefore trigger a readjustment process which creates the statement:

70) SP-IDENTIFIABLE (UC-"FRAME", PI-1468)

(with whatever number it is appropriate to assign to this PI). As a consequence, if PI-1468 occurs in a subsequent sentence it will be lexicalized with the definite article, as in "The frame is extra". The general nature of the translation procedure was outlined in section I, and diagrammed in Figure 1. To summarize again, VAT will start with a text in the source language, will reconstruct the verbalization processes which produced that text, and will then itself produce a parallel verbalization in the target language. Variants of the example are added by changing the verb in the first sentence from utta "sold" to kasita "rented" or "lent". Let us first review the manner in which VAT will reconstruct the original verbalization of the Japanese text. Since our eventual parsing component will follow a kind of "analysis by synthesis" procedure, we will also be suggesting here the steps of the parsing process. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
602
0
null
null
null
null
null
null
null
null
aa454e8cea7ebc71ddc083d0097fd12626853e37
219301246
null
The {ATEF} and {CETA} Systems
ATEF converts an input string into a labeled tree; the label evolves under the control of a grammar. A set of labels is associated with each segment of the string, and several functions permit control of the number of alternative labels. CETA simulates a transformational grammar. It uses a set of grammars with conditional linkages. The applicability of a transformation can be determined in part by conditions on the resulting tree. Computer processing of natural languages requires more or less polished algorithmic models. The two systems presented here represent a choice of a large class among the algorithms proposed in recent years to solve these problems. The principal choice determined by these systems lies in the formal use of labeled trees (arborescences). Freedom of choice of these labels and possible structures gives these systems broad fields
{ "name": [ "Chauche, J." ], "affiliation": [ null ] }
null
null
null
1975-07-01
0
4
null
of applications in several domains and notably in that of the automatic processing of natural languages. The ATEF system has the purpose of transforming a string of words into a tree which is manipulable by the CETA system. The definition of labeled trees determines what objects CETA can manipulate and the objectives of ATEF. This note therefore begins with the definition of labeled trees. To obtain a tree of this type beginning with an input string, ATEF uses a dictionary and a finite-state grammar. The result of this system can be manipulated by CETA in order to obtain the desired type of structure. The example of analysis given here shows the possibilities of the CETA system with two different manipulative strategies: search for constituent or dependency structure. A tree is a set of points with which is associated a structure, that is to say a relation having the properties: (1) the relation between two points is directed (one point depends on the other); (2) a point cannot depend on a point belonging to its own descent set (the descent set of a point is the set of points that depend on it, the points that depend on them, etc.); (3) a unique point descends from no other. It is possible to draw a tree placing below a point all of its descendants, linked by lines. The definition of a particular label consists in an enumeration of the variables relevant to the label. A set of labels can be predefined and is collected in a so-called format file. A dictionary is a set of segments (character strings), with each of which is associated a label, a processing pointer, and a lexical unit pointer. The processing pointer specifies the particular process which must be associated with the segment. The analysis of the input word by the ATEF system resides at first in a label processing, that is to say in an evolution of the empty label toward a final label characteristic of the analyzed word. This evolution is controlled by the grammar, which at each moment has access to two labels: the label being developed (noted by the symbol C) and the label associated with the segment which was read in the dictionary (noted by A). The analysis of a word aims to produce a segmentation of the word simultaneously compatible with the segments of the different dictionaries (the word must be an assembly of dictionary segments) and compatible with a correct evolution of the grammar. The choice and evolution of the segmentation has to do with the sequence of input characters. The segmentation forces, above all, a prior linguistic choice. Thus with the segment "UN" two possibilities can be conceived: either accept "UN E" as a coherent segmentation, or have the segment "UNE" in the dictionary and refuse the segmentation "UN E". For each initial form several segmentations are possible to arrive at the same results, and only a linguistic study of the phenomena permits a decision on the strategy to be adopted. In any event, this strategy is left to the user of the system. In the course of a segmentation the system can operate directly on the nonsegmented characters in order to force them into a "canonical" form.
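The three tree conditions listed above (directed dependency, no point depending on its own descent set, a unique root) can be checked mechanically. A minimal sketch in modern Python, not part of the original paper; the parent-mapping representation is an assumption made for illustration.

# Minimal check of the labeled-tree conditions above, assuming a
# {node: parent} mapping with None marking the unique root.
def is_tree(parent):
    roots = [n for n in parent if parent[n] is None]
    if len(roots) != 1:                 # exactly one point depends on no other
        return False
    for n in parent:                    # no point may appear in its own ancestry
        seen, p = {n}, parent[n]
        while p is not None:
            if p in seen:
                return False
            seen.add(p)
            p = parent[p]
    return True

print(is_tree({"root": None, "a": "root", "b": "root", "c": "a"}))  # -> True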
Thus in the case of the word réel several possibilities arise to accept a word like réalité: put the segment "real" in a dictionary as well as the segment "reel" (the former will generate words like réalité, irréalité, etc.), or put the single segment "reel" in the dictionary, in which case the analysis of the word réalité will follow the schema: réalité => 1st segment found "ité", remainder "réal"; modification réal -> réel => 2nd segment found "réel"; segmentation "réel ité". N.B. In this analysis, it is to be noted that the search for successive segments is performed from left to right for the input word. This depends on the strategy adopted and, for a given use, the direction of the segmentation of a word can be either left to right or right to left. (Observe that the segmentation of the word "chacune" will then be obtained as CHAC UNE because the segmentation CHACUN E will be rejected as a subsegmentation of "UNE". This problem can easily be resolved because these functions appear in the rules of the grammar and are consequently conditional. One can at the same time forbid the subsegmentation "UN E" in the word "UNE" and authorize this segmentation in the word "CHACUNE".) The calculation of the set of labels associated with a word is produced and controlled by the grammar. This calculation corresponds above all with a conditional modification of the label C or current state starting from the label A or argument state. [A condition attached to each rule determines whether the modification is accepted or] rejected. This condition can refer to the labels of the preceding analyzed words and can condition its result on the analysis of the following form. Thus for example in the course of the analysis of the word "LA" in the sequence "il la voit", the segmentation taking "la" as article can be rejected. The transfer
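The segmentation discipline just described, peeling dictionary segments off one end of the word, rewriting the remainder into a canonical form, and rejecting forbidden splits, can be sketched compactly. A minimal sketch in modern Python of the right-to-left variant; the dictionary, the canonical-form table, and the forbidden-split test are illustrative stand-ins, not ATEF's actual data.

# Hypothetical sketch of ATEF-style segmentation: peel known segments
# from the right, rewriting the remainder into a "canonical" form
# (e.g. real -> reel) and rejecting forbidden splits.
DICT = {"ite", "reel", "une", "chac"}
CANON = {"real": "reel"}            # adjustment applied to the remainder
FORBIDDEN = {("un", "e")}           # e.g. "UN E" disallowed inside "UNE"

def segment(word, acc=()):
    if not word:
        return list(acc)
    for i in range(len(word)):
        seg, rest = word[i:], word[:i]
        if seg in DICT and ((rest[-2:] if rest else ""), seg) not in FORBIDDEN:
            out = segment(CANON.get(rest, rest), (seg,) + acc)
            if out is not None:
                return out
    return None

print(segment("realite"))   # -> ['reel', 'ite'] via the real -> reel rewrite
print(segment("chacune"))   # -> ['chac', 'une']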
null
null
null
null
Main paper: of applications in several domains and notably in that of the automatic processing of natural languages. The ATEF system has the purpose of transforming a string of words into a tree which is manipulable by the CETA system. The definition of labeled trees determines what objects CETA can manipulate and the objectives of ATEF. This note therefore begins with the definition of labeled trees. To obtain a tree of this type beginning with an input string, ATEF uses a dictionary and a finite-state grammar. The result of this system can be manipulated by CETA in order to obtain the desired type of structure. The example of analysis given here shows the possibilities of the CETA system with two different manipulative strategies: search for constituent or dependency structure. A tree is a set of points with which is associated a structure, that is to say a relation having the properties: (1) the relation between two points is directed; (2) a point cannot depend on a point belonging to its own descent set; (3) a unique point descends from no other. It is possible to draw a tree placing below a point all of its descendants, linked by lines. The definition of a particular label consists in an enumeration of the variables relevant to the label. A set of labels can be predefined and is collected in a so-called format file. A dictionary is a set of segments (character strings), with each of which is associated a label, a processing pointer, and a lexical unit pointer. The processing pointer specifies the particular process which must be associated with the segment. The analysis of the input word by the ATEF system resides at first in a label processing, that is to say in an evolution of the empty label toward a final label characteristic of the analyzed word. This evolution is controlled by the grammar, which at each moment has access to two labels: the label being developed (noted by the symbol C) and the label associated with the segment which was read in the dictionary (noted by A). The analysis of a word aims to produce a segmentation of the word simultaneously compatible with the segments of the different dictionaries and compatible with a correct evolution of the grammar. The choice and evolution of the segmentation has to do with the sequence of input characters. The segmentation forces, above all, a prior linguistic choice. Thus with the segment "UN" two possibilities can be conceived: either accept "UN E" as a coherent segmentation, or have the segment "UNE" in the dictionary and refuse the segmentation "UN E". For each initial form several segmentations are possible to arrive at the same results, and only a linguistic study of the phenomena permits a decision on the strategy to be adopted. In any event, this strategy is left to the user of the system. In the course of a segmentation the system can operate directly on the nonsegmented characters in order to force them into a "canonical" form.
Thus in the case of the word réel several possibilities arise to accept a word like réalité: put the segment "real" in a dictionary as well as the segment "reel" (the former will generate words like réalité, irréalité, etc.), or put the single segment "reel" in the dictionary, in which case the analysis of the word réalité will follow the schema: réalité => 1st segment found "ité", remainder "réal"; modification réal -> réel => 2nd segment found "réel"; segmentation "réel ité". N.B. In this analysis, it is to be noted that the search for successive segments is performed from left to right for the input word. This depends on the strategy adopted and, for a given use, the direction of the segmentation of a word can be either left to right or right to left. (Observe that the segmentation of the word "chacune" will then be obtained as CHAC UNE because the segmentation CHACUN E will be rejected as a subsegmentation of "UNE". This problem can easily be resolved because these functions appear in the rules of the grammar and are consequently conditional. One can at the same time forbid the subsegmentation "UN E" in the word "UNE" and authorize this segmentation in the word "CHACUNE".) The calculation of the set of labels associated with a word is produced and controlled by the grammar. This calculation corresponds above all with a conditional modification of the label C or current state starting from the label A or argument state. [A condition attached to each rule determines whether the modification is accepted or] rejected. This condition can refer to the labels of the preceding analyzed words and can condition its result on the analysis of the following form. Thus for example in the course of the analysis of the word "LA" in the sequence "il la voit", the segmentation taking "la" as article can be rejected. The transfer Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
595
0.006723
null
null
null
null
null
null
null
null
0ccf7d5d193bf9ca224ebac0e5af44d69e1670c8
219300604
null
Semantically Analyzing an {E}nglish Subset for the Clowns Microworld
A microworld system is described for displaying visual representations of the meaning of a subset of English that concerns a clown that can balance objects and can participate in motion scenarios. Nouns such as "clown", "lighthouse", "water", etc. are programs that construct images on a display screen. Other nouns such as "top", "edge", "side", etc., are defined as functions that return contact points for the pictures. Adjectives and adverbs provide data on size and angles of support. Prepositions and verbs are defined as semantic functions that explicate spatial relations among noun images.
{ "name": [ "Simmons, Robert F. and", "Bennett-Novak, Gordon" ], "affiliation": [ null, null ] }
null
null
null
1975-07-01
2
10
null
null
dynamic process model that can be operated to produce successive states described by the English. The principles used in the system are a concise representation of my gleanings from recent literature and of course from work of my own and my students. In this section only a few of hundreds of natural language processing papers are suggested as entries to the literature. At least a dozen reviews of this literature are available; Walker's is not only among the most recent and complete (Walker 1973), but it includes a section that cites the reviews. Since 1970, the language processing literature has been rich in [work of this kind]. We began with the notion that it should be quite easy to construct a microworld concerning a clown, a pedestal, and a pole:

(C1, TOK CLOWN, SUPPORTBY C2, ATTACH(C1 FEETXY = C2 TOPXY))
(C2, TOK PEDESTAL, SUPPORT C1, ATTACH(C2 TOPXY = C1 FEETXY))
(CLOWN, EXPR (LAMBDA ...), FEETXY ..., SIZE 3, STARTPT ...)

The grammar, partly illegible in the scan, includes:

DCLAUSE -> PP | RELCONJ | RELCLAUSE | VMOD
PP -> PREP* + NP
RELCONJ -> RCONJ + CLAUSE
RELCLAUSE -> (RELPRON) + PRONCLAUSE
PRONCLAUSE -> VP | NP + VG + (DCLAUSE)
VPAST -> SUPPORTED, SAILED, ...

In this net, if the sentence begins with an NP, the PUSH NP will return the structure of an NP in the * register. At that point the register SUBJect is set to that value. When a V string is analyzed by PUSH VS, the result of parsing "clowns hold poles" with the above net is:

(HOLD SUBJ CLOWNS, OBJ POLES)

In fact, it is necessary to create new names for each word used in a sentence--to avoid clobbering dictionary information--so the result from actual nets would be token structures such as (C1 ...), with adjectives and adverbs handled in the following fashion. An adjective, e.g. big, has the following lexical structure:

(BIG ADJ T, POS T, TYPE SIZE, VALUE 7)

PUTMODS will for each adjective obtain the TYPE and VALUE and put them on the noun's property list. Thus, "a big red clown" results in:

(C1 TOK CLOWN, DET INDEF, NBR SING, SIZE 7, COLOR 1)

where COLOR 1 assumes that some mechanism for assigning [color values is available]. [The diagrams of the clause net and VP net are garbled in the scan; recoverable arcs include PUSH NP SNTC, a PUSH PP "BY" arc for passives, a TST AUX=BE V=ED test that exchanges SUBJ and OBJ, and PUT operations recording SUBJ and OBJ on the verb token.] This VP net first pushes a VG, verb group. VG is not shown in this discussion, but it scans the sentence string for an acceptable sequence of auxiliaries and adverbs dominated by a verb. It makes a token of the verb and puts its tense and auxiliaries on that token as property-value pairs. It returns the token name. In exiting node VP1 we seek an NP as a syntactic OBJect and, finding one, add the subject and object as properties. [Arcs of the DCLAUSE net, also garbled, include PUSH PP (CAT PREP), PUSH RELCONJ (CAT RCONJ), a CAT RPRON arc pushing PRONCLAUSE, the call (ANTEC * GLST) for pronouns, and (VBMATCH * GLST) for participles.] The DCLAUSE net is fairly intricate in that it accounts for PPs, relative pronoun clauses, infinitive modifiers, participial clauses and clauses introduced by relative conjunctions.
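The tokenizing convention just described (a fresh token per word so that dictionary entries are never clobbered, with SUBJ/OBJ registers filled around the verb token) can be miniaturized as follows. A minimal sketch in modern Python, not the paper's LISP; the function names and the dictionary shape are assumptions for illustration.

# Hypothetical miniature of the parse result described above: each word
# gets a fresh token, and the clause net fills SUBJ/OBJ registers on
# the verb token.
import itertools
counter = itertools.count(1)

def maketok(word):
    return (f"C{next(counter)}", {"TOK": word})

def parse_svo(subj, verb, obj):
    s, sprops = maketok(subj)
    v, vprops = maketok(verb)
    o, oprops = maketok(obj)
    vprops.update(SUBJ=s, OBJ=o)       # register contents recorded as properties
    return {s: sprops, v: vprops, o: oprops}

print(parse_svo("CLOWNS", "HOLD", "POLES"))
# e.g. C2 -> {'TOK': 'HOLD', 'SUBJ': 'C1', 'OBJ': 'C3'}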
A PP is one or more prepositions followed by an NP. A RELCONJ starts with an RCONJ such as "while", "after", etc. and may be followed by a DCLAUSE or a CLAUSE. A relative pronoun clause begins with an optional relative pronoun and is followed by a pronoun clause which is either a VP or an NP followed by a VG and optional DCLAUSEs. For the moment we insist for computational economy that a relative clause be introduced by a relative pronoun; actually the form of a pronominal clause is sufficiently well defined that PUSH PRONCLAUSE can identify it without a relative pronoun in most cases. When a pronoun is found, here or in an NP, the function ANTECedent is called to scan the list of preceding nouns, to find the best agreement in person, number, and gender.
.Ex 1 A c,T h e e a r l i e r action sf rhe preposition semantic f u n c t i o n s will have reduced ' N o t e s : 'x -+~y X imp'lies Y by the following rules: If a t r i a n g l e has a l s~ been d e f i n e d , we can then define: H0USE:SIZE; SQUARE:SIZE; F0RWW:SIZE; TR1ANGLE:SIZE;x 4 y SET x t o y A x Not X F'(x) Evaluate function F of X n( and V o rANIM (StJBJ) + A + SUBJ FORCE (SU3J) + 1 + SUBJ VEHIC (SUBJ) -, VEHIC + StJBJ S V M I C /\ m I C (ow) + W B I C + OBJ MEDIUM (OBJ) + MED + OBJ OBJ -+ TH + OBJ FOR EACH COhF a MED A IN V ONLV THROUGH A MEDIM(CObfP) + MED + COMP % VEHIC A IN V ON VIt is the convenience and s i m p l~c i t y e o f these LOGO conventi'ons that convinced m e that drau3,ng p i c t u r e s from sentences would n o t add any g t e a t c o m p l e x i t y t o a basic l a n g u a g e a n a l y s i s system.. LOGO offers We have also noticed that the semantic network that is produced as a result of semantic analysis can be seen as a problem graph by the functions t h a t organize images and it is apparent that as these graphs come to contain larger numbers of images, i t will be necessa'ry to Bruce, wrtram C . , "A model f o r temporal r e f e r e n c e s and its a p p l i c a t i o n i n a q u e s t i o n answering program." ~r t i f i c i a l I n t e l l i g e n c e , 3 1972 pp 1-25. 1 ) ( S U F I n * ) ( T o ~1 3 2 ) )* I 1 ) ( L I F T R nL'X(GETC7 1 v G ) 1I T 0 V G ) ) ( C A T V T (HOP V V '~' ) ) (T?? V A U X (-GETR P U X )V ) ) 1 L L AOJ ( q 1 2 E 4 ) HOP V O l ) 1 ) wvl ( T S T V V T {SET!? V 9 ) (sFTR HO (MnKETQK (SETQ b L f T [ C b h S fll S f ) ) ( T O , V G l l l 1 ) ( v( B I G AnJ ( S I Z E 7 ) ( C L A S S S I Z E ) ( L A R G E ADJ (~T Z E 6 ) ( C L P S~ sIZF) ) ( L T T T L E ADLJ,(sIZE 3 ) ~C L A ' F S SIZE) J ( S M A~R Q P PREP ( 7 T I ) NBR PL 1 ( P A R T 7 ) B s ( I N k17t-i) 1 ) (HFAD r\l ( N R R S I N G ) t P 4 F T t-) ) ( N O S E N ( N Y R ~I N C ) ~P~R T 7 ) ) t~o~k N (PIC7 f ) ( N R R SING)) (HEAO M ( N R R $ I N G ) ( P P R T 7 ) ) (PFET N (
null
null
) 1 +PROCESS, MQDEI. R SET^ LFV 0)( ( S E W F O C ( G F T 4 = q U P J ) 1 ( S F T Q vw a ) ) ( ( S E T 0 F,OC(GET @ = O P J ) ) ( s E T Q V R * ) ) 1 U N l V O O P ICSCC * T R A~E * M A S~$ C L 1 ( L T N T T ) (MAXWLOQP) 1 ) ) 4(PRIIC; 1 W"wT EER N X L ) CSETQ ITADDER 2048) 1 ) ) J1 E IS A SHQRT k OF E R M E T O E R A S E YE s C A E~N . J, ( E ( L A M B D A ( E R A S E ) 1 ) 4A (~c0Nr-j (: (NULL IlhIVQT!) ( 6 0 P ) 1 )( RYTIJRhc ) ) IT-NFO, 4WD E X P R . (1 (FIX (PLclS % n , W ) 1 1 ) C MOVE WE TURTLE BACKNARD$ JI rRnCN (LAMBDA ( W ) ( P O R N ~M T V L & w ) 1 1 ) Jt MOVE THE TURTLE B Y A S f G h~E n . A~O l J h T 4 V W V E ( L A M R D A ( W ) W O R M w ) 1 ) JI( S C A L E m w W m~ ( $ 1 ( P R c G 0 (SFTO CSIZF '(TIMES S ~I .~M A L~X~E~ 1 RIGHT 90) (FORM A T ) ( R I G H T 9 0 7 1 1 1 .Lr o r -b y a single noun phrase euch ae a clown in n boat" dr by an imperative, "balance a pedestal". It ddee not ac'cept queetion forms; that would require an additional arc from CLAUSE l a b e l l e d , PUSH QFOBN SNTC. The ordinary form of an arp is an arc-label such as CATegory, PUSH,, POP, TST followed by its arguL ment, followed by any-condition statement. SNTC is simply the variable t h a tt r u e a t all t i m e s i n the model. INIT !is the s e t of relations true Bt the initial s t a t e of time 5n the model, INTER Ts t h o s e f o r t h e intermadiage s t a t e s , and R~S U L T i s t h e s e t f b r the f i n a l s t a t e . When a f u n c t i o n W G f o r Pragmatics evaluates one of these attributes, t h e r e s u l t is t o evaluatethese PUT functions t o produce a s e m a n t i c network r e p r e s e n t i n g t h e s t a t e of( 7 0 V f l 1 ) ) ( T S T O K~O R ( A~D ( S F T Q J ( G E T R A U X ) ( s F T R TENSE ( P E T (CAP 3 ) E T r N S E ) 1 1 ( S E T R T F N S E { G F T ( G E T R V 1 :TEN$€) 1 i?)(PUT ( G E T P HD) STENSF IGETR TEWFF) )( P U T ( G E T R WD) EAIJX ( S F T P A U K ) ) ( I I F T Q AUX (GET'R AIJX)) ( P U T t c~T 1 4 HG) ZVMCIU ( G F T R VMon)) (HOB V63) 1 ) ( V 0 2 ( P O P ( G F T R HU) T 1 ( " P ( P U S H V O S W T C ( S E W V 9 ) ( T T ) V P 1 )T H I S IS THE C A h O N I C A L V~Q P O F v~T I O Y F O R T I E SYST-EM s I M O V E~ ( L n M s n~ ( S T ) (PROG C~UBJ ORJ COMP C O W S A TH PMonI V E H I C M E Q I u M S G J ) +SET S U~J ineJ COPPS N I T H V S E T 4 ( V S E 7 S T ) ( c~N o ,~V 2 ) 1 ~~X F F E R F~C F ( C a n R V i ) (CAnR v 2 ) I ) ) ) e
Main paper: (setg v e h i c ( v a k e t o n $ ) ) ( c o n e ( ( a h 0 v e h i c (null m f d i l i m ) ( $ e t q j ( b e t o k ' v e h t c e d m e d i u m ) ) (setc 4 e d t u m (,pahftr)w j): ) 1 +PROCESS, MQDEI. R SET^ LFV 0)( ( S E W F O C ( G F T 4 = q U P J ) 1 ( S F T Q vw a ) ) ( ( S E T 0 F,OC(GET @ = O P J ) ) ( s E T Q V R * ) ) 1 U N l V O O P ICSCC * T R A~E * M A S~$ C L 1 ( L T N T T ) (MAXWLOQP) 1 ) ) 4(PRIIC; 1 W"wT EER N X L ) CSETQ ITADDER 2048) 1 ) ) J1 E IS A SHQRT k OF E R M E T O E R A S E YE s C A E~N . J, ( E ( L A M B D A ( E R A S E ) 1 ) 4A (~c0Nr-j (: (NULL IlhIVQT!) ( 6 0 P ) 1 )( RYTIJRhc ) ) IT-NFO, 4WD E X P R . (1 (FIX (PLclS % n , W ) 1 1 ) C MOVE WE TURTLE BACKNARD$ JI rRnCN (LAMBDA ( W ) ( P O R N ~M T V L & w ) 1 1 ) Jt MOVE THE TURTLE B Y A S f G h~E n . A~O l J h T 4 V W V E ( L A M R D A ( W ) W O R M w ) 1 ) JI( S C A L E m w W m~ ( $ 1 ( P R c G 0 (SFTO CSIZF '(TIMES S ~I .~M A L~X~E~ 1 RIGHT 90) (FORM A T ) ( R I G H T 9 0 7 1 1 1 .Lr o r -b y a single noun phrase euch ae a clown in n boat" dr by an imperative, "balance a pedestal". It ddee not ac'cept queetion forms; that would require an additional arc from CLAUSE l a b e l l e d , PUSH QFOBN SNTC. The ordinary form of an arp is an arc-label such as CATegory, PUSH,, POP, TST followed by its arguL ment, followed by any-condition statement. SNTC is simply the variable t h a tt r u e a t all t i m e s i n the model. INIT !is the s e t of relations true Bt the initial s t a t e of time 5n the model, INTER Ts t h o s e f o r t h e intermadiage s t a t e s , and R~S U L T i s t h e s e t f b r the f i n a l s t a t e . When a f u n c t i o n W G f o r Pragmatics evaluates one of these attributes, t h e r e s u l t is t o evaluatethese PUT functions t o produce a s e m a n t i c network r e p r e s e n t i n g t h e s t a t e of( 7 0 V f l 1 ) ) ( T S T O K~O R ( A~D ( S F T Q J ( G E T R A U X ) ( s F T R TENSE ( P E T (CAP 3 ) E T r N S E ) 1 1 ( S E T R T F N S E { G F T ( G E T R V 1 :TEN$€) 1 i?)(PUT ( G E T P HD) STENSF IGETR TEWFF) )( P U T ( G E T R WD) EAIJX ( S F T P A U K ) ) ( I I F T Q AUX (GET'R AIJX)) ( P U T t c~T 1 4 HG) ZVMCIU ( G F T R VMon)) (HOB V63) 1 ) ( V 0 2 ( P O P ( G F T R HU) T 1 ( " P ( P U S H V O S W T C ( S E W V 9 ) ( T T ) V P 1 )T H I S IS THE C A h O N I C A L V~Q P O F v~T I O Y F O R T I E SYST-EM s I M O V E~ ( L n M s n~ ( S T ) (PROG C~UBJ ORJ COMP C O W S A TH PMonI V E H I C M E Q I u M S G J ) +SET S U~J ineJ COPPS N I T H V S E T 4 ( V S E 7 S T ) ( c~N o ,~ [vni.ff ( l a m b d a ( v 1 v 2 ) ( l a l $ ' t ~d~f f e r e n c e ( c 4 a v i ) ( c a r: V 2 ) 1 ~~X F F E R F~C F ( C a n R V i ) (CAnR v 2 ) I ) ) ) e background: In t h i s s e c t i o n only a f e w of hundreds o'f n a t u r a l language processing papers are s u g g e s t e d as entries to t h e l l i t e r a t u r e . A t least a dozen reviews of t h i s l i t e y a t u r e are a v a i l a b l e ; h a l k e r ' s i s not only among t h e most recent and complete (Walker 1973) , but i t i n c l u d e s a s e c t i o n t h a t cites t h e reviews.Since 1970, the langu'age processing l i t e r a t u r e has been r i c h i n We began with the notion that it should be quire easy to construct a microwcizld concerning-a clown, a pedestal, and a pole. 
(Cl, TOK CLOWN, SUPPORTBY C2, ATTACH(C1 FEET= C 2 TOPXY)) (€2, TOK PEDESTAL, SUPPORT C 1 , ATTACH(C2 TOPXY Cl F E E T X I ) ) (CLOUN, E X P R C W D A O ,) FEET X I , SIZE 3, STARTPT+ v + (ADV) DCLAUSE -t PPI R E L C O N ' J ĨR E L C L A U S E \ VMOD PP + PREP* + NP BEUDNJ -, RCONJ + CLAUSE RELCMUSE -+ (RELPRON) + PRONCLAUSE PRONCUUSE -t VO( WP + ,VG .+ (DCLAUSE) vMoDVPAST '+ SUPPORTED SAILED, . . In t h i s n e t , if t h e sentence begins with an NP, the PUSH NP will return Ehe structurk of an NP iq t h e * r e g i s t e i . A t that point t h e r e g i s t e x SUBJect is s e t t o 'that v a l u e . When a V S t r i n g i s analyzed. b y PUSH VS 'then t h e r e s u l t of p a r s i n g "clowns hold, poles" w i t h t h e above n e t is:(HOLD SUBJ CLOWNS, OBJ POJ+ES)I n , f a c t , it is necessary to c r e a t e new names f o r each word used in a sentence--to avoid clobbering d i c t i o n a r y information--so the result from a c t u a l n e t s would be: with adjectives and adverbs i r i the following fashioe:( C 1A n adjective, e . g . big, has the following lexical structure:(BIG ADJ T, POS T, TYPE SIZE, VALUE 7)PUTMODS will for each adjective obtain t h e TYPE and VALUE and p u t them on t h e noun's property list. Thus, "a big red clown" results in:(C1 TOK CLOWN, DET INDEF, NBR SING, SIZE 7, COLOR 1)where COLOR 1 assumes that some mechanism f o r a s s i g n i n g Clause n e t .'PUSH NP SNTCr PUSH PP "BY P A S V O B J + 3 PUT V t ' S U B J SUBJ SUBJ +-* PUT v !ISUB-J SUBJ ' if "OBJ'OBJ PUT Y "OBJ OBJ TST AUX=BE V=ED POP v (PUTI v I'SUBJ SUBJ) PA$V + T O B J + S U B J S U B J t NIL tThis VP net first pushes a VG, verb group. VG is n o t shown in t h i s discussion, b u t i t scans t h e sentence string for an a c c e p t a b l e sequence of auxilaries, and adverbs domindt'ed by a verb. It makes a token of the verb and puts i t s tense and a u x i d i a r i e s on that token as p r o p e r t y value pairs. I t returns t h e token name. I n exiting node VP1 we seek an NP as a syntactic OBJect and finding one, add the subjgct and object as properties PUSH PI' (GAT PREP) h rl HD + * A POP HD T 4 PUSH RELCONJ (CAT RCONJ) TST GETR SUBJ 1 CAT RPRON @ PUSH, PRQNCLAUSJ) (.EVAL ((GET * T~K ) * ) TST ' I I SUBJ + (ANTEC * GLS? ((HOLD *) J. SUBJ CAT v "ED "TNG P 11 TST T SUBJ + (VBMATCH * GLST) p , HD +- -I. SUBJ * = "TO. NEXT = VThe DCLAuSE.net i s f a i r l y i n t r i c a t e i n t h a t it accounts f o r PPs, r e l a t i v e pronouh clauses, i n f i n i t i v e modifiers, p a r t i c i p i a l c Wses and clauses introduced by relative c o n j u n c t i~n s . A PP is w e or more prepositiotrs followed by an NP. A RELCONJ s t a r t s with an RCONJ such as "while", "after1' e t c . and may be followed by a DCLAUSE or a CLAUSE. A r e l a t i v e , pronoun clause begins w i t h an o p t i o n a l r e l a t i v e pronoun and is followed by a pronoun clause which i s either a VP o r an NP fohlowed by a VG an6 o p t i o n a l DCLAUSES.For the moment we i n s i s t for computational economy t h a t a r e l a t i v e clause h e introduced by a r e l a t i v e pronoun; a c t u a l l y the f m of a pronominal clause is s u f f i c i e n t l y rwell defined that PUSH PRONCLAUSE can i d e n t i f y i t without a r e l a t i v e pronoun -in most cases.When a pronoun i s found, here or i n an NP, thef u n c t i o n ANTECedent is called to scan t h e l i s t of prece'ding nouns, t o f i n d t h e b e s t agreement-in person, number, and gender. 
The f u n c t i o n VBMATCA on the e x i t from node D2i s a f u n c t i o n t h a t seeks t o f i n d t h e head that the p a r t i c i p i a l or i n f i n i t i v e phrase is modifying. As in PREPMATCH, the head noun is frequently not the (C1 TOK BALANCE, SUBJ (C2 TOK CLOWN, DET DEF), OBJ ( C 3 TOK POLE, DET INDEF) , COWS (C4 TOK HANDS, POSSBY C2, PREP (XI))i e e f f e c t a£ t h e sernaotic f u n c t i o n s f o r t h i s s e n t e n c e is t o produce t h e f t l l o w i n g :(C2 TOK CLOWN, SUPPORT C 3 , S I Z E 3 , ATTACH ((22 C 3 ) ) XY XY ( C 3 'OK POLE, SUPPORTBY.C2, SIZE 3 , ATTACH (C-3 C 2 ) )xy SY c a n d i d a t e can dominate t h e PF i n q u e s t i o n . For example "ON1' is defihed as a LISP f u n c t i o n with two arguments. When c a l l e d w i t h '"clown'! and "nose", (RIGHTOF N 1 N2) ) ) (RIGHTOF (LAMBDA (N1 N2) (CO'ND ( (AND (GET N 1 "PICT) (GET N 2 "PIcT) ) (PUT N 2 "RIGHTOF Nl) (PUT N1 "LEFTOF N 2 ) ) (T NIL) 1 1)Thue. "a b e s i d e b" is q u i t e arbitrarily interpret& to mean "b ie ~o the r i g h t of a". RIGHTOF r e q u i r e s t h a t i t s two arguments be p i c t u r " r e t o r t " means '.'answer sharply" ~h i c h means "comutlicate s h a r p l y ii~ response t o a communi.cationt'. The verB mdy imply Gpecial arguments i n another way;t h e verb, "sail", implies t h a t "someone caused a v e h i c l e t a move through a The conditions or rules For transforming these syntactic arguments * i n t o semantic r o l e s are as Collows:SBBJ A OBJ + mi +-SUBJ. T H~ 4 OBJ SUBJ -+ TH2 + SUBJ OBJ + TH2 + O B J For each ZOMP, N T H~ A ON A PSCT(C0MF) -+ TH1 +-COMP nlSUPPORTPT1 A IN v ONVWITH A PART(C0MP THL) + SUPPORTPTl + COMP dBALPT2 A ON A PART(C0MP TH2), -+ BALPT2 + COMP T + PRINT (LIST "UNDEFINED COLON COW)For t h e following two example sentences, the above rqles result iri t h e bindings (shown: A ololn with a pole in his hands b'alances on a pedestal.. .Ex 1 A c,T h e e a r l i e r action sf rhe preposition semantic f u n c t i o n s will have reduced ' N o t e s : 'x -+~y X imp'lies Y by the following rules: If a t r i a n g l e has a l s~ been d e f i n e d , we can then define: H0USE:SIZE; SQUARE:SIZE; F0RWW:SIZE; TR1ANGLE:SIZE;x 4 y SET x t o y A x Not X F'(x) Evaluate function F of X n( and V o rANIM (StJBJ) + A + SUBJ FORCE (SU3J) + 1 + SUBJ VEHIC (SUBJ) -, VEHIC + StJBJ S V M I C /\ m I C (ow) + W B I C + OBJ MEDIUM (OBJ) + MED + OBJ OBJ -+ TH + OBJ FOR EACH COhF a MED A IN V ONLV THROUGH A MEDIM(CObfP) + MED + COMP % VEHIC A IN V ON VIt is the convenience and s i m p l~c i t y e o f these LOGO conventi'ons that convinced m e that drau3,ng p i c t u r e s from sentences would n o t add any g t e a t c o m p l e x i t y t o a basic l a n g u a g e a n a l y s i s system.. LOGO offers We have also noticed that the semantic network that is produced as a result of semantic analysis can be seen as a problem graph by the functions t h a t organize images and it is apparent that as these graphs come to contain larger numbers of images, i t will be necessa'ry to Bruce, wrtram C . , "A model f o r temporal r e f e r e n c e s and its a p p l i c a t i o n i n a q u e s t i o n answering program." ~r t i f i c i a l I n t e l l i g e n c e , 3 1972 pp 1-25. 1 ) ( S U F I n * ) ( T o ~1 3 2 ) )* I 1 ) ( L I F T R nL'X(GETC7 1 v G ) 1I T 0 V G ) ) ( C A T V T (HOP V V '~' ) ) (T?? V A U X (-GETR P U X )V ) ) 1 L L AOJ ( q 1 2 E 4 ) HOP V O l ) 1 ) wvl ( T S T V V T {SET!? 
V 9 ) (sFTR HO (MnKETQK (SETQ b L f T [ C b h S fll S f ) ) ( T O , V G l l l 1 ) ( v( B I G AnJ ( S I Z E 7 ) ( C L A S S S I Z E ) ( L A R G E ADJ (~T Z E 6 ) ( C L P S~ sIZF) ) ( L T T T L E ADLJ,(sIZE 3 ) ~C L A ' F S SIZE) J ( S M A~R Q P PREP ( 7 T I ) NBR PL 1 ( P A R T 7 ) B s ( I N k17t-i) 1 ) (HFAD r\l ( N R R S I N G ) t P 4 F T t-) ) ( N O S E N ( N Y R ~I N C ) ~P~R T 7 ) ) t~o~k N (PIC7 f ) ( N R R SING)) (HEAO M ( N R R $ I N G ) ( P P R T 7 ) ) (PFET N ( : dynamic process model t h a t can be operated t o produce successive s t a t e s deecribed by t h e E n g l i s h . The p r i n c i p l e s used i n t h e system are.a c o n c i s e representation of my g l e a n i n g s from r e c e n t l i t e r a t u r e dnd of c o u r s e from work of my own and my students. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
595
0.016807
null
null
null
null
null
null
null
null
db91c730625a35c942f872b4d658489877cfb96b
219303905
null
Conceptual Analysis: Inventory and Analysis of Terminology in Political Science
This committee (which includes political scientists, sociologists, anthropologists, linguists, and philosophers) has been moving toward several objectives of concept clarification in political and social analysis. COCTA has organized panels at many political science and sociology associations, including its formal association with the Comparative Interdisciplinary Studies Section of the International Studies Association as the Internet on Conceptual and Terminological Analysis. Over the half-decade of its existence, COCTA has developed several separate stages of conceptual analysis, including special foci on metalinguistics, concept construction and reconstruction, and clarification of the theoretical usages of concepts. Underlying these and other interests is a prerequisite need for an inventory of concepts-in-use. The rationale for attempting to develop the inventory, and discussion of its potential usages, are fully stated in my Commencement of a Systematic Concept Collection for COCTA. This statement sets forth the description of the resulting official COCTA Concept Inventory.
{ "name": [ "Graham, Jr., George J." ], "affiliation": [ null ] }
null
null
null
1975-07-01
0
0
null
null
null
null
(If the term is not English, it should be followed by a comma and the closest English translation.) (If the term and definition are in a language other than English, the definition should be followed by an EXACT English translation.) (The USPSIS is a special abstracting and retrieval system of political science articles, books, papers, etc.) The clarification of concepts will inevitably lead to restatements of definitions from the literature, to metalinguistic information worth storing, etc. Any restatements not contained in papers, articles, books, etc., can be sent in the same COCTA format as the above with the third category filled in as a COCTA PARTICIPANT RESTATEMENT. If the restatement is in a form subject to citation, it is simply entered as any other concept-in-use. Because the COCTA Concept Inventory is designed to facilitate research and concept clarification in the social and related sciences, the COCTA Board and the Director of the COCTA Concept Inventory hope to draw upon and share the mutual rewards and costs with active scholars. The enterprise depends upon scholars taking the time to record the concepts they are using and promises, in return, to facilitate the efforts of scholars by providing an expanding list of concept meanings-in-use. General information about activities can be received from Giovanni Sartori, COCTA Chairman, Istituto di Scienza Politica, Università degli Studi di Firenze, 48, via Laura, 50121 Firenze, Italy, or Fred Riggs, COCTA Secretary, Department of Political Science, University of Hawaii, Honolulu, Hawaii 96822.
moving toward several objectives of concept clarification in political and social analysis. COCTA has organized panels at many political science and sociology associations, including its formal association with the Comparative Interdisciplinary Studies Section of the International Studies Association as the Internet on Conceptual and Terminological Analysis. Over the half-decade of its existence, COCTA has developed several separate stages of conceptual analysis, including special foci on metalinguistics, concept construction and reconstruction, and clarification of the theoretical usages of concepts. Underlying these and other interests is a prerequisite need for an inventory of concepts-in-use. The rationale for attempting to develop the inventory, and discussion of its potential usages, are fully stated in my Commencement of a Systematic Concept Collection. This statement sets forth the description of the resulting official COCTA Concept Inventory. The inventory is a rather ambitious project that will depend upon the contributions of interested scholars. It will begin with special focal points within political science and sociology as a pilot project. The logic of this pilot collection, however, is to provide a framework within which the collection can be expanded into other social sciences and related fields in the humanities. The immediate task is to commence the collection of social science concepts-in-use and to demonstrate the inventory's utility. Since the inventory can be commenced only by volunteers, the aid of scholars from several disciplines is essential to its success. Any concepts can be listed by interested scholars. The present procedures for entering concepts into the inventory are simple. Scholars in the fields record concepts and related information according to the inventory's format and mail them to me. These materials will be edited and sent to Carl Beck at the University of Pittsburgh, where the concepts and information will be recorded and stored (the Pittsburgh ...). (Pittsburgh: University Center for International Studies, No. 9, 1974; see also the other COCTA papers listed therein.) The final design of the collection has seriously benefitted from comments from Fred W. Riggs, from those who attended a special workshop on the inventory at the 1975 International Studies Association Meeting in Washington (including Carl Beck, James Bjorkman, Judy Bertelsen, Ray Corsado, David Hays, Ray Johnston, R. J. Kirkbride, David Nasatir, Stephenie Neuman, Jonathan Pool, Charles Powell, Fred Riggs, Henry Teurie, Theodore Bukahara, and Alan Zuckerman), and special responses from David Hays and Glenda Patrick. Except for the labor and postage costs, to be absorbed by scholars one way or another, Beck's technical and storage assistance permits commencing the inventory without funds. Once the inventory is seriously commenced, funds should quickly follow. For each definition of a concept from the literature, the following information should be recorded by typing the information on 8½ x 11 inch paper. The identification of a field and its contents should follow as below, with the information replacing the field descriptions. The information for some fields may either not be available or not be relevant, but no record should be omitted. Each definition of a concept will be assigned an entry number when placed in the inventory because of multiple definitions for a specific term, but this will not affect the records sent from the field.
For example., "Revolution i s d e f i n e d only for u s e when a n a l y z i n g t h i r d world n a t i o n s from t h e perspective of demographic measures. " These descriptions should attempt LO characterize the type and level of theory employed as completely as is possible. Several sentences can be used. Retrieved definitions then CAN be limited to only those concepts which ALSO have description terms of interest in this file For example: REVOLUTION/THIRD WORL,D/DEMOGRAPHIC.(Since the collection will be stored in the same retrieval network as USPSIS, the APS Thesaurus terms provide useful guides for types of descriptors that can be used in both systems .) of the thesaurus should be listed, If in more than one, a c o m a should separate each listing. IF THE TERM I S KNOWN NOT TO BE LISTED I N A THESAURUS, the recorder is. asked to select the term(s) closest to the assigned term and list it, followed by the thesaurus s name (e .g. ,. The internal structure of the thesaurus will provide, without recording for the storage system, broader, narrower, and related TERMS, in contrast with the recorder-listed set of related CONCEPTS recorded under 4.
Main paper: the term used by the author to reference a concept, e.g., "consensus". (If the term is not English, it should be followed by a comma and the closest English translation.) (If the term and definition are in a language other than English, the definition should be followed by an EXACT English translation.) (The USPSIS is a special abstracting and retrieval system of political science articles, books, papers, etc.) English-language descriptions of the use of the concept should be recorded. For example: "Revolution is defined only for use when analyzing third world nations from the perspective of demographic measures." These descriptions should attempt to characterize the type and level of theory employed as completely as is possible. Several sentences can be used. Retrieved definitions then CAN be limited to only those concepts which ALSO have description terms of interest in this file; for example: REVOLUTION/THIRD WORLD/DEMOGRAPHIC. (Since the collection will be stored in the same retrieval network as USPSIS, the APS Thesaurus terms provide useful guides for the types of descriptors that can be used in both systems.) The name of the thesaurus should be listed; if the term appears in more than one, a comma should separate each listing. IF THE TERM IS KNOWN NOT TO BE LISTED IN A THESAURUS, the recorder is asked to select the term(s) closest to the assigned term and list it, followed by the thesaurus's name. The internal structure of the thesaurus will provide, without recording for the storage system, broader, narrower, and related TERMS, in contrast with the recorder-listed set of related CONCEPTS recorded under 4. The name and location of the individual recording the concept's definition: The clarification of concepts will inevitably lead to restatements of definitions from the literature, to metalinguistic information worth storing, etc. Any restatements not contained in papers, articles, books, etc., can be sent in the same COCTA format as the above with the third category filled in as a COCTA PARTICIPANT RESTATEMENT. If the restatement is in a form subject to citation, it is simply entered as any other concept-in-use. Because the COCTA Concept Inventory is designed to facilitate research and concept clarification in the social and related sciences, the COCTA Board and the Director of the COCTA Concept Inventory hope to draw upon and share the mutual rewards and costs with active scholars. The enterprise depends upon scholars taking the time to record the concepts they are using and promises, in return, to facilitate the efforts of scholars by providing an expanding list of concept meanings-in-use. General information about activities can be received from Giovanni Sartori, COCTA Chairman, Istituto di Scienza Politica, Università degli Studi di Firenze, 48, via Laura, 50121 Firenze, Italy, or Fred Riggs, COCTA Secretary, Department of Political Science, University of Hawaii, Honolulu, Hawaii 96822. The committee has been moving toward several objectives of concept clarification in political and social analysis. COCTA has organized panels at many political science and sociology associations, including its formal association with the Comparative Interdisciplinary Studies Section of the International Studies Association as the Internet on Conceptual and Terminological Analysis. Over the half-decade of its existence, COCTA has developed several separate stages of conceptual analysis, including special foci on metalinguistics, concept construction and reconstruction, and clarification of the theoretical usages of concepts. Underlying these and other interests is a prerequisite need for an inventory of concepts-in-use. The rationale for attempting to develop the inventory, and discussion of its potential usages, are fully stated in my Commencement of a Systematic Concept Collection. This statement sets forth the description of the resulting official COCTA Concept Inventory. The inventory is a rather ambitious project that will depend upon the contributions of interested scholars. It will begin with special focal points within political science and sociology as a pilot project. The logic of this pilot collection, however, is to provide a framework within which the collection can be expanded into other social sciences and related fields in the humanities. The immediate task is to commence the collection of social science concepts-in-use and to demonstrate the inventory's utility. Since the inventory can be commenced only by volunteers, the aid of scholars from several disciplines is essential to its success. Any concepts can be listed by interested scholars. The present procedures for entering concepts into the inventory are simple. Scholars in the fields record concepts and related information according to the inventory's format and mail them to me. These materials will be edited and sent to Carl Beck at the University of Pittsburgh, where the concepts and information will be recorded and stored (the Pittsburgh ...). (Pittsburgh: University Center for International Studies, No. 9, 1974; see also the other COCTA papers listed therein.) The final design of the collection has seriously benefitted from comments from Fred W. Riggs, from those who attended a special workshop on the inventory at the 1975 International Studies Association Meeting in Washington (including Carl Beck, James Bjorkman, Judy Bertelsen, Ray Corsado, David Hays, Ray Johnston, R. J. Kirkbride, David Nasatir, Stephenie Neuman, Jonathan Pool, Charles Powell, Fred Riggs, Henry Teurie, Theodore Bukahara, and Alan Zuckerman), and special responses from David Hays and Glenda Patrick. Except for the labor and postage costs, to be absorbed by scholars one way or another, Beck's technical and storage assistance permits commencing the inventory without funds. Once the inventory is seriously commenced, funds should quickly follow. For each definition of a concept from the literature, the following information should be recorded by typing the information on 8½ x 11 inch paper. The identification of a field and its contents should follow as below, with the information replacing the field descriptions. The information for some fields may either not be available or not be relevant, but no record should be omitted. Each definition of a concept will be assigned an entry number when placed in the inventory because of multiple definitions for a specific term, but this will not affect the records sent from the field. Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
595
0
null
null
null
null
null
null
null
null
3f5095fd9dddc7a3d9576d99504fc3b8cffdace6
219302551
null
A Report on the Tutorial on Computational Semantics, Institute for Semantics and Cognitive Studies, {V}illa {H}eleneum, {L}ugano-{C}astagnola
The Institute, a branch of the Fondazione dalle Molle, is carrying on research on artificial intelligence (AI); about ten scholars devote themselves to the study of communication between man and machine, under the direction of Manfred Wettler. The tutorial was a week of lectures, seminars, and discussions conducted by the staff of the Institute, supplemented by evening discussions and presentations of their own results by participants. About 100 persons from Germany,
{ "name": [ "Hanon, Suzanne and", "Koch, Gregers and", "S{\\o}ndergaard, Georg" ], "affiliation": [ null, null, null ] }
null
null
null
1975-07-01
10
0
null
Top-down parsing is the reverse procedure, starting with the generations and continuing from left to right until the last word is reached. Another important pair of technical terms is BREADTH-FIRST and DEPTH-FIRST. Breadth-first is the parallel treatment of all possible alternative structures at a given time, none of which is given precedence. In depth-first parses, the alternative structures are treated sequentially. So far the description may apply to any kind of parsing, but it was Wilks's aim to demonstrate parsing procedures where the structures are not syntactic but semantic. He described his own view of semantics as a version of the "meaning is procedures" attitude, i.e. the procedures of its application give a parsed structure its significance. After mentioning what he called the "problem of natural language", by which he meant the problem of systematic ambiguity, Wilks gave a brief historical sketch of the first approaches to machine translation, the failure of which he put down to the ambiguity problem. Terry Winograd has proposed a distinction between "first"- and "second"-generation AI language systems. This distinction, which seems now to be widely accepted, also lies behind the survey below, where the systems of Winograd and Woods are considered first-generation and those of Simmons, Schank, and Charniak second-generation. Charniak then examined the problem of when we make inferences. There are two obvious occasions when we may make one: 1. When a question is asked which requires an inference to be made (question time). 2. When the system has been given enough input information to make the inference (read time). Although inference making restricted to question time would seem to be more economical, since inference is done only when we ..., the semantic net structures here tend to become very complex. [Diagrams comparing the alternative approaches appeared here.] It is not clear that any of the three approaches give really practical (or intuitively satisfying) results. What we need at present is a theory of more complex actions. For example, how do we link the descriptions of the various substeps of the process of cake making into a single description of the overall action of making a cake? There are those who claim that all knowledge is stored in the form of procedures and there are those who claim that it is stored as a collection of facts. The system knows how to simulate various human actions such as toasting bread, making spaghetti, or cleaning up the kitchen. The information about how to perform these simulations is stored as procedures. However, these procedures can be used as data by other parts of the system to answer such questions as "How do you make a ham and cheese sandwich?", "How many utensils do you use if you make a mushroom omelette?" or "Why did Don use a knife?" SEMANTICS IN LINGUISTICS: SEMANTIC MARKERS AND SELECTIONAL RESTRICTIONS. Hayes discussed in detail the influential paper by Katz and Fodor (1963).
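As a hedged illustration of the breadth-first/depth-first contrast described above (a toy search over lexically ambiguous tag assignments; the lexicon, tags, and functions are my own, not the tutorial's):

    from collections import deque

    # Each node is (partial_analysis, remaining_words); alternatives arise
    # when a word is lexically ambiguous. Toy lexicon only.
    LEXICON = {"time": ["N", "V"], "flies": ["N", "V"], "fast": ["ADJ", "ADV"]}

    def expand(analysis, words):
        w, rest = words[0], words[1:]
        return [(analysis + [(w, tag)], rest) for tag in LEXICON.get(w, ["?"])]

    def search(words, depth_first=False):
        frontier = deque([([], words)])
        complete = []
        while frontier:
            # Breadth-first keeps all alternatives alive in parallel;
            # depth-first commits to one alternative and backtracks.
            analysis, rest = frontier.pop() if depth_first else frontier.popleft()
            if not rest:
                complete.append(analysis)   # a full alternative structure
                continue
            frontier.extend(expand(analysis, rest))
        return complete

    print(search(["time", "flies", "fast"]))          # all 8 alternatives
    print(search(["time", "flies", "fast"], True))    # same set, found sequentially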
null
null
null
null
Main paper: Tutorial on Computational Semantics: Top-down parsing is the reverse procedure, starting with the generations and continuing from left to right until the last word is reached. Another important pair of technical terms is BREADTH-FIRST and DEPTH-FIRST. Breadth-first is the parallel treatment of all possible alternative structures at a given time, none of which is given precedence. In depth-first parses, the alternative structures are treated sequentially. So far the description may apply to any kind of parsing, but it was Wilks's aim to demonstrate parsing procedures where the structures are not syntactic but semantic. He described his own view of semantics as a version of the "meaning is procedures" attitude, i.e. the procedures of its application give a parsed structure its significance. After mentioning what he called the "problem of natural language", by which he meant the problem of systematic ambiguity, Wilks gave a brief historical sketch of the first approaches to machine translation, the failure of which he put down to the ambiguity problem. Terry Winograd has proposed a distinction between "first"- and "second"-generation AI language systems. This distinction, which seems now to be widely accepted, also lies behind the survey below, where the systems of Winograd and Woods are considered first-generation and those of Simmons, Schank, and Charniak second-generation. Charniak then examined the problem of when we make inferences. There are two obvious occasions when we may make one: 1. When a question is asked which requires an inference to be made (question time). 2. When the system has been given enough input information to make the inference (read time). Although inference making restricted to question time would seem to be more economical, since inference is done only when we ..., the semantic net structures here tend to become very complex. [Diagrams comparing the alternative approaches appeared here.] It is not clear that any of the three approaches give really practical (or intuitively satisfying) results. What we need at present is a theory of more complex actions. For example, how do we link the descriptions of the various substeps of the process of cake making into a single description of the overall action of making a cake? There are those who claim that all knowledge is stored in the form of procedures and there are those who claim that it is stored as a collection of facts. The system knows how to simulate various human actions such as toasting bread, making spaghetti, or cleaning up the kitchen. The information about how to perform these simulations is stored as procedures. However, these procedures can be used as data by other parts of the system to answer such questions as "How do you make a ham and cheese sandwich?", "How many utensils do you use if you make a mushroom omelette?" or "Why did Don use a knife?" SEMANTICS IN LINGUISTICS: SEMANTIC MARKERS AND SELECTIONAL RESTRICTIONS. Hayes discussed in detail the influential paper by Katz and Fodor (1963). He concluded that their semantic theory is not ... We are, however, aware that this is an inherent and recurring problem at such gatherings, where people with different qualifications meet to discuss common problems. We would like to ... Appendix:
null
null
null
null
{ "paperhash": [ "charniak|toward_a_model_of_children's_story_comprehension", "hewitt|planner:_a_language_for_proving_theorems_in_robots", "raphael|sir:_a_computer_program_for_semantic_information_retrieval", "katz|the_structure_of_a_semantic_theory" ], "title": [ "Toward a model of children's story comprehension", "PLANNER: A Language for Proving Theorems in Robots", "SIR: A COMPUTER PROGRAM FOR SEMANTIC INFORMATION RETRIEVAL", "The structure of a semantic theory" ], "abstract": [ "Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1972. Ph.D.", "PLANNER is a language for proving theorems and manipulating models in a robot. The language is built out of a number of problem solving primitives together with a hierarchical control structure. Statements can be asserted and perhaps later withdrawn as the state of the world changes. Conclusions can be drawn from these various changes in state. Goals can be established and dismissed when they are satisfied. The deductive system of PLANNER is subordinate to the hierarchical control structure in order to make the language efficient. The use of a general purpose matching language makes the deductive system more powerful.", "SIR is a computer system, programmed in the LISP language, which accepts information and answers questions expressed in a restricted form of English. This system demonstrates what can reasonably be called an ability to \"understand\" semantic information. SIR''s semantic and deductive ability is based on the construction of an internal model, which uses word associations and property lists, for the relational information normally conveyed in conversational statements. A format-matching procedure extracts semantic content from English sentences. If an input sentence is declarative, the system adds appropriate information to the model. If an input sentence is a question, the system searches the model until it either finds the answer or determines why it cannot find the answer. In all cases SIR reports its conclusions. The system has some capacity to recognize exceptions to general rules, resolve certain semantic ambiguities, and modify its model structure in order to save computer memory space. Judging from its conversational ability, SIR is more \"intelligent\" than any existing question-answering system. The author describes how this ability was developed and how the basic features of SIR compare with those of other systems. The working system, SIR , is a first step toward intelligent machine communication. The author proposes a next step by describing how to construct a more general system which is less complex and yet more powerful than SIR . This proposed system contains a generalized version of the SIR model, a formal logical system called SIR1 , and a computer program for testing the truth of SIR1 statements with respect to the generalized model by using partial proof procedures in the predicate calculus. The thesis also describes the formal properties of SIR1 and how they relate to the logical structure of SIR .", "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected].. Linguistic Society of America is collaborating with JSTOR to digitize, preserve and extend access to Language. 1. Introduction. 
This paperl does not attempt to present a semantic theory of a natural language, but rather to characterize the form of such a theory. A semantic theory of a natural language is part of a linguistic description of that language. Our problem, on the other hand, is part of the general theory of language, fully on a par with the problem of characterizing the structure of grammars of natural languages. A characterization of the abstract form of a semantic theory is given by a metatheory which answers such questions as these: What is the domain of a semantic theory? What are the descriptive and explanatory goals of a semantic theory? What mechanisms are employed in pursuit of these goals? What are the empirical and methodological constraints upon a semantic theory? The present paper approaches the problem of characterizing the form of semantic theories by describing the structure of a semantic theory of English. There can be little doubt but that the results achieved will apply directly to semantic theories of languages closely related to English. The question of their applicability to semantic theories of more distant languages will be left for subsequent investigations to explore. Nevertheless, the present investigation will provide results that can be applied to semantic theories of languages unrelated to English and suggestions about how to proceed with the construction of such theories. We may put our problem this way: What form should a semantic theory of a natural language take to accommodate in the most revealing way the facts about the semantic structure of that language supplied by descriptive research? This question is of primary importance at the present stage of the development of semantics because semantics suffers not from a dearth of facts about meanings and meaning relations in natural languages, but rather from the lack of an adequate theory to organize, systematize, and generalize these facts. Facts about the semantics of natural languages have been contributed in abundance by many diverse fields, including philosophy, linguistics, philology, and …" ], "authors": [ { "name": [ "Eugene Charniak" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "C. Hewitt" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "B. Raphael" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "J. Katz", "J. Fodor" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null, null, null ], "s2_corpus_id": [ "62620723", "15590266", "62082148", "9860676" ], "intents": [ [], [], [ "methodology" ], [] ], "isInfluential": [ false, false, false, false ] }
null
595
0
null
null
null
null
null
null
null
null
69165058b3ae794aa3e258d31161a3b086dcb7d5
219304199
null
A Case History in Computer Exploration of Fast Speech Rules
In conversational speech, words run together and interact, causing their phonological forms to differ from their citation forms. Fast speech rules attempt to describe these changes as speech becomes faster and more casual. In developing any set of phonological rules, computerized grammar testers are a useful and important aid. They necessitate a precise, consistent formulation of the rules and allow the generation of sample derivations. In applying these rules to a diverse set of utterances, we can first confirm that the rules really do apply where we expect them to, and then experiment with various rule orderings to observe their effects. The Phonological Grammar Tester of Friedman and Morin was used to test two sets of fast speech rules from the ARPA Speech Understanding Research community. Working with these rules led to certain observations about the interactions and nature of fast speech rules in general. In addition to testing these two sets of fast speech rules, we were also interested in the problems of testing such a grammar with this program. Appendices include an overview of the grammar tester and final output from our testing.
{ "name": [ "Moran, Douglas B." ], "affiliation": [ null ] }
null
null
null
1975-07-01
2
1
null
null
null
null
null
We had few transcriptions of fast speech forms, so in ... Graph of partial ordering (linearized): assimilation; [r]-flapping; vowel reduction; schwa deletion; <ing> reduction; alveolar flapping; glottal stop formation. ... existing in slower, casual speech, but not in fast speech. [Machine output from the grammar tester followed: distinctive-feature definitions for the stops (P, B, T, D, K, G), fricatives, affricates (JH, CH), nasals (M, N), and liquids (L); the rule statements, among them schwa deletion, syllabicization, regressive assimilation, dental deletion, dark [l], [r]-flapping, and [r]-devoicing; a lexicon of test words such as "left", "reference", "chocolate", "and", "front", "soften", "sudden", "cotton", "nothing", and "introduction"; and rule-application traces listing the rules applied (REDVOW, SYLLAB, NASVOW, REGVOICE, PROARTIC, NASRED, etc.) and the derivation trees read by FTRIN.]
Main paper: We had few transcriptions of fast speech forms, so in ... Graph of partial ordering (linearized): assimilation; [r]-flapping; vowel reduction; schwa deletion; <ing> reduction; alveolar flapping; glottal stop formation. ... existing in slower, casual speech, but not in fast speech. [Machine output from the grammar tester followed: distinctive-feature definitions for the stops, fricatives, affricates, nasals, and liquids; the rule statements, among them schwa deletion, syllabicization, regressive assimilation, dental deletion, dark [l], [r]-flapping, and [r]-devoicing; a lexicon of test words; and rule-application traces, including a sample derivation for the utterance "want to go" (S: V "want", P "to", V "go") with ANTEST calls, merges, and erasures, and the derivation trees read by FTRIN.] Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
595
0.001681
null
null
null
null
null
null
null
null
df78e32399133ca696bec85787300049d704c47e
219310187
null
{``}Formulae{''} in Coherent Text: Linguistic Relevance of Symbolic Insertions
Some difficulties in automatic analysis and translation bound to symbolic insertions in mathematical texts are discussed. Rules dealing with these difficulties are proposed. These rules are based on the use of the whole text of the article incorporating a formula. For satisfactory automatic analysis of texts, it is necessary to provide in the dictionary exhaustive semantical information ascribed to its entries. But this information can appear to be insufficient in cases where the meaning of linguistic elements is ascribed to their occurrences by the very text in which they are encountered: cf., for example, pronouns. The other example is provided by symbolic insertions in mathematical texts, which we shall call "formulae". So not only "a = b", "X ≥ Y", etc., but also
{ "name": [ "Dreizin, Felix" ], "affiliation": [ null ] }
null
null
null
1975-07-01
0
0
null
null
null
null
null
Let us try to translate from English to Russian the sentence ... Similar examples are provided by other languages: "In jeder Umgebung V von X ..." ("In every neighborhood V of X ..."); "Let R be a ring with a unity 1." (2) "We define j and k by j = m + n; k = mn." (5) The "direct" translation of (5) to Russian: ...
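Before any context-based rules can apply, the analyzer must at least recognize where a symbolic insertion begins and ends so the surrounding sentence can be analyzed with the formula held as a single opaque token. A minimal sketch of that step, assuming a crude regular-expression notion of "formula" (my own illustration, not the paper's rules):

    import re

    # Replace symbolic insertions with placeholder tokens so ordinary
    # sentence analysis can proceed; the regex is a rough illustration.
    FORMULA = re.compile(r"\b(?:[a-zA-Z]\s*[=+\-<>]\s*[\w+\-* ]+)")

    def shield_formulae(sentence):
        formulae = []
        def repl(m):
            formulae.append(m.group(0))
            return f"<F{len(formulae)}>"
        return FORMULA.sub(repl, sentence), formulae

    text = "We define j and k by j = m + n; k = mn."
    shielded, fs = shield_formulae(text)
    print(shielded)   # "We define j and k by <F1>; <F2>."
    print(fs)         # the extracted insertions, kept for later analysis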
Main paper: Let us try to translate from English to Russian the sentence ... Similar examples are provided by other languages: "In jeder Umgebung V von X ..." ("In every neighborhood V of X ..."); "Let R be a ring with a unity 1." (2) "We define j and k by j = m + n; k = mn." (5) The "direct" translation of (5) to Russian: ... Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
595
0
null
null
null
null
null
null
null
null
39279898ecd83f9502e994f4d694bea48db70413
219305258
null
Recent Computer Science Research in Natural Language Processing
The machine translation problem has recently been replaced by much narrower goals, and computer processing of language has become part of artificial intelligence (AI), speech recognition, and structural pattern recognition. These are each specialized computer science research fields with distinct objectives and assumptions. The narrower goals involve making it possible for a computer user to employ a near natural-language mode for problem-solving, information retrieval, and other applications. Natural computer responses have also been created, and a special term, "understanding", has been used to describe the resulting computer-human dialogues. The purpose of this paper is to survey these recent developments to make the AI literature accessible to researchers mainly interested in computation on written text or spoken language.
{ "name": [ "Klinger, Allen" ], "affiliation": [ null ] }
null
null
null
1975-07-01
1
1
null
A program to detect "meaning" (logical consequences of word interpretations) must also perform grammatical operations for certain words to determine a part of speech (noun, verb, adjective, etc.). One method makes a tentative assignment, parses, then tests for plausibility via consistency with known facts. To reduce the complexity of this task, the designer limits the subset of language allowed or the "world" (i.e. the subject) discussed. The word "domain" sums up this concept; other terms for "restricted domain" are "limited scope of discourse", "narrow problem domain", and "restricted English framework". The limitation of vocabulary or context constrains the lexicon and semantics of the "language". The trend in the design of software for "natural-language understanding" is to deal with (a) a specialized vocabulary, and (b) a particular context or set of allowed interpretations in order to reduce processing time. Although computing results for several highly specialized problems [e.g. 7, 23] are impressive examples of language processing in restricted domains, they do not answer several key concerns. The systems cited in this section answer questions, perform commands, or conduct dialogues. Programs that enable a user to execute a task via computer in an on-line mode are generally called "interactive". Some systems are so rich in their language-processing capability that they are called "conversational"; a general discussion of "semantic" information and of computer programs involving "semantics" is provided there. The "question-answering" program systems described in ... The "blocks-world" system described in [7] contrasts with these in that it has sophisticated language-processing capability: it infers antecedents of pronouns and resolves ambiguities in input word strings regarding blocks on a table. The distinction between "interactive", "conversational", and "question-answering" is less important when the blocks-world is the domain. The computer-science contribution is a program to interact with the domain as if it could "understand" the input, in the sense that it takes the proper action even when the input is somewhat ambiguous. To resolve ambiguities the program refers to existing relationships among the blocks. The effect of [7] was to provide a sophisticated example of computer "understanding" which led to attempts to apply similar principles to speech inputs. (More detail on parallel developments in speech processing is presented later.) The early "language-understanding" systems, BASEBALL ... The former could "handle time questions" and used a bottom-up analysis method which allowed questions to be nested. For example, the question "Who is the commander of the battalion at Fort Fubar?" was handled by first internally answering the question "What battalion is at Fort Fubar?" The answer was then substituted directly into the original question to make it "Who is the commander of the 69th battalion?", which the system then answered. (... reports some second thoughts.) The work of many other groups could be added to this ... In all of the program systems described thus far, ... In most "understanding" programs, information on a primitive level of processing can be inaccurate; for example, the identification of a sound string "blew" can be inaccurately "blue". Subsequent processing levels combine identified primitives. If parts of speech are concerned, the level is syntactic; if meaning is involved, "semantic"; if domain is involved, the level is that of the "world".
Each level can be an aid in a deductive process, leading to "understanding" an input segment of language. Programs NOW EXIST which operationally satisfy most of the following points concerning "understanding" in narrow domains (emphasis has been added): "Perhaps the most important criterion for understanding a language is the ability TO RELATE THE INFORMATION CONTAINED in a sentence TO KNOWLEDGE PREVIOUSLY ACQUIRED ... MAY BE FITTED ... The memory structure in these programs may be regarded as semantic, cognitive, or conceptual structures ... these programs can make statements or answer questions based not only on the individual statements they were previously told, but also on THOSE INTERRELATIONSHIPS BETWEEN CONCEPTS that were built up from separate sentences as information was incorporated into the structure ..." [28, pp. ...]. This has been accomplished through clever (and lengthy) computer programming, and by taking advantage of structure inherent in special problem domains such as stacking blocks on a table. When such a system is used, a user might fail to get a fact or relationship because the natural-language subset chosen to represent his question was too rich, i.e., it includes a complex set of logical relationships not in the computer. Thus a block could result in a human-computer dialogue if the program has no logical connection between "garage" and "car" but only between "garage" and "house" (the program replies "OK" or "???" to user input sentences): I LIKE CHEVROLETS. ??? The computer failed to "understand" that there was no change of discourse subject. This is an example of a "semantic" failure which could be overcome by interaction. That is, the human user would need to input one more meaning or association of a valid word so that computer "understanding" may be ... Furthermore, this increase in time is added onto that which occurs when the size of the lexicon is expanded. As words are added, the number of trees that can be produced by the grammar's rewriting rules in an attempt to "recognize" a string expands rapidly. Hence in speech as in text processing, "understanding" exists via computer, yet it is not likely to lead to machine processing of truly natural language. Indeed the artificiality of speech "understanding" by computer is even greater than that of text input. The "moon rocks" ... where language, and probably spoken-language, "understanding" will be exhibited. These developments will occur through careful design of tasks and use of advances in computer technology. However, the general problem of machine "understanding" of natural language, whether text or speech, is not likely to be aided by these developments.
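The nested-question strategy described above (answer an embedded question first, then splice its answer back into the outer question) can be sketched as follows. This is a toy illustration with a made-up fact table; "Colonel Jones" and the lookup keys are hypothetical, not taken from the surveyed system:

    # Toy illustration of nested question answering by substitution.
    FACTS = {
        "what battalion is at fort fubar": "the 69th battalion",
        "who is the commander of the 69th battalion": "Colonel Jones",  # made-up name
    }

    def answer(question):
        q = question.lower().rstrip("?")
        if q in FACTS:
            return FACTS[q]
        # Rewrite "the battalion at fort fubar" via the inner question.
        if "the battalion at fort fubar" in q:
            inner = answer("what battalion is at fort fubar?")
            return answer(q.replace("the battalion at fort fubar", inner) + "?")
        return "I don't know."

    print(answer("Who is the commander of the battalion at Fort Fubar?"))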
A large body of research in computer science is devoted to language processing.
null
null
The computer literature discussed in this paper uses several linguistic terms in special ways; when there is a possibility of confusion, quotation marks will be used to identify technical terms in computer science. The term "understanding" is frequently used as a synonym for "the addition of logical relationships or semantics to syntactic processing". This use is substantially narrower than the word's implicit association with "human behavior implemented by computer"; the narrower use is introduced as a neutral reference point. The question of whether a computer program can operate in a human-like way is central to artificial intelligence. "Do current 'understanding' program systems show how extended human-like capability can be implemented using computers?" is a related pragmatic question. Initially this investigation sought to examine whether programs which "understand" language in the stipulated narrow sense are prototypes which could lead to expanded capability. Unfortunately, "language understanding" and its special subtopic "speech understanding" are insufficiently developed to permit profitable discussion of the original question. Hence an operational approach to the recent literature is taken here. This paper outlines how "language understanding" research has evolved and identifies key elements of program organization used to achieve limited computer "understanding". Current AI programs for language processing are organized by level and restricted to specified domains. This section presents those ideas and comments on the limitations that they entail. Three principal levels of language-processing software are: 1. "Lexical" (allowed vocabulary); 2. "Syntactic" (allowed phrases or sentences); 3. "Semantic" (allowed meanings). In practice all these levels must operate many times for the computer to interpret even a small portion, say two words, of restricted natural-language input. Programs that perform operations on each level are, respectively: 1. Word in a table? 2. Word string acceptable grammatically? 3. Word string acceptable logically? A minimal pipeline along these lines is sketched below.
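A minimal sketch of the three-level organization just described. The toy lexicon, grammar, and world model are my own illustration of the idea, not any surveyed system:

    # Toy three-level gate: lexical, syntactic, semantic.
    LEXICON = {"blocks": "N", "stack": "V", "red": "ADJ", "the": "DET"}
    GRAMMAR = {("V", "DET", "N"), ("V", "DET", "ADJ", "N")}   # allowed tag strings
    WORLD = {("stack", "blocks")}                             # allowed verb-object pairs

    def understand(words):
        # 1. Lexical: every word in the table?
        if not all(w in LEXICON for w in words):
            return "unknown word"
        tags = tuple(LEXICON[w] for w in words)
        # 2. Syntactic: tag string grammatical?
        if tags not in GRAMMAR:
            return "ungrammatical"
        # 3. Semantic: verb-object pair allowed in the domain?
        if (words[0], words[-1]) not in WORLD:
            return "meaningless in this world"
        return "OK"

    print(understand(["stack", "the", "red", "blocks"]))   # -> OK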
Main paper: . do specialized vocabularies have sufficient complexity to warrant comparison with true natural language?: 2 . Are current "understanding" programs, organized by level and using domain reatrictidn, extendable to true natural language?The realities are severe. Syntactic processing is interdependent with meaning and involves the allowed logical relationships among words %n the lexicon. Most natural-language software is highly developed at the "syntactic" level Howwer, the number of times the "syntactic" level must be ent'ered can grow explosively as the "naturalness" of the language to be processed increases. Success on artificial domains cannot imply a great deal about processing truly natural language. word string acceptable logically?: A program to detect "meaning" (logical consequences of word interpretations) must also perform grammatical operations for certain words to determine a part of speech (noun, verb, adjective, etc.)One method makes a tentative assigrlment, parses, then tests for plausibility via consistency with known facts.To reduce the complexity of this task, the designer limits the subset of language allower or the "world" (i.e. the subject)discussed. The word "domain" sums up this concept, other terms for "restricted domain" are "limited scope of discourse", "narrow problem domain", and "restricted English framework"The limitation of vocabulary or context constrains the lexicon and semantics of the "language". The trend i n t h e design of software for "natural-language understanding" is to deal with (a) a specialized vocabulary, and (b) a particular context or set of allowed interpretations in order to reduce processing time. Although computing results for several highly specialized problems Le,g. 7, 231 are impressive examples of language processing in restricted domains, they do not answer several key concerns.The systems cited in this section answer questions, perform commands, or conduct dialogues.Programs that enable a user to execute a task via computer in an on-line mode are generally called "interactive" Some systems are so rich in their language-processing capability that they are called "conversational" there provides a general discussion of "semantic information and computer programs involving "semantics"The "question-answering" program systems described in The "blocks-world" system described in [71 contrasts with these in that it has sophisticated language-processing capability It infers antecedents of pronouns and resolves ambiguities in input word strings regarding blocks on a table.The distinction between "interactive", "conversational", and "question-answering" is less important when the blocks-world is the. domain. The computer-science contribution is a program to interaet ,wfth the domain as if it could "underktand" the input, in the sense that it takes the proper action even when the input is somewhat ambiguous. To resolve ambiguities the program refers to existing relationships among the blocks.The effect of [71 was to provide a sophisticated example of computer "understanding" which led to attempts to apply similar principLes to speech inputs. (More detail on parallel developments in speech processing is presented later.)The early "language-understanding" systems, BASEBALL The former could "handle time questions" and used a bottom-up analysis method which allowed questions to be nested. For example, the question "Who is the commander of the battalion at Fort Fubar?" 
was handled by first internally answering the question "What battalion is at Fort Fubar?" The answer was then substituted directly into the original question to make it "Who is the commander of the 69th battalion?", which the system then answered. (... reports some second thoughts.) The work of many other groups could be added to this ... In all of the program systems described thus far, ... In most "understanding" programs, information on a primitive level of processing can be inaccurate; for example, the sound string "blew" can be inaccurately identified as "blue". Subsequent processing levels combine identified primitives. If parts of speech are concerned, the level is syntactic; if meaning is involved, "semantic"; if domain is involved, the level is that of the "world". Each level can be an aid in a deductive process, leading to "understanding" an input segment of language. Programs NOW EXIST which operationally satisfy most of the following points concerning "understanding" in narrow domains (emphasis has been added): "Perhaps the most important criterion for understanding a language is the ability TO RELATE THE INFORMATION CONTAINED in a sentence TO KNOWLEDGE PREVIOUSLY ACQUIRED ... MAY BE FITTED ... The memory structure in these programs may be regarded as semantic, cognitive, or conceptual structures ... these programs can make statements or answer questions based not only on the individual statements they were previously told, but also on THOSE INTERRELATIONSHIPS BETWEEN CONCEPTS that were built up from separate sentences as information was incorporated into the structure ..." [28, pp. ...]. This has been accomplished through clever (and lengthy) computer programming, and by taking advantage of structure inherent in special problem domains such as stacking blocks on a table. When such a system is used, a user might fail to get a fact or relationship because the natural-language subset chosen to represent his question was too rich, i.e., it includes a complex set of logical relationships not in the computer. Thus a block could result in a human-computer dialogue if the program has no logical connection between "garage" and "car" but only between "garage" and "house" (the program replies "OK" or "???" to user input sentences): I LIKE CHEVROLETS. ??? The computer failed to "understand" that there was no change of discourse subject. This is an example of a "semantic" failure which could be overcome by interaction. That is, the human user would need to input one more meaning or association of a valid word so that computer "understanding" may be achieved. Furthermore, this increase in time is added onto that which occurs when the size of the lexicon is expanded. As words are added, the number of trees that can be produced by the grammar's rewriting rules in an attempt to "recognize" a string expands rapidly. Hence in speech as in text processing, "understanding" exists via computer, yet it is not likely to lead to machine processing of truly natural language. Indeed the artificiality of speech "understanding" by computer is even greater than that of text input. The "moon rocks" [domain exemplifies the areas] where language, and probably spoken-language, "understanding" will be exhibited. These developments will occur through careful design of tasks and use of advances in computer technology. However, the general problem of machine "understanding" of natural language, whether text or speech, is not likely to be aided by these developments. Conclusions: A large body of research in computer science is devoted to language processing.
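The bottom-up nesting illustrated by the Fort Fubar example can be sketched as follows. The fact tables and relation names are hypothetical; the sketch shows only the substitute-the-inner-answer control flow described above.

# Sketch of bottom-up nested question answering, in the spirit of the
# Fort Fubar example. The fact tables are invented for illustration.

FACTS = {
    ("battalion-at", "fort fubar"): "69th battalion",
    ("commander-of", "69th battalion"): "Col. Smith",
}

def answer(relation, argument):
    return FACTS.get((relation, argument))

def nested_question(outer_relation, inner_relation, inner_argument):
    # First internally answer the embedded question ...
    inner_answer = answer(inner_relation, inner_argument)
    if inner_answer is None:
        return None
    # ... then substitute the answer into the outer question.
    return answer(outer_relation, inner_answer)

# "Who is the commander of the battalion at Fort Fubar?"
print(nested_question("commander-of", "battalion-at", "fort fubar"))
# -> Col. Smith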
1. Introduction: The computer literature discussed in this paper uses several linguistic terms in special ways; when there is a possibility of confusion, quotation marks will be used to identify technical terms in computer science. The term "understanding" is frequently used as a synonym for "the addition of logical relationships or semantics to syntactic processing". This use is substantially narrower than the word's implicit association with "human behavior implemented by computer"; the narrower use is introduced as a neutral reference point. The question of whether a computer program can operate in a human-like way is central to artificial intelligence. "Do current 'understanding' program systems show how extended human-like capability can be implemented using computers?" is a related pragmatic question. Initially this investigation sought to examine whether programs which "understand" language in the stipulated narrow sense are prototypes which could lead to expanded capability. Unfortunately, "language understanding" and its special subtopic "speech understanding" are insufficiently developed to permit profitable discussion of the original question. Hence an operational approach to the recent literature is taken here. This paper outlines how "language understanding" research has evolved and identifies key elements of program organization used to achieve limited computer "understanding". Current AI programs for language processing are organized by level and restricted to specified domains. This section presents those ideas and comments on the limitations that they entail. Three principal levels of language-processing software are: 1. "Lexical" (allowed vocabulary); 2. "Syntactic" (allowed phrases or sentences); 3. "Semantic" (allowed meanings). In practice all these levels must operate many times for the computer to interpret even a small portion, say two words, of restricted natural-language input. Programs that perform operations on each level are, respectively: 1. Is the word in a table? 2. Is the word string acceptable grammatically? 3. Is the word string acceptable logically? Appendix: 1. To enable "intelligent" processing by the computer ("artificial intelligence"); 2. To produce a more useful way ...
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
595
0.001681
null
null
null
null
null
null
null
null
4350fd940840986cbd46624dfde0f3d718448ca2
219310301
null
A Formal Psycholinguistic Model of Sentence Comprehension
clause, with only the semantic content regularly retained after the clause boundary is passed. The surface words (and a fortiori the syntactic structure) of the clause would tend to be erased after each clause boundary.(1) [*This paper is based on chapter VII of my doctoral dissertation (Reimold, forthcoming). I wish to thank Thomas G. Bever, James Higginbotham, and D. Terence Langendoen for helpful suggestions. (1) The "a fortiori" refers to the fact that the syntactic structure presumably contains surface words as terminal nodes. Hence if the syntax were regularly preserved, the surface words should remain easily accessible, too.] Another study supporting the clause as unit of processing is Abrams & Bever (1969). These authors found that reaction time to short bursts of noise ("clicks") superimposed on sentences was longer for clause-final clicks than for clause-initial ones. This would point to the clause as unit of perception, under the assumption that processing is more intensive towards the end of a perceptual unit, and that reaction time to external stimuli is a valid indicator of the intensity of internal processing. (For a review of other studies in support of the clausal processing theory, the reader is referred to Fodor, Bever & Garrett (1974), where arguments are also given for the clause as a decision point across which ambiguities are, normally at least, not carried.) Secondly, it seems that as we listen to speech, we simultaneously have access to both the syntactic and semantic properties of what we hear. That is, there appears to be parallel processing of the syntax and the semantics of a clause. One finding explained by this assumption is that so-called "irreversible" passive sentences like (1) are perceptually no more complex than their active counterparts (The girl picked the flower, in this case). By contrast, "reversible" passives like (2) take longer to verify vis-a-vis pictures than the corresponding active sentences (Slobin, 1966). (1) The flower was picked by the girl. (irreversible) (2) The boy was kicked by the girl.
{ "name": [ "Reimold, Peter" ], "affiliation": [ null ] }
null
null
null
1975-09-01
6
2
null
null
(4) A girl {with a green hat / who wore a green hat} greeted John.
null
(5) John ate the cake {afterwards / after the guests left}. Evidently, "with a green hat" in (4) is related to "who wore a green hat", and the adverb "afterwards" in (5) can be replaced by full adverbial clauses like "after the guests left". (1) It should be a clause-by-clause processor, where my notion of "clause" includes some things traditionally regarded as phrases; as soon as the interpretation of a clause is completed, its syntactic structure is erased. (2) There should be parallel syntactic and semantic processing of each clause's constituents. An example for the three stages is given in (13). (13) The boy laughed. We can now translate the structures in (13) into English. PSR: (THE x: BOY x)(Ey)(Et: PAST t)[LAUGH y t] ISR: (THE x: BOY x & HUMAN x & ~ADULT x & ...)(Ey)(Et: PAST t & ~FUTURE t & ...)[LAUGH y t & HUMAN y & ANIMATE y & ...] "The x such that x is a boy is involved in some event such that there is some y and some time t which is PAST, and y is laughing at time t." (14) (a) [the DD], (THE x: ...) (b) [boy N], (Ex)[BOY x] (c) [laughed MVB PAST], (Ey)(Et: PAST t)[LAUGH y t] Notice that each of the definitions consists again of a prefix and a matrix. An example is given in (37). (Here as elsewhere in this paper, "#" stands for initial and "$" for final clause boundary.) (45) # John believed $ ([compl] that the cake was poisoned $) If we assumed that t1 = t2 = t3 in (49), then the two conjoined phrases "and round and round" should be redundant in the same sense in which "Fido is a dog and is a dog and is a dog" is. However, (49) can quite naturally be interpreted ... To formalize this, we can make use of pattern matching. For instance, the encyclopedia would contain a pattern like (51), and there would furthermore be a meaning rule like (52). (52) (A, B, t1, t2)[CAUSE<A(..t1..), B(..t2..)> IMPL t1 CIRPREC t2] We need only make sure that the pattern (51) is activated by the two sentences "he broke his arm" and "he fell off his bike" in (50B), which can be done by calling up all patterns which are in the intersection of the main verbs of the two sentences. (For instance, "break" and "fell" both point to (51).) In effect, ... Another restriction seems to be that the Temporal Sequence ... While t1 must precede t2 in (54), they seem to be roughly simultaneous in (55), even though lighting a cigarette and leaving a room normally count as "instantaneous" events (see ...). The condition against conjunction by "and" is necessary since "and" can never mean "and before that ...". For instance, the syntax-sensitive rules have applied (and failed).
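As a minimal sketch of the proposed regime, the following Python fragment processes one clause at a time, builds a simplified rendering of a PSR-style formula, and then discards the parse. The toy lexicon and the flattened notation are assumptions for illustration, not the author's implementation.

# Sketch of clause-by-clause processing with erase-after-boundary:
# parse a clause, translate it into a PSR-like formula, then erase the
# syntactic structure so only semantic content survives the boundary.

LEXICON = {"the": ("DD", None), "boy": ("N", "BOY"), "laughed": ("MVB", "LAUGH")}

def parse_clause(words):
    """Parallel syntactic/semantic pass: tag each word and keep its predicate."""
    return [(w, *LEXICON[w]) for w in words]

def semantic_representation(parsed):
    """Build a PSR-like formula for a simple intransitive clause, as in (13)."""
    noun = next(p for _, tag, p in parsed if tag == "N")
    verb = next(p for _, tag, p in parsed if tag == "MVB")
    return f"(THE x: {noun} x)(Ey)(Et: PAST t)[{verb} y t]"

def process(clauses):
    memory = []
    for clause in clauses:          # one clause at a time
        parsed = parse_clause(clause.split())
        memory.append(semantic_representation(parsed))
        del parsed                  # syntax (and surface words) erased here
    return memory

print(process(["the boy laughed"]))
# -> ['(THE x: BOY x)(Ey)(Et: PAST t)[LAUGH y t]']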
null
Main paper: (5): John ate the cake {afterwards / after the guests left}. Evidently, "with a green hat" in (4) is related to "who wore a green hat", and the adverb "afterwards" in (5) can be replaced by full adverbial clauses like "after the guests left". (1) It should be a clause-by-clause processor, where my notion of "clause" includes some things traditionally regarded as phrases; as soon as the interpretation of a clause is completed, its syntactic structure is erased. (2) There should be parallel syntactic and semantic processing of each clause's constituents. An example for the three stages is given in (13). (13) The boy laughed. We can now translate the structures in (13) into English. PSR: (THE x: BOY x)(Ey)(Et: PAST t)[LAUGH y t] ISR: (THE x: BOY x & HUMAN x & ~ADULT x & ...)(Ey)(Et: PAST t & ~FUTURE t & ...)[LAUGH y t & HUMAN y & ANIMATE y & ...] "The x such that x is a boy is involved in some event such that there is some y and some time t which is PAST, and y is laughing at time t." (14) (a) [the DD], (THE x: ...) (b) [boy N], (Ex)[BOY x] (c) [laughed MVB PAST], (Ey)(Et: PAST t)[LAUGH y t] Notice that each of the definitions consists again of a prefix and a matrix. An example is given in (37). (Here as elsewhere in this paper, "#" stands for initial and "$" for final clause boundary.) (45) # John believed $ ([compl] that the cake was poisoned $) If we assumed that t1 = t2 = t3 in (49), then the two conjoined phrases "and round and round" should be redundant in the same sense in which "Fido is a dog and is a dog and is a dog" is. However, (49) can quite naturally be interpreted ... To formalize this, we can make use of pattern matching. For instance, the encyclopedia would contain a pattern like (51), and there would furthermore be a meaning rule like (52). (52) (A, B, t1, t2)[CAUSE<A(..t1..), B(..t2..)> IMPL t1 CIRPREC t2] We need only make sure that the pattern (51) is activated by the two sentences "he broke his arm" and "he fell off his bike" in (50B), which can be done by calling up all patterns which are in the intersection of the main verbs of the two sentences. (For instance, "break" and "fell" both point to (51).) In effect, ... Another restriction seems to be that the Temporal Sequence ... While t1 must precede t2 in (54), they seem to be roughly simultaneous in (55), even though lighting a cigarette and leaving a room normally count as "instantaneous" events (see ...). The condition against conjunction by "and" is necessary since "and" can never mean "and before that ...". For instance, the syntax-sensitive rules have applied (and failed). : (4) A girl {with a green hat / who wore a green hat} greeted John. Appendix:
null
null
null
null
{ "paperhash": [ "nash-webber|semantics_and_speech_understanding", "kuno|the_predictive_analyzer_and_a_path_elimination_technique" ], "title": [ "Semantics and Speech Understanding", "The predictive analyzer and a path elimination technique" ], "abstract": [ "Abstract : Syntactic constraints and expectations are based on the patterns formed by a given set of linguistic objects, e.g. nouns, verbs, adjectives, etc. Pragmatic ones arise from notions of conversational structure and the types of linguistic behavior appropriate to a given situation. The bases for semantic constraints and expectations are an a priori sense of what can be meaningful and the ways in which meaningful concepts can be realized in actual language. The paper describes how semantics is being used in several recent speech understanding systems. It then expands the generalities of the first section with a detailed discussion of some actual problems that have arisen in the attempt to understand speech.", "Some of the characteristic features of a predictive analyzer, a system of syntactic analysis now operational at Harvard on an IBM 7094, are delineated. The advantages and disadvantages of the system are discussed in comparison to those of an immediate constituent analyzer, developed at the RAND Corporation with Robinson's English grammar. In addition, a new technique is described for repetitive path elimination for a predictive analyzer, which can now claim efficiency both in processing time and core storage requirement." ], "authors": [ { "name": [ "B. Nash-webber" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "S. Kuno" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "60296761", "16669681" ], "intents": [ [ "background" ], [] ], "isInfluential": [ false, false ] }
null
593
0.003373
null
null
null
null
null
null
null
null
7bb56e990a610940a53e4713deeed2140acb3b0c
219303156
null
{AUTONOTE}2: Network Mediated Natural Language Communication in a Personal Information Retrieval System
This paper is based on a doctoral dissertation by the first author. Support from the National Science Foundation under Grant No. DCR71-02038 is gratefully acknowledged. Those wishing more complete details about system commands and implementation should write the second author for a User's Manual. (2) Now at Southern Railway System, 125 Spring Street, S.W., Atlanta, Georgia 30303.
{ "name": [ "Linn, Jr., William E. and", "Reitman, Walter" ], "affiliation": [ null, null ] }
null
null
null
1975-09-01
6
0
null
The system described here uses the AUTONOTE information storage and retrieval system (Reitman et al.). Text entry. To enter a new text item, the user first types the command ENTER and the system responds with a numerical tag for the new item. The system then enters a "text insertion mode" and indicates its readiness to accept successive text lines with a question mark. After entering text, the user may return to "command mode" by entering a null line or an end-of-file indication. Should the user at any time wish to continue inserting text into the current item, he may re-enter text insertion mode via the INSERT command. Subsequent lines are placed below the most recent line for the current item in the text file. In command mode, the system prompts the user for input with a minus sign. The user may give each command in full or he may abbreviate by giving any initial substring of the command name. Descriptor entry. To associate one or more descriptors with the current text item, the user enters a list of words, beginning the input line with an at sign (@). Any character string up to 16 characters in length may be used as a descriptor. In addition to updating the descriptor index, the system also places the actual "@-line" in the text file in a subregion beneath the text of the current item. Retrieval. To display a particular text item the user may enter the command PRINT followed by the appropriate item number. In most cases, however, the specific item number(s) will not be known. The item-item linkage. The ability to define associative links between any two text items is provided by the APPEND command. When an item is displayed, its associative links to other items may optionally be printed along with a user-specified comment indicating the nature of the association. Grouping. AUTONOTE provides a grouping facility which permits the user to organize text items in several useful ways. ... A grouping item can be viewed as a node of an inverted tree structure with downward branches to those items listed in its "@GROUP" line. A request to display a grouping item initiates recursive processing of the tree structure to identify the terminal and nonterminal items of the hierarchy. The user may request that only terminal or nonterminal items be displayed, or that the entire list of materials be printed. The organization of the HELP data base described above provides an excellent example of the power and flexibility of the grouping facility. The HELP text file contains at this writing approximately 150 items of documentation. Using the grouping convention, these are organized into five subgroups: (1) general information; (2) input and editing facilities; (3) output (retrieval) facilities; (4) organizational facilities; and (5) utility commands. There is one major item which groups all of these subgroups into a single tree structure.
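A minimal Python sketch of the command flow just described (ENTER, @-descriptor lines, and @GROUP trees) is given below. The data layout and function names are invented for illustration and are not the MTS implementation.

# Sketch of the AUTONOTE-style flow described above: ENTER a text item,
# tag it with @-descriptors, and group items into an inverted tree.

items = {}        # item number -> list of text lines
index = {}        # descriptor -> set of item numbers
groups = {}       # grouping item -> list of member item numbers
next_tag = 1

def enter(lines):
    global next_tag
    tag, next_tag = next_tag, next_tag + 1
    items[tag] = list(lines)
    return tag                       # system responds with a numerical tag

def describe(tag, at_line):
    # e.g. "@smart retrieval" associates two descriptors with the item
    for word in at_line.lstrip("@").split():
        index.setdefault(word[:16], set()).add(tag)   # 16-character limit
    items[tag].append(at_line)       # the @-line is also stored beneath the text

def group(parent, members):
    groups[parent] = members

def terminals(parent):
    """Recursively collect the terminal items of a grouping hierarchy."""
    out = []
    for m in groups.get(parent, []):
        out.extend(terminals(m) if m in groups else [m])
    return out

a = enter(["Notes on SMART evaluation."]); describe(a, "@smart retrieval")
b = enter(["Notes on AUTONOTE design."]); describe(b, "@autonote design")
g = enter(["Reading notes."]); group(g, [a, b])
print(terminals(g))   # -> [1, 2]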
AUTONOTE also provides a large number of auxiliary commands and facilities. ... These routines then become a part of the resident system, remaining in core storage until the user explicitly requests their removal. An organizational diagram of the AUTONOTE system appears in Fig. 1. The modular design of AUTONOTE coupled with the dynamic loading facility offers two important benefits. From the user's viewpoint, he has access to the complete repertory of AUTONOTE services, yet he pays core storage charges only for those routines he actually uses during a given session. To the developers of the system, the modular framework facilitates the addition of new system components. [Fig. 1: AUTONOTE system organization: storage and input/output routines; monitor (command interpreter); dynamic loader; processors.] The latter has been an important factor in the implementation of the AUTONOTE2 system. The AUTONOTE2 system uses ideas (Reitman, 1965; Reitman et al., 1969) concerning the use of our "knowledge of the world" to disambiguate and fill in implied facts when conversing with one another. In particular, the system design is based upon the assumption that efficient human communication "depends upon the listener's ability to make inferences from prior information, from context, and from a knowledge of the speaker and the world. Communicating in this way, we risk occasional misunderstanding as the price for avoiding verbose, redundant messages largely consisting of material the listener already knows" [Reitman et al., 1969]. In our more restricted domain of discourse, we view the process of human referential communication as one guided by some form of internal representation of the various topics or referents discussed earlier. When a listener can be assumed to have such a representation, the speaker is spared the difficulty of describing in complete detail the things to which he refers. He need only give enough information to allow the referent to be discerned in full. Our goal then is to develop a representational scheme for our retrieval system that allows the user analogous communicative efficiencies. The first step in devising a representational framework was the formulation of a language for expressing topic descriptions to the system. Although an underlying factor in the design of AUTONOTE2 was to make communication with the system more "natural," it should be noted that the emphasis of this research is not upon parsing or "understanding" natural language. Rather, our goal is to investigate the notions of topic representation and referential communication as a means for improving the user's ability to describe, organize, and retrieve his materials. Consequently, a minimal subset of noun phrases was chosen, minimal in the sense that it excludes most of the complexity of natural English, yet still retains a degree of descriptive richness sufficient to explore the underlying ideas of this study. Natural language enables us to combine nouns and adjectives into noun phrases and to interlink noun phrases via prepositions to form complex descriptions of objects in the real world. The AUTONOTE2 description language provides such a framework for composing topic references. A formal grammar for the language is given in Fig. 2 along with a few sample descriptions that illustrate the flexibility of expression achievable with the language. These grammatical rules are not in fact used explicitly by the system in actually parsing topic descriptions.
The grammar is presented here only to specify precisely the set of descriptions acceptable to the system. The actual AUTONOTE2 parser is heuristic-based, making use of previously analyzed phrases, noun-preposition co-occurrences, and a set of heuristics to guide the parsing process.

[Fig. 2. (a) Grammar for the description language:
<description> ::= <noun-group> | <noun-group> <preposition> <description>
<noun-group> ::= (<article>) (<modifier-group>) <noun>
<preposition> ::= about | to | from | in | on | etc.
<article> ::= a | an | the
(b) Sample descriptions: The paper about microprogramming in the proceedings of the fall joint computer conference; Notes on the organization of AUTONOTE2 for use in the presentation of the ACM; The use of recall-precision measures in the evaluation of the SMART information retrieval system; Quotes from Feldman's 1969 paper for use in the introduction of the second chapter. Notes: Modifiers and nouns are arbitrary character strings not recognized as articles or prepositions. When a number of consecutive "words" are encountered, the last is parsed as a noun and the preceding words as modifiers. Possessive adjectives are treated as a special case of adjectival modification.]

In some instances, the user may even be asked for parsing assistance. Central to the design of AUTONOTE2 is the idea of viewing the user's information universe as a collection of "informational objects" or topics, each having associated with it a number of text items. When the user wishes to describe a text item, we assume he has such a topic in mind. Using the [description language, he composes a reference to that topic] and presents it to the system. AUTONOTE2 then constructs an internal representation of that topic. When a text item is described, the system must consult the representation to determine if the description (1) references an existing topic, (2) is related to an existing topic, or (3) defines a new topic. In any case, the ultimate goal is to associate the text item with a topic representation, possibly augmenting the representation in the process. Efficiency of communication. Efficient man-machine communication implies that the user should not in general have to formulate a complete description of a particular topic in order to convey a reference to it. The system should be capable of accepting and correctly interpreting incomplete references by filling in missing information. As an example, a topic fully described as THE PAPER BY SALTON ABOUT THE SMART SYSTEM might be referred to as THE PAPER, THE PAPER BY SALTON, THE PAPER ABOUT THE SMART SYSTEM, and so on. A description in the AUTONOTE2 language consists of a noun modified by adjectives and prepositional phrases. The words that modify any given term may themselves be modified in exactly the same way. In effect, each adjective and prepositional phrase functions as a phrase component that imparts greater detail to the overall description.
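The Fig. 2 grammar is simple enough to drive a textbook recursive-descent parser. The sketch below is such a parser, written only to make the grammar concrete; as noted above, the system's own parser is heuristic-based rather than grammar-driven, and the word lists here are illustrative assumptions.

# Recursive-descent sketch of the Fig. 2 description grammar:
#   <description> ::= <noun-group> [<preposition> <description>]
#   <noun-group>  ::= (<article>) (<modifiers>) <noun>
# Per note (b), the last word of a run is the noun; the rest are modifiers.

ARTICLES = {"a", "an", "the"}
PREPOSITIONS = {"about", "to", "from", "in", "on", "of", "by", "for", "at"}

def parse_description(tokens, i=0):
    group, i = parse_noun_group(tokens, i)
    if i < len(tokens) and tokens[i] in PREPOSITIONS:
        prep = tokens[i]
        rest, i = parse_description(tokens, i + 1)
        return (group, prep, rest), i
    return group, i

def parse_noun_group(tokens, i):
    if i < len(tokens) and tokens[i] in ARTICLES:
        i += 1                                   # skip the optional article
    words = []
    while i < len(tokens) and tokens[i] not in PREPOSITIONS | ARTICLES:
        words.append(tokens[i]); i += 1
    if not words:
        raise ValueError("noun expected")
    return {"noun": words[-1], "modifiers": words[:-1]}, i

tree, _ = parse_description("the planned paper about autonote".split())
print(tree)
# ({'noun': 'paper', 'modifiers': ['planned']}, 'about',
#  {'noun': 'autonote', 'modifiers': []})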
In the example above, BY SALTON and ABOUT THE SYSTEM provide information about the paper; SMART specifies which system is meant. To facilitate efficient communication we require a representational framework that makes explicit the component phrases of each topic description. Given such a framework, we have a basis for comparing incomplete descriptions with the representation to determine possible topic referents. A system that makes use of syntax in the user's descriptor entries increases descriptive power in that it permits distinctions that, in general, will not be made in keyword-based retrieval systems. A description such as THE ORGANIZATION OF THE PAPER ABOUT MTS is semantically quite different from THE PAPER ABOUT THE ORGANIZATION OF MTS, despite the fact that both contain the same words. A system that takes into consideration the syntactic relationships that hold among the words ORGANIZATION, PAPER, and MTS can discriminate between the two. The considerations outlined thus far lead quite naturally to some form of dependency representation for the user's topics. Essentially, a dependency representation for the AUTONOTE2 language would reflect the syntactic dependence of each adjective and prepositional phrase upon an appropriate noun. Such a framework provides the essential information for enhancing descriptive power and communicative efficiency as defined above. Hierarchical representations. We view a topic as a group of interconnected subtopics, each bearing on a central theme yet with varying levels of generality. To make this notion more concrete, consider a user of AUTONOTE2 putting down his thoughts and ideas for a book he is writing. He begins by entering some general material which he describes simply as "THE BOOK ABOUT...." At some later time he may enter an outline for the book, a list of reference materials he will use, publishing arrangements, etc. Still later, he will enter materials for the chapters of his book and perhaps outlines for each chapter. In time he will have defined a host of related descriptions. Fig. 3 gives a pictorial representation of the resultant complex "topic." The representational scheme of AUTONOTE2 was designed with complex hierarchies such as this one in mind. In other words, we want to represent related topic descriptions via interconnections in a network. The essential idea is that such a network corresponds to a map of the organization of the associated textual materials, a map that should reflect important structural relationships among the materials from the user's viewpoint. A hierarchical representation of this kind is especially effective during retrieval. If the user requests materials dealing with his book, for example, the system can also inform him that he has more specific items dealing with the publishing arrangements, the component chapters, and so on. The notion of a representational network fits well with the dependency framework we require. The syntactic dependencies among the words and phrases of a description may be used to represent structural relationships among the user's topics. In the example above, the network connection between the "outline" and the "book" corresponds to the syntactic dependency of "book" upon "outline" in the description THE OUTLINE OF THE BOOK ABOUT.... [Fig. 3: A topic hierarchy, with nodes such as THE OUTLINE OF CHAPTER 1 and THE BOOK.]
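The hierarchy of Fig. 3 can be pictured as a small network of nodes with upward and downward links. The sketch below uses an invented Python data layout to show how the syntactic dependency of "book" upon "outline" becomes a structural link; it is an illustration of the idea, not the stored format.

# Sketch of a Fig. 3 style topic hierarchy. The layout is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    phrase: str                                   # e.g. "OUTLINE of BOOK"
    items: list = field(default_factory=list)     # associated text items
    up: list = field(default_factory=list)        # more specific topics
    down: list = field(default_factory=list)      # more general topics

book = Node("BOOK about ...")
outline = Node("OUTLINE of BOOK")
chapter1 = Node("CHAPTER 1 of BOOK")

for specific in (outline, chapter1):   # "outline"/"chapter 1" depend on "book"
    specific.down.append(book)
    book.up.append(specific)

def more_specific(node):
    """At retrieval time, report structurally related higher order topics."""
    return [n.phrase for n in node.up]

print(more_specific(book))   # -> ['OUTLINE of BOOK', 'CHAPTER 1 of BOOK']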
Augmentation of the representation. In the previous discussion of communicative efficiency we were concerned with associating an incomplete description with its corresponding topic. In designing the representational framework we also had to consider the case in which a reference provides a more detailed description of an existing topic. In such instances we want to enrich the topic representation to include the additional information. Whether additional descriptive information is encountered in a subsequent item description or in a retrieval request, we want the system to incorporate it into its existing knowledge of the user's topics. This requires that the representation be structured in such a way that dynamic augmentation is easily accomplished. The representation of context. In providing a framework for interpreting terse, incomplete references we naturally are confronted with the problem of ambiguity. A description such as THE PAPER, or THE PAPER ABOUT MICROPROGRAMMING, may in fact satisfy a large number of distinctly described topics. To deal with this problem we require some kind of contextual framework that enables the system to infer, where possible, the intent of a vague or ambiguous reference. A user who has been entering material for a paper he is writing should be able to describe a subsequent item as, say, THE OUTLINE OF THE PAPER, and have the system infer which paper he means. In general then, we want the representational framework to include information that identifies the "working context," i.e., those topics the user has referred to recently. System interrogation of the user. Presented with an ambiguous description "out of context," the system is faced with much the same dilemma a human listener would face. In such instances, we want the system to be capable of asking pertinent questions to resolve the ambiguous reference. This implies, of course, that the representation preserve sufficient information to enable it to reconstruct descriptions of the user's topics. Overview of the AUTONOTE2 implementation. Data structures. We have now presented the major design requirements for the representational framework. These preliminary criteria suggest a representation organized as a network of (possibly interconnected) dependency structures obtained from syntactic analysis of topic descriptions. The network data structures are discussed in section IV in terms of the representational criteria and also the computational requirements: how they are to be accessed, modified, and so on. The parser. The subordinate relationship of the node B phrases to the node A phrase, and in turn, that of the node C phrase to the node B phrases, is reflected by downward branches connecting those nodes. The resultant tree structure defines the representation of its corresponding topic. Representations of each of the user's topics are organized into a [network]. ... Should the user later describe a new text item as, say, OUTLINE OF THE PAPER or OUTLINE OF THE PAPER ABOUT AUTONOTE2, the system will note that it already has a representation for the topic. In designing special purpose list structures for the representational network, we first specified the logical components of the structure and defined the interconnections among these primitives. Three logical components were formulated: simple phrases, nodes, and words. The following subsections present the major design considerations for each structural component. Simple phrases. Given our goal of communicative efficiency, we chose the simple phrase as a primary unit for the network.
By analyzing a topic description into simple phrases we are in effect isolating possible "shorthand" references to the given topic. The representational data structures have been designed to allow a topic to be referenced through any of its component simple phrases. Simple phrases are formed from either adjectival or prepositional modification of a noun. Very often, an adjectival modification can be equivalently expressed by a prepositional phrase dependent upon the same noun. ... [A node that repre]sents a particular paper may be linked downward to another that describes a conference at which the paper was presented; it may also be linked to several higher order nodes corresponding to, say, a summary, an outline, and a review of the paper. As more and more items are described ... node due to the syntactic dependency of "conference" upon "paper" in a phrase of the form, PAPER AT THE ... CONFERENCE. ... efficiency. Since we anticipate that users will make frequent use of single word references when working in the context of a particular topic, we want to provide a natural and convenient treatment of such descriptions. Finally, a phrasal description can convey a higher order categorization of an existing topic without containing a simple phrase for that topic. For example, THE ... These considerations lead us to the third logical component of the network data structures, the single word. Essentially, each component word provides access to a series of pointers to simple phrases in which the word occurs. Word-to-phrase pointers are of two types: those indicating usage as subject noun, and those indicating modifier usage in a particular simple phrase. As we shall see later, this distinction is required in order to relate new simple phrases to existing topics at an appropriate node level. Having specified the three logical components and the linkages in the representational network, we now turn our attention to the storage implementation of these structures. There are three directories needed to maintain the representational network, one for each of the components of the structure. All directory information must, of course, be saved in permanent storage between AUTONOTE2 sessions. Two design alternatives were considered for maintaining the network during execution of the program. The directories could be accessed and updated on disk, or they could be brought into core storage for the duration of the session. We adopted the former strategy for a number of reasons. First, AUTONOTE is highly oriented toward the use of disk file storage. Several file interface routines were available at the outset for conveniently storing and accessing information through the MTS file system. Second, as the network grows in complexity, it becomes increasingly unlikely that the user will reference the major portion of the network during any given session. By maintaining the network in disk files, the amount of core storage required is substantially reduced. Finally, the file approach greatly simplified the programming effort, especially in those system components that operate recursively on the list structured network. We will elaborate on this point further in section VI, which illustrates the simplification of recursive processes in AUTONOTE2.
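A sketch of how the word and phrase directories might support topic location through component simple phrases follows. The directory contents, and the strategy of intersecting node sets, are illustrative assumptions rather than the stored format.

# Sketch of the three directories (words, simple phrases, nodes) and of
# locating a topic through component simple phrases. Data are invented.

phrase_to_nodes = {
    "paper by salton": {1}, "paper about system": {1}, "smart system": {1},
    "paper by winograd": {2},
}

word_index = {  # word -> (phrases where it is the subject noun,
                #          phrases where it is a modifier)
    "paper": ({"paper by salton", "paper about system", "paper by winograd"}, set()),
    "salton": (set(), {"paper by salton"}),
    "system": ({"smart system"}, {"paper about system"}),
    "smart": (set(), {"smart system"}),
}

def locate(simple_phrases):
    """Intersect the node sets of a description's simple phrases."""
    nodes = None
    for p in simple_phrases:
        hits = phrase_to_nodes.get(p, set())
        nodes = hits if nodes is None else nodes & hits
    return nodes or set()

def single_word(word):
    """A one-word reference is resolved through its subject-noun usages."""
    subject_phrases, _ = word_index.get(word, (set(), set()))
    nodes = set()
    for p in subject_phrases:
        nodes |= phrase_to_nodes[p]
    return nodes

print(locate({"paper by salton"}))   # -> {1}: a unique referent
print(single_word("paper"))          # -> {1, 2}: ambiguous, so context or a query decides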
[The user describes Item] 270 as SMITH'S PAPER, and enters a summary of that paper into Item 312. A pictorial representation of the resultant portion of the network is given in Fig. 9, while the corresponding directory contents appear ... In such instances we rely upon the user to supply the referent noun upon request. In the example above, the system may prompt: DOES "ABOUT GENETICS" REFER TO PAPER OR CONFERENCE? Should the user reply CONFERENCE, the simple phrase CONFERENCE ON GENETICS will be added to the network. If at some later time the parser is attempting to find a referent for the prepositional phrase ON GENETICS where CONFERENCE is one of the alternatives, it forms that simple phrase directly. Consecutive modifiers. A parallel problem arises in determining the noun referents for a string of consecutive modifiers. Descriptions containing at most a single adjective for any particular noun are parsed in the obvious manner. A simple phrase is formed from each modifier and the noun following it. In the event a noun is preceded by two or more modifiers, the parser is confronted with a task similar to that of determining the referent of a prepositional phrase. The possessive heuristic can be fully stated as follows. A possessive occurring in a string of modifiers will be assumed to modify the head noun unless another possessive occurs between it and the head noun. In the latter case, the first possessive will be assumed to modify the second. This is similar to the possessive feature employed by the REL parser (Dostert & Thompson, 1971). Thus in SMITH'S RESEARCH GROUP'S MEMORY EXPERIMENT, SMITH'S is assumed to modify GROUP, and GROUP'S is assumed to modify the head noun EXPERIMENT. The question now arises, why check the phrase directory first instead of applying the possessive heuristic immediately? To answer this, suppose a topic was originally described as THE RESULTS OF THE MEMORY EXPERIMENT BY SMITH ... and the machine has just encountered an article and is anticipating a "word." If the machine is in state S0 upon completion, it has just recognized a preposition and is expecting an object; thus, the string is rejected. The state S4 is reached whenever a possessive is encountered. Since a possessive must have an object noun, a "word" input is required to reach state S1. State S3 is a trapping state; once entered, the machine remains in that state regardless of the remaining input, and the description is consequently rejected. State S3 corresponds to various error conditions: two consecutive prepositions or articles, an article between two words, a phrase beginning with a preposition, etc. The state transitions for the description BRUNER'S FIRST EXPERIMENT ON THE CONSERVATION OF LIQUIDS are given below along with the resultant word list. (1) autonote/paper (2) planned/paper (3) conference/paper ... previously and accepts that candidate. ACM CONFERENCE is added to the phrase table and the parsing is complete (Fig. 14). Node 2. ORGANIZATION OF THE PLANNED PAPER ABOUT AUTONOTE FOR THE ACM CONFERENCE. Node 3. THE ACM CONFERENCE. Node 4. AN ABSTRACT OF THE FIRST PAPER ABOUT AUTONOTE. Node 5. THE FIRST PAPER ABOUT AUTONOTE. Node 6. THE REVIEWER'S COMMENTS ON THE PLANNED PAPER.
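The validity check sketched above is naturally a finite-state machine. The transition table below is a plausible reconstruction for illustration only; the published machine's exact states and transitions are not reproduced here.

# Finite-state sketch of description validation, in the spirit of the
# S0..S4 machine above. States: S1 = after a word (accepting), S0 = after
# a preposition, S2 = after an article, S4 = after a possessive,
# S3 = trapping error state. Transitions are a reconstruction.

TRAP = "S3"

TABLE = {
    ("start", "word"): "S1", ("start", "article"): "S2", ("start", "poss"): "S4",
    ("S1", "word"): "S1", ("S1", "prep"): "S0",
    ("S1", "article"): "S2", ("S1", "poss"): "S4",
    ("S0", "word"): "S1", ("S0", "article"): "S2",
    ("S2", "word"): "S1", ("S4", "word"): "S1",
}

def classify(token):
    if token in {"a", "an", "the"}: return "article"
    if token in {"about", "of", "on", "by", "for", "to"}: return "prep"
    if token.endswith("'s"): return "poss"
    return "word"

def accepts(description):
    state = "start"
    for token in description.lower().split():
        state = TABLE.get((state, classify(token)), TRAP)
        if state == TRAP:
            return False              # trapping state: reject the description
    return state == "S1"              # must end on a word, not a dangling preposition

print(accepts("bruner's first experiment on the conservation of liquids"))  # True
print(accepts("of the paper"))        # begins with a preposition -> False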
Context in the AUTONOTE2 system takes the form of an access recency number (context number). Each time the user refers to some topic in the network, [the topic's context number is updated]. ... where PAPER is the subject noun. Using the resultant list of simple phrases, a list of nodes directly described by these phrases is generated. In this case, this process generates two alternatives (node 5 and node 1). The system then functions as before, either choosing a node in context or interrogating the user. The foregoing discussion has described our approach to network location. We now give a more detailed presentation of the algorithm. Before describing the matching process, let us first consider a few special cases. Suppose, for instance, that the focus phrase directly describes only one topic node and that any additional active phrases are also present in that topic representation. The presence (or absence) of nonactive phrases in the description is, in this case, an important parameter. Any non-active phrases may serve to distinguish the description from the existing topic. On the other hand, they could very well represent additional description of the topic at hand. If the topic under consideration is recent, we first assume the latter case. In addition, when processing descriptions rendered for retrieval, the network locator naturally rules out the possibility that a new topic is being described and accepts the one at hand. The matching process. When the focus phrase directly describes two or more nodes, a network matching procedure is used to determine which of the associated topics the description references. The simplified flow diagram appearing in Fig. 18 summarizes the decision procedure for the case in which the description contains one or more active phrases. [Fig. 17: a fragment of the network, with nodes such as THE ORGANIZATION of the paper, SUMMARY, PAPER about AUTONOTE, and PAPER for the conference.] The previous sections dealt primarily with the process of item description, that is, the process of constructing a representation from descriptions of the user's textual materials. This section discusses the AUTONOTE2 procedures that retrieve information through the representational network. Many of the procedures described earlier for item description and representation are used in retrieval. The user initiates retrieval by giving a FIND command, supplying a description as argument. Retrieval descriptions are first passed to the parser, and are therefore subject to exactly the same constraints as item descriptions. If the description is acceptable, the resultant phrase table is passed along to the network locator, which ultimately returns a node number to the FIND processor. The FIND processor constructs a set of item numbers by extracting the textual references from the node returned by the network locator. The system then checks for upward pointers from the node, to more specifically described materials. If there are structurally related topics, the FIND processor so informs the user and asks if he would like to explore further. If so, the user is presented with descriptions of the higher order alternatives. Using the network depicted in Fig. 16, for example, consider the retrieval request FIND THE PLANNED PAPER ABOUT AUTONOTE.
The network locator would determine that node 1 is the desired referent and return that fact to the FIND processor. After storing away the item references of node 1, the system would ask: ... If a node is reached with multiple upward paths, the system stops and queries the user. For example, if a user has entered only an outline and some bibliographic references for a paper he is writing, then a retrieval description that maps onto the "paper" node would elicit a query such as: ... A description is likewise supplied as the argument for the DESCRIBE command. If the system is unable to discern a unique node using the matching procedure and context, a list of the alternatives is returned for subsequent display. The FULLY modifier. The user may request the display of a host of related topics by employing the FULLY modifier. Specifically, the user types DESCRIBE FULLY, followed by any of the argument forms discussed above. As before, this generates a node or set of nodes. When describing FULLY, each node is in turn expanded into a set of structurally related nodes also having associated textual references. As an example, consider again the network in Fig. 16. The user types DESCRIBE FULLY, THE PAPER ABOUT AUTONOTE. Assuming no choice is possible in context, the description is ambiguous, and the network locator returns nodes 1 and 5. ... a list (called the modification chain) of prepositional modifications of the subject noun. For example, the subjects table entry for PAPER may have an adjective list containing PLANNED, and a modification chain consisting of (ABOUT) AUTONOTE and (FOR) CONFERENCE. Both of the lists are chained through the table of modifiers. Note that some words will appear in both the subject and modifier tables. For example, PAPER may be in the modifier table as part of the modification chain of the word ORGANIZATION, and also in the subjects table with a modification chain of its own. ... The second stage is carried out by a recursive algorithm that operates on ... The first "pop" restores the PAPER modification chain. Since there is ...
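The FIND exploration loop just described (collect a node's references, then offer to follow upward pointers to more specifically described materials) can be sketched as follows. The node layout and the dialogue strings are invented for illustration.

# Sketch of the FIND command's upward-pointer exploration.

def make_node(phrase, items=(), up=()):
    return {"phrase": phrase, "items": list(items), "up": list(up)}

outline = make_node("OUTLINE of PAPER", items=[102])
biblio = make_node("BIBLIOGRAPHY for PAPER", items=[103])
paper = make_node("PAPER", items=[101], up=[outline, biblio])

def find(node, ask):
    retrieved = list(node["items"])        # textual references of the node
    while node["up"]:                      # structurally related topics?
        options = {str(i): n for i, n in enumerate(node["up"], 1)}
        menu = ", ".join(f"{k}: {n['phrase']}" for k, n in options.items())
        choice = ask(f"Explore further? ({menu}, or 'no') ")
        if choice not in options:
            break
        node = options[choice]
        retrieved.extend(node["items"])
    return retrieved

# Simulated dialogue: explore the first higher-order topic, then stop.
replies = iter(["1", "no"])
print(find(paper, ask=lambda prompt: next(replies)))   # -> [101, 102]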
The resultant processor includes procedures for removing or adding item references to a topic, deleting topics, adding or removing simple phrases from the description of a topic, etc. Rather than require the user to identify the particular topic to be altered each time a modification is to be performed, primitives are implemented as local commands to a generalized modification processor. The modification processor is invoked by issuing a CHANGE command, which accepts a phrasal description as its argument. A node in the network is established as the current identified topic. The processor then prompts the user for modification instructions. After all modifications are completed, the user types DONE and control is returned to the regular command monitor. The CHANGE command also may be issued while in modification mode, thereby changing the current topic. Each of the local commands is discussed separately below, using the hypothetical representation depicted in Fig. 21 for illustration. The ADD command associates additional text references with the current topic, and adds simple phrases to the topic's description. To add item references, the user types ADD ITEM[S] followed by a list of item numbers. This procedure is quite useful if the user has a large set of items that pertain to a particular topic. He simply identifies the topic and adds the list of references. Note that if the paper node were deleted and the two higher order nodes were not, the higher order nodes would no longer be structurally related. In addition, their descriptions will still contain the word "paper," but which paper no longer is specified. For these reasons, we con[sider] ... In the instances we have examined, the distinction between the two cases seems to be that "unimportant" nodes have neither textual references nor up[ward pointers] ... The down stack is then popped and node 4 is established as the next node to be examined. After setting a flag indicating that we have just moved down a level in the network, the algorithm recurses on node 4. The system detects three upward pointers (to nodes 2, 5, and 6). It should now be apparent why we save the fact that node 4 was reached by moving down from node 2. When placing a node's upward pointers on the up stack, the node that led down to the current node must be ignored. [Figure: up stack (1,2), (3,2); down stack (4,2); deletions list: 2.] Upon noting that node 4 has upward pointers in addition to node 2, the system checks to see if it has just moved down. In this case it has; consequently, node 4 is deemed "important" and the system asks DO YOU WANT THE ACM CONFERENCE DELETED? Assume the reply is NO. Since the ACM CONFERENCE node will remain, the system records that the linkage between nodes 2 and 4 must be [severed]. Carrying out the deletion involves several steps. First, any linkages between those nodes that are to be deleted and those that will remain are severed. ... [Precision is the ratio of retrieved documents] deemed relevant to a query; recall is the ratio of relevant documents retrieved to the total relevant in the data base. ... indexing by topic. To achieve a direct comparison, protocols of both types of indexing activity with a common data base are required. The original AUTONOTE system was employed in a study (Sauvain, 1970) aimed at uncovering structural communication problems within a keyword-based system. The resulting data base is related primarily to Sauvain's dissertation research. It includes reading notes, bibliographic references, research ideas, expository material, and so on. The collection brings together a broad range of topics and ideas touching upon various aspects of computer science, information retrieval, man-machine interaction, and psychology. Copies of the item texts, the originally assigned keywords, and protocols of Sauvain's activities during data base indexing, organization, and retrieval were acquired. We then proceeded to re-index the collection with AUTONOTE2 topic descriptions. Each of the roughly 400 items in the data base was viewed and described in a sequential fashion; that is, there was no look-ahead or preplanning of topic phrasings to facilitate network structuring. Protocols were collected of all interaction with the system and the state of the network was recorded at periodic intervals.
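Since the comparison study leans on recall (and, implicitly, precision), here are the standard definitions stated as code for concreteness; the sample sets are invented.

# Standard recall/precision definitions, as used in the evaluation above.

def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of the relevant documents in the data base retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {101, 102, 105}
relevant = {101, 102, 103, 104}
print(precision(retrieved, relevant))  # 2/3, about 0.67
print(recall(retrieved, relevant))     # 2/4 = 0.5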
(For details, see Linn, 1972.) For brevity, AUTONOTE2 reports of parsing assumptions are excluded. However, system responses that elicit a user reply are shown to provide a feeling for user interaction under AUTONOTE2. Indexing activity. The AUTONOTE2 protocols show a high degree of terse, efficient referencing of previously defined topics. The communicative efficiency was especially great in instances where several consecutive items were entered on a common topic. This situation frequently occurred when entering a set of reading notes on a particular paper or collection of papers. Typically, the first item in such a set of entries was assigned to one or more new topics. In describing the subsequent items, references to these topics often were conveyed by a single word or phrase, or by a null description (a description line consisting of only a slash is treated as a reference to the topic just mentioned). ... the network through the "artificial intelligence" node. In the AUTONOTE protocol for these materials, there was frequent use of descriptor abbreviations and other idiosyncratic tags (CWRUAICONF, AT, COGPSY, etc.). These suggest a strong desire to eliminate repeated entry ... more of these items were described in full. The random sample lacked a consistent contextual framework, and consequently had the least communicative efficiency on the average. Retrieval activity. We have seen that retrieval activity is an essential part of the indexing and organizational processes. The second raises a more significant problem. Which descriptors should be used to restrict the size of the accessed set of items? Some descriptors may restrict the set too greatly, eliminating relevant material; others may discriminate very little or not at all. In the absence of system feedback, this discrimination process places a major burden on the user's memory. The AUTONOTE2 system, on the other hand, provides the user with very meaningful feedback in response to general queries. Consider, for example, the retrieval protocol presented in Fig. 24. At each level in the representational network, the user is given an opportunity to choose among several subtopics. This example very effectively demonstrates a marked improvement over keyword indexing: the ability to discriminate among subsets of material indexed under a common set of general descriptors. An analysis of man-machine dialogs collected during the description of a realistically diverse collection of textual materials has shown the communicative ease and efficiency and the descriptive power attainable under the referential system. The results indicate that the referential mechanisms developed in this study constitute a viable alternative to keyword indexing techniques as applied to personal information systems. The referential approach offers four primary contributions toward the improvement of man-machine communication; each corresponds to a particular kind of facilitation during storage and retrieval activity.
Finally, the utilization of the structural context provided by the network approach makes it possible for the user to describe, organize, and retrieve materials with considerable communicative efficiency. This is a fundamental aspect of the system design: to provide a framework for interpreting terse, efficient, sometimes ambiguous references to the topics in the information universe. In light of the increasing availability of on-line computing facilities today, it seems reasonable to expect that personalized retrieval systems will play an expanding role in the computer support of individual research activity. It is hoped that this study will suggest new directions for the design of such systems.
null
null
null
human communicative efficiency. The listener's representation of the topics already discussed facilitates communication in that the speaker is spared the trouble of describing in complete detail those things to which he refers. Furthermore, the speaker can proceed to related topics without having to describe them in full. For example, a speaker who has been talking about the design of a particular experiment can safely move on to discuss the results of the experiment without specifying anew the experiment he has in mind. This paper describes the design and implementation of a personal information storage and retrieval system based on the foregoing analogy with human referential communication. It presents a hierarchical network data structure for representing topic descriptions formulated within a phrasal description language. Called the representational network, this structure enables the system to move easily from one subject to other related ones. It provides a means for representing the user's working context, thereby enabling the user to describe his materials much more tersely than is possible in keyword-based systems. The system makes use of the syntactic dependencies among the words and phrases of descriptions in order to represent structural relationships among the user's topics. Consequently, the user imparts structure to the data base in a particularly natural way, eliminating much of the organization activity normally associated with keyword-based systems. Our central thesis is that the network mediated techniques provide for more effective man-machine communication during the processes of description, organization, and retrieval within a personally generated information universe. The procedures used here differ substantially from the typical keyword indexing and retrieval mechanisms of other personal retrieval systems. The central objective is to provide the user with a framework for defining the important topics or informational objects he deals with, and to enable him to easily associate items in his data base with these entities. Rather than viewing the data base as a collection of items and associated index terms, the user deals with "objects" that are in some sense meaningful to him. Whether retrieving information or indexing new material, the user conveys references to the appropriate topics. This shift in the user's view of his information universe, coupled with the mechanisms we have developed for building up and referring to the topic framework, constitutes the substance of our approach to personal information storage and retrieval.
Main paper: The AUTONOTE system: The system described here uses the AUTONOTE information storage and retrieval system (Reitman et al.).

Text entry. To enter a new text item, the user first types the command ENTER and the system responds with a numerical tag for the new item. The system then enters a "text insertion mode" and indicates its readiness to accept successive text lines with a question mark. After entering text, the user may return to "command mode" by entering a null line or an end-of-file indication. Should the user at any time wish to continue inserting text into the current item, he may re-enter text insertion mode via the INSERT command. Subsequent lines are placed below the most recent line for the current item in the text file. In command mode, the system prompts the user for input with a minus sign. The user may give each command in full or he may abbreviate by giving any initial substring of the command name.

Descriptor entry. To associate one or more descriptors with the current text item, the user enters a list of words, beginning the input line with an at sign (@). Any character string up to 16 characters in length may be used as a descriptor. In addition to updating the descriptor index, the system also places the actual "@-line" in the text file in a subregion beneath the text of the current item.

Retrieval. To display a particular text item the user may enter the command PRINT followed by the appropriate item number. In most cases, however, the specific item number(s) will not be known.

The item-item linkage. The ability to define associative links between any two text items is provided by the APPEND command. When an item is displayed, its associative links to other items may optionally be printed along with a user-specified comment indicating the nature of the association.

Grouping. AUTONOTE provides a grouping facility which permits the user to organize text items in several useful ways. ... A grouping item can be viewed as a node of an inverted tree structure with downward branches to those items listed in its "@GROUP" line. A request to display a grouping item initiates recursive processing of the tree structure to identify the terminal and nonterminal items of the hierarchy. The user may request that only terminal or nonterminal items be displayed, or that the entire list of materials be printed. The organization of the HELP data base described above provides an excellent example of the power and flexibility of the grouping facility. The HELP text file contains at this writing approximately 150 items of documentation. Using the grouping convention, these are organized into five subgroups: (1) general information; (2) input and editing facilities; (3) output (retrieval) facilities; (4) organizational facilities; and (5) utility commands. There is one major item which groups all of these subgroups into a single tree structure.
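The interaction conventions just described (the "-" command prompt, the "?" text-insertion prompt, initial-substring abbreviation, @-lines, and PRINT) can be sketched compactly. What follows is a hypothetical Python reconstruction, not the original MTS implementation; the STOP command and the in-memory data layout are assumptions for illustration.

    # Minimal sketch of an AUTONOTE-style command loop (assumed Python
    # reconstruction). An item must be current before @-lines are given.
    items, index, next_tag, current = {}, {}, 1, None

    def matches(word, command):          # initial-substring abbreviation
        return word != "" and command.startswith(word)

    while True:
        line = input("- ")                             # command-mode prompt
        word = line.split()[0].upper() if line.split() else ""
        if line.startswith("@"):                       # descriptor entry
            for d in line[1:].split():
                index.setdefault(d[:16], set()).add(current)
            items[current].append(line)                # @-line kept with item
        elif matches(word, "ENTER"):
            current, next_tag = next_tag, next_tag + 1
            items[current] = []
            print(current)                             # system returns the tag
            while (text := input("? ")) != "":         # null line ends insertion
                items[current].append(text)
        elif matches(word, "PRINT"):
            print("\n".join(items[int(line.split()[1])]))
        elif matches(word, "STOP"):                    # assumed exit command
            break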
AUTONOTE also provides a large number of auxiliary commands and facili[ties] ... These routines then become a part of the resident system, remaining in core storage until the user explicitly requests their removal. An organizational diagram of the AUTONOTE system appears in Fig. 1. The modular design of AUTONOTE coupled with the dynamic loading facility offers two important benefits. From the user's viewpoint, he has access to the complete repertory of AUTONOTE services, yet he pays core storage charges only for those routines he actually uses during a given session. To the developers of the system, the modular framework facilitates the addition of new system components.

[Fig. 1 -- AUTONOTE System Organization: monitor (command interpreter), dynamic loader, command processors, input/output routines, storage.]

The latter has been an important factor in the implementation of the AUTONOTE2 system.

The AUTONOTE2 system uses ideas (Reitman, 1965; Reitman et al., 1969) concerning the use of our "knowledge of the world" to disambiguate and fill in implied facts when conversing with one another. In particular, the system design is based upon the assumption that efficient human communication "...depends upon the listener's ability to make inferences from prior information, from context, and from a knowledge of the speaker and the world. Communicating in this way, we risk occasional misunderstanding as the price for avoiding verbose, redundant messages largely consisting of material the listener already knows" [Reitman et al., 1969].

In our more restricted domain of discourse, we view the process of human referential communication as one guided by some form of internal representation of the various topics or referents discussed earlier. When a listener can be assumed to have such a representation, the speaker is spared the difficulty of describing in complete detail the things to which he refers. He need only give enough information to allow the referent to be discerned in full. Our goal then is to develop a representational scheme for our retrieval system that allows the user analogous communicative efficiencies.

The first step in devising a representational framework was the formulation of a language for expressing topic descriptions to the system. Although an underlying factor in the design of AUTONOTE2 was to make communication with the system more "natural," it should be noted that the emphasis of this research is not upon parsing or "understanding" natural language. Rather, our goal is to investigate the notions of topic representation and referential communication as a means for improving the user's ability to describe, organize, and retrieve his materials. Consequently, a minimal subset of noun phrases was chosen--minimal in the sense that it excludes most of the complexity of natural English, yet still retains a degree of descriptive richness sufficient to explore the underlying ideas of this study.

Natural language enables us to combine nouns and adjectives into noun phrases and to interlink noun phrases via prepositions to form complex descriptions of objects in the real world. The AUTONOTE2 description language provides such a framework for composing topic references. A formal grammar for the language is given in Fig. 2 along with a few sample descriptions that illustrate the flexibility of expression achievable with the language. These grammatical rules are not in fact used explicitly by the system in actually parsing topic descriptions.
The grammar is presented here only to specify precisely the set of descriptions acceptable to the system. The actual AUTONOTE2 parser is heuristic-based, making use of previously analyzed phrases, noun-preposition co-occurrences, and a set of heuristics to guide the parsing process.

(a) Grammar for the description language:

<description> ::= <noun-group> | <noun-group> <preposition> <description>
<noun-group>  ::= (<article>) (<modifier-group>) <noun>
<preposition> ::= about | to | from | in | on | etc.
<article>     ::= a | an | the

(b) Sample descriptions:

The paper about microprogramming in the proceedings of the fall joint computer conference
Notes on the organization of AUTONOTE2 for use in the presentation of the ACM
The use of recall precision measures in the evaluation of the SMART information retrieval system
Quotes from Feldman's 1969 paper for use in the introduction of the second chapter

(Modifiers and nouns are arbitrary character strings not recognized as articles or prepositions. When a number of consecutive "words" are encountered, the last is parsed as a noun and the preceding words as modifiers. Possessive adjectives are treated as a special case of adjectival modification.)

In some instances, the user may even be asked for parsing assistance.

Central to the design of AUTONOTE2 is the idea of viewing the user's information universe as a collection of "informational objects" or topics, each having associated with it a number of text items. When the user wishes to describe a text item, we assume he has such a topic in mind. Using the description language he formulates a description of that topic and presents it to the system. AUTONOTE2 then constructs an internal representation of that topic. When a text item is described, the system must consult the representation to determine if the description (1) references an existing topic, (2) is related to an existing topic, or (3) defines a new topic. In any case, the ultimate goal is to associate the text item with a topic representation, possibly augmenting the representation in the process.

Efficiency of communication. Efficient man-machine communication implies that the user should not in general have to formulate a complete description of a particular topic in order to convey a reference to it. The system should be capable of accepting and correctly interpreting incomplete references by filling in missing information. As an example, a topic fully described as THE PAPER BY SALTON ABOUT THE SMART SYSTEM might be referred to as THE PAPER, THE PAPER BY SALTON, THE PAPER ABOUT THE SMART SYSTEM, and so on.

A description in the AUTONOTE2 language consists of a noun modified by adjectives and prepositional phrases. The words that modify any given term may themselves be modified in exactly the same way. In effect, each adjective and prepositional phrase functions as a phrase component that imparts greater detail to the overall description.
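Given the Fig. 2 grammar and its footnote convention (the last word of a run is the noun, the preceding words are modifiers), a naive parse is straightforward. The sketch below is not the AUTONOTE2 parser--which is heuristic-based and may query the user--just a minimal illustration of the grammar; the preposition list is abbreviated and assumed.

    ARTICLES = {"a", "an", "the"}
    PREPOSITIONS = {"about", "to", "from", "in", "on", "of", "by", "for"}

    def parse(description):
        """Split a description into noun groups linked by prepositions,
        returning (preposition, modifiers, noun) triples; the first group
        has no incoming preposition. Articles are dropped; the last word
        of each group is the noun, the rest are modifiers (per Fig. 2).
        A grammatical description per the FSM of section IV is assumed."""
        groups, prep, words = [], None, []
        for tok in description.lower().split():
            if tok in PREPOSITIONS:
                groups.append((prep, words[:-1], words[-1]))
                prep, words = tok, []
            elif tok not in ARTICLES:
                words.append(tok)
        groups.append((prep, words[:-1], words[-1]))
        return groups

    print(parse("THE PAPER BY SALTON ABOUT THE SMART SYSTEM"))
    # [(None, [], 'paper'), ('by', [], 'salton'), ('about', ['smart'], 'system')]

Note that the resulting triples already distinguish THE ORGANIZATION OF THE PAPER ABOUT MTS from THE PAPER ABOUT THE ORGANIZATION OF MTS, which is precisely the syntactic discrimination discussed next.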
In the example above, BY SALTON and ABOUT THE SYSTEM provide information about the paper; SMART specifies which system is meant. To facilitate efficient communication we require a representational framework that makes explicit the component phrases of each topic description. Given such a framework, we have a basis for comparing incomplete descriptions with the representation to determine possible topic referents.

A system that makes use of syntax in the user's descriptor entries increases descriptive power in that it permits distinctions that, in general, will not be made in keyword-based retrieval systems. A description such as THE ORGANIZATION OF THE PAPER ABOUT MTS is semantically quite different from THE PAPER ABOUT THE ORGANIZATION OF MTS, despite the fact that both contain the same words. A system that takes into consideration the syntactic relationships that hold among the words ORGANIZATION, PAPER, and MTS can discriminate between the two.

The considerations outlined thus far lead quite naturally to some form of dependency representation for the user's topics. Essentially, a dependency representation for the AUTONOTE2 language would reflect the syntactic dependence of each adjective and prepositional phrase upon an appropriate noun. Such a framework provides the essential information for enhancing descriptive power and communicative efficiency as defined above.

Hierarchical representations. We view a topic as a group of interconnected subtopics, each bearing on a central theme yet with varying levels of generality. To make this notion more concrete, consider a user of AUTONOTE2 putting down his thoughts and ideas for a book he is writing. He begins by entering some general material which he describes simply as "THE BOOK ABOUT...." At some later time he may enter an outline for the book, a list of reference materials he will use, publishing arrangements, etc. Still later, he will enter materials for the chapters of his book and perhaps outlines for each chapter. In time he will have defined a host of related descriptions. Fig. 3 gives a pictorial representation of the resultant complex "topic."

The representational scheme of AUTONOTE2 was designed with complex hierarchies such as this one in mind. In other words, we want to represent related topic descriptions via interconnections in a network. The essential idea is that such a network corresponds to a map of the organization of the associated textual materials--a map that should reflect important structural relationships among the materials from the user's viewpoint. A hierarchical representation of this kind is especially effective during retrieval. If the user requests materials dealing with his book, for example, the system can also inform him that he has more specific items dealing with the publishing arrangements, the component chapters, and so on.

The notion of a representational network fits well with the dependency framework we require. The syntactic dependencies among the words and phrases of a description may be used to represent structural relationships among the user's topics.

[Fig. 3 -- A Topic Hierarchy]

In the example above, the network connection between the "outline" and the "book" corresponds to the syntactic dependency of "book" upon "outline" in the description THE OUTLINE OF THE BOOK ABOUT....

Augmentation of the representation. In the previous discussion of communicative efficiency we were concerned with associating an incomplete description with its corresponding topic.
In designing the representational framework we also had to consider the case in which a reference provides a more detailed description of an existing topic. In such instances we want to enrich the topic representation to include the additional information. Whether additional descriptive information is encountered in a subsequent item description or in a retrieval request, we want the system to incorporate it into its existing knowledge of the user's topics. This requires that the representation be structured in such a way that dynamic augmentation is easily accomplished.

The representation of context. In providing a framework for interpreting terse, incomplete references we naturally are confronted with the problem of ambiguity. A description such as THE PAPER, or THE PAPER ABOUT MICROPROGRAMMING, may in fact satisfy a large number of distinctly described topics. To deal with this problem we require some kind of contextual framework that enables the system to infer, where possible, the intent of a vague or ambiguous reference. A user who has been entering material for a paper he is writing should be able to describe a subsequent item as, say, THE OUTLINE OF THE PAPER, and have the system infer which paper he means. In general then, we want the representational framework to include information that identifies the "working context," i.e., those topics the user has referred to recently.

System interrogation of the user. Presented with an ambiguous description "out of context," the system is faced with much the same dilemma a human listener would face. In such instances, we want the system to be capable of asking pertinent questions to resolve the ambiguous reference. This implies, of course, that the representation preserve sufficient information to enable it to reconstruct descriptions of the user's topics.

Overview of the AUTONOTE2 implementation. Data structures. We have now presented the major design requirements for the representational framework. These preliminary criteria suggest a representation organized as a network of (possibly interconnected) dependency structures obtained from syntactic analysis of topic descriptions. The network data structures are discussed in section IV in terms of the representational criteria and also the computational requirements--how they are to be accessed, modified, and so on.

The parser. ... The subordinate relationship of the node B phrases to the node A phrase, and in turn, that of the node C phrase to the node B phrases, is reflected by downward branches connecting those nodes. The resultant tree structure defines the representation of its corresponding topic. Representations of each of the user's topics are organized into a [network]. ... Should the user later describe a new text item as, say, OUTLINE OF THE PAPER or OUTLINE OF THE PAPER ABOUT AUTONOTE2, the system will note that it already has a representation for the topic.

In designing special purpose list structures for the representational network, we first specified the logical components of the structure and defined the interconnections among these primitives. Three logical components were formulated--simple phrases, nodes, and words. The following subsections present the major design considerations for each structural component.

Simple phrases. Given our goal of communicative efficiency, we chose the simple phrase as a primary unit for the network.
By analyzing a topic description into simple phrases we are in effect isolating possible "shorthand" references to the given topic. The representational data structures have been designed to allow a topic to be referenced through any of its component simple phrases. Simple phrases are formed from either adjectival or prepositional modification of a noun. Very often, an adjectival modification can be equivalently expressed by a prepositional phrase dependent upon the same noun ...

[A node that repre]sents a particular paper may be linked downward to another that describes a conference at which the paper was presented; it may also be linked to several higher order nodes corresponding to, say, a summary, an outline, and a review of the paper. As more and more items are described ... node due to the syntactic dependency of "conference" upon "paper" in a phrase of the form, PAPER AT THE...CONFERENCE.

... efficiency. Since we anticipate that users will make frequent use of single word references when working in the context of a particular topic, we want to provide a natural and convenient treatment of such descriptions. Finally, a phrasal description can convey a higher order categorization of an existing topic without containing a simple phrase for that topic. For example, THE ...

These considerations lead us to the third logical component of the network data structures, the single word. Essentially, each component word provides access to a series of pointers to simple phrases in which the word occurs. Word-to-phrase pointers are of two types: those indicating usage as subject noun, and those indicating modifier usage in a particular simple phrase. As we shall see later, this distinction is required in order to relate new simple phrases to existing topics at an appropriate node level.

Having specified the three logical components and the linkages in the representational network, we now turn our attention to the storage implementation of these structures. There are three directories needed to maintain the representational network, one for each of the components of the structure. All directory information must, of course, be saved in permanent storage between AUTONOTE2 sessions. Two design alternatives were considered for maintaining the network during execution of the program. The directories could be accessed and updated on disk, or they could be brought into core storage for the duration of the session. We adopted the former strategy for a number of reasons. First, AUTONOTE is highly oriented toward the use of disk file storage. Several file interface routines were available at the outset for conveniently storing and accessing information through the MTS file system. Second, as the network grows in complexity, it becomes increasingly unlikely that the user will reference the major portion of the network during any given session. By maintaining the network in disk files, the amount of core storage required is substantially reduced. Finally, the file approach greatly simplified the programming effort, especially in those system components that operate recursively on the list structured network. We will elaborate on this point further in section VI, which illustrates the simplification of recursive processes in AUTONOTE2.
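The three logical components just enumerated--simple phrases, nodes, and words--lend themselves to a compact sketch. Below is a hypothetical Python rendering; the original kept these as three directories in MTS disk files, and the field names here are assumptions, but the linkages (phrases pointing to nodes; words pointing to phrases with subject-noun versus modifier usage distinguished) follow the text.

    from dataclasses import dataclass, field

    @dataclass
    class SimplePhrase:          # e.g. PLANNED/PAPER, or (ABOUT) AUTONOTE/PAPER
        modifier: str
        noun: str
        nodes: list = field(default_factory=list)    # topic nodes it describes

    @dataclass
    class Node:                  # one topic in the representational network
        phrases: list = field(default_factory=list)  # component simple phrases
        items: list = field(default_factory=list)    # text item references
        up: list = field(default_factory=list)       # more specific topics
        down: list = field(default_factory=list)     # more general topics
        context: int = 0                             # access recency number

    @dataclass
    class Word:                  # the third component: the single word
        as_subject: list = field(default_factory=list)   # phrases in which it
                                                         # is the subject noun
        as_modifier: list = field(default_factory=list)  # phrases in which it
                                                         # modifies the noun

    # three directories, one per component (disk-resident in the original)
    phrase_dir, node_dir, word_dir = {}, {}, {}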
[The user describes Item] 270 as SMITH'S PAPER, and enters a summary of that paper into Item 312. A pictorial representation of the resultant portion of the network is given in Fig. 9, while the corresponding directory contents appear in ...

In such instances we rely upon the user to supply the referent noun upon request. In the example above, the system may prompt: DOES "ABOUT GENETICS" REFER TO PAPER OR CONFERENCE? Should the user reply CONFERENCE, the simple phrase CONFERENCE ON GENETICS will be added to the network. If at some later time the parser is attempting to find a referent for the prepositional phrase ON GENETICS where CONFERENCE is one of the alternatives, it forms that simple phrase directly.

Consecutive modifiers. A parallel problem arises in determining the noun referents for a string of consecutive modifiers. Descriptions containing at most a single adjective for any particular noun are parsed in the obvious manner: a simple phrase is formed from each modifier and the noun following it. In the event a noun is preceded by two or more modifiers, the parser is confronted with a task similar to that of determining the referent of a prepositional phrase. The possessive heuristic can be fully stated as follows: a possessive occurring in a string of modifiers will be assumed to modify the head noun unless another possessive occurs between it and the head noun; in the latter case, the first possessive will be assumed to modify the second. This is similar to the possessive feature employed by the REL parser (Dostert & Thompson, 1971). Thus in SMITH'S RESEARCH GROUP'S MEMORY EXPERIMENT, SMITH'S is assumed to modify GROUP, and GROUP'S is assumed to modify the head noun EXPERIMENT. The question now arises, why check the phrase directory first instead of applying the possessive heuristic immediately? To answer this, suppose a topic was originally described as THE RESULTS OF THE MEMORY EXPERIMENT BY SMITH and ...

... the machine has just encountered an article and is anticipating a "word." If the machine is in state S0 upon completion, it has just recognized a preposition and is expecting an object; thus, the string is rejected. The state S4 is reached whenever a possessive is encountered. Since a possessive must have an object noun, a "word" input is required to reach state S1. State S3 is a trapping state; once entered, the machine remains in that state regardless of the remaining input, and the description is consequently rejected. State S3 corresponds to various error conditions--two consecutive prepositions or articles, an article between two words, a phrase beginning with a preposition, etc. The state transitions for the description BRUNER'S FIRST EXPERIMENT ON THE CONSERVATION OF LIQUIDS are given below along with the resultant word list. ...

... (1) autonote/paper, (2) planned/paper, (3) conference/paper ... previously and accepts that candidate. ACM CONFERENCE is added to the phrase table and the parsing is complete (Fig. 14).

Node 2. ORGANIZATION OF THE PLANNED PAPER ABOUT AUTONOTE FOR THE ACM CONFERENCE.
Node 3. THE ACM CONFERENCE.
Node 4. AN ABSTRACT OF THE FIRST PAPER ABOUT AUTONOTE.
Node 5. THE FIRST PAPER ABOUT AUTONOTE.
Node 6. THE REVIEWER'S COMMENTS ON THE PLANNED PAPER.

Context in the AUTONOTE2 system takes the form of an access recency number (context number).
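The acceptance machine described above (states S0 through S4) can be rendered concretely. The following is a minimal sketch in Python; the text specifies only the roles of S0 (preposition just seen, object expected), S1 (the state reached by a "word"), S3 (the error trap), and S4 (possessive just seen), so the full transition table--including the start state and an after-article state S2--is an assumption reconstructed from the error conditions listed.

    # Hypothetical reconstruction of the description-acceptance FSM.
    # Inputs are token kinds: "word", "article", "preposition", "possessive".
    ACCEPTING = {"S1"}                   # a description must end on a word
    TRANS = {
        ("START", "word"): "S1",   ("START", "article"): "S2",
        ("START", "possessive"): "S4", ("START", "preposition"): "S3",
        ("S1", "word"): "S1",      ("S1", "preposition"): "S0",
        ("S1", "article"): "S3",   ("S1", "possessive"): "S4",
        ("S0", "word"): "S1",      ("S0", "article"): "S2",
        ("S0", "possessive"): "S4", ("S0", "preposition"): "S3",
        ("S2", "word"): "S1",      ("S2", "article"): "S3",
        ("S2", "preposition"): "S3", ("S2", "possessive"): "S4",
        ("S4", "word"): "S1",      ("S4", "article"): "S3",
        ("S4", "preposition"): "S3", ("S4", "possessive"): "S4",
    }

    def accepts(kinds):
        state = "START"
        for k in kinds:
            if state == "S3":            # trapping state: reject outright
                return False
            state = TRANS[(state, k)]
        return state in ACCEPTING

    # BRUNER'S FIRST EXPERIMENT ON THE CONSERVATION OF LIQUIDS
    print(accepts(["possessive", "word", "word", "preposition", "article",
                   "word", "preposition", "word"]))    # True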
Each time the user refers to some topic in the network, [its context number is updated]. ... where PAPER is the subject noun. Using the resultant list of simple phrases, a list of nodes directly described by these phrases is generated. In this case, this process generates two alternatives (node 5 and node 1). The system then functions as before, either choosing a node in context or interrogating the user.

The foregoing discussion has described our approach to network location. We now give a more detailed presentation of the algorithm. Before describing the matching process, let us first consider a few special cases. Suppose, for instance, that the focus phrase directly describes only one topic node and that any additional active phrases are also present in that topic representation. The presence (or absence) of nonactive phrases in the description is, in this case, an important parameter. Any non-active phrases may serve to distinguish the description from the existing topic. On the other hand, they could very well represent additional description of the topic at hand. If the topic under consideration is recent, we first assume the latter case. In addition, when processing descriptions rendered for retrieval, the network locator naturally rules out the possibility that a new topic is being described and accepts the one at hand.

The matching process. When the focus phrase directly describes two or more nodes, a network matching procedure is used to determine which of the associated topics the description references. The simplified flow diagram appearing in Fig. 18 summarizes the decision procedure for the case in which the description contains one or more active [phrases].

[Figure fragment: the description "The organization of the paper" shown against network nodes ORGANIZATION (of paper), SUMMARY, PAPER (about AUTONOTE), and PAPER (for conference).]

The previous sections dealt primarily with the process of item description, that is, the process of constructing a representation from descriptions of the user's textual materials. This section discusses the AUTONOTE2 procedures that retrieve information through the representational network. Many of the procedures described earlier for item description and representation are used in retrieval. The user initiates retrieval by giving a FIND command, supplying a description as argument. Retrieval descriptions are first passed to the parser, and are therefore subject to exactly the same constraints as item descriptions. If the description is acceptable, the resultant phrase table is passed along to the network locator, which ultimately returns a node number to the FIND processor. The FIND processor constructs a set of item numbers by extracting the textual references from the node returned by the network locator. The system then checks for upward pointers from the node, to more specifically described materials. If there are structurally related topics, the FIND processor so informs the user and asks if he would like to explore further. If so, the user is presented with descriptions of the higher order alternatives. Using the network depicted in Fig. 16, for example, consider the retrieval request FIND THE PLANNED PAPER ABOUT AUTONOTE.
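Piecing together the behavior just described, the network-location step might look as follows. This is a sketch under assumed data shapes (dictionaries keyed by phrase and node) and an assumed recency window; the text establishes that "in context" rests on the access-recency number but does not give the exact test.

    RECENCY_WINDOW = 5     # assumed: how recently a node must have been
                           # referenced to count as "in context"

    def locate(focus, active, phrase_dir, node_dir, clock, ask_user):
        # nodes directly described by the focus phrase
        candidates = list(phrase_dir[focus])
        if len(candidates) > 1 and active:
            # keep nodes whose representation contains every active phrase
            narrowed = [n for n in candidates
                        if all(p in node_dir[n]["phrases"] for p in active)]
            candidates = narrowed or candidates
        if len(candidates) == 1:
            return candidates[0]
        recent = max(candidates, key=lambda n: node_dir[n]["context"])
        if clock - node_dir[recent]["context"] <= RECENCY_WINDOW:
            return recent                    # choose the node in context
        return ask_user(candidates)          # otherwise interrogate the user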
The network locator would determine that node 1 is the desired referent and return that fact to the FIND processor. After storing away the item references of node 1, the system would ask: ...

... pointers. If a node is reached with multiple upward paths, the system stops and queries the user. For example, if a user has entered only an outline and some bibliographic references for a paper he is writing, then a retrieval description that maps onto the "paper" node would elicit a query such as: ...

... [des]cription for the DESCRIBE command. If it is unable to discern a unique node using the matching procedure and context, a list of the alternatives is returned for subsequent display.

The FULLY modifier. The user may request the display of a host of related topics by employing the FULLY modifier. Specifically, the user types DESCRIBE FULLY, followed by any of the argument forms discussed above. As before, this generates a node or set of nodes. When describing FULLY, each node is in turn expanded into a set of structurally related nodes also having associated textual references. As an example, consider again the network in Fig. 16. The user types DESCRIBE FULLY, THE PAPER ABOUT AUTONOTE. Assuming no choice is possible in context, the description is ambiguous, and the network locator returns nodes 1 and 5. ...

... a list (called the modification chain) of prepositional modifications of the subject noun. For example, the subjects table entry for PAPER may have an adjective list containing PLANNED, and a modification chain consisting of (ABOUT) AUTONOTE and (FOR) CONFERENCE. Both of the lists are chained through the table of modifiers. Note that some words will appear in both the subject and modifier tables. For example, PAPER may be in the modifier table as part of the modification chain of the word ORGANIZATION, and also in the subjects table with a modification chain of its own. ... The second stage is carried out by a recursive algorithm that operates on ... The first "pop" restores the PAPER modification chain. Since there is ...

The resultant processor includes procedures for removing or adding item references to a topic, deleting topics, adding or removing simple phrases from the description of a topic, etc. Rather than require the user to identify the particular topic to be altered each time a modification is to be performed, primitives are implemented as local commands to a generalized modification processor. The modification processor is invoked by issuing a CHANGE command which accepts a phrasal description as its argument. A node in the network is established as the current identified topic. The processor then prompts the user for modification instructions. After all modifications are completed, the user types DONE and control is returned to the regular command monitor. The CHANGE command also may be issued while in modification mode, thereby changing the current topic. Each of the local commands is discussed separately below, using the hypothetical representation depicted in Fig. 21 for illustration. The ADD command associates additional text references with the current topic, and adds simple phrases to the topic's description.
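The upward exploration driven by FIND and DESCRIBE FULLY can be sketched with an explicit stack, echoing the stack-based treatment of recursion the text adopts for the file-resident network. All names here are illustrative assumptions.

    def explore_upward(start, up, items_of, describe_node, ask_choice):
        """Collect item references at `start`, then follow upward pointers
        to more specifically described topics, letting the user choose
        among subtopics whenever a node branches."""
        found, stack, seen = [], [start], {start}
        while stack:
            node = stack.pop()
            found.extend(items_of(node))
            branches = [n for n in up(node) if n not in seen]
            if len(branches) > 1:            # several subtopics: ask the user
                branches = ask_choice([describe_node(n) for n in branches],
                                      branches)
            for n in branches:
                seen.add(n)
                stack.append(n)
        return found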
To add item references, the user types ADD ITEM[S] followed by a list of item numbers. This procedure is quite useful if the user has a large set of items that pertain to a particular topic. He simply identifies the topic and adds the list of references. ...

Note that if the paper node were deleted and the two higher order nodes were not, the higher order nodes would no longer be structurally related. In addition, their descriptions will still contain the word "paper," but which paper is no longer specified. For these reasons, we con[sider] ... In the instances we have examined, the distinction between the two cases seems to be that "unimportant" nodes have neither textual references nor up[ward pointers] ...

The down stack is then popped and node 4 is established as the next node to be examined. After setting a flag indicating that we have just moved down a level in the network, the algorithm recurses on node 4. The system detects three upward pointers (to nodes 2, 5, and 6). It should now be apparent why we save the fact that node 4 was reached by moving down from node 2. When placing a node's upward pointers on the up stack, the node that led down to the current node must be ignored.

[Stack snapshot: UP STACK (1,2), (3,2); DOWN STACK (4,2); DELETIONS LIST 2.]

Upon noting that node 4 has upward pointers in addition to node 2, the system checks to see if it has just moved down. In this case it has; consequently, node 4 is deemed "important" and the system asks DO YOU WANT THE ACM CONFERENCE DELETED? Assume the reply is NO. Since the ACM CONFERENCE node will remain, the system records that the linkage between nodes 2 and 4 must be [severed]. Carrying out the deletion involves several steps. First, any linkages between those nodes that are to be deleted and those that will remain are severed. ...

... deemed relevant to a query; recall is the ratio of relevant documents retrieved to the total relevant in the data base. ...

... indexing by topic. To achieve a direct comparison, protocols of both types of indexing activity with a common data base are required. The original AUTONOTE system was employed in a study (Sauvain, 1970) aimed at uncovering structural communication problems within a keyword-based system. The resulting data base is related primarily to Sauvain's dissertation research. It includes reading notes, bibliographic references, research ideas, expository material, and so on. The collection brings together a broad range of topics and ideas touching upon various aspects of computer science, information retrieval, man-machine interaction, and psychology. Copies of the item texts, the originally assigned keywords, and protocols of Sauvain's activities during data base indexing, organization, and retrieval were acquired. We then proceeded to re-index the collection with AUTONOTE2 topic descriptions. Each of the roughly 400 items in the data base was viewed and described in a sequential fashion; that is, there was no look-ahead or preplanning of topic phrasings to facilitate network structuring. Protocols were collected of all interaction with the system and the state of the network was recorded at periodic intervals.
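For reference, the recall measure quoted in the passage above (the companion precision definition is truncated in the source) corresponds to the standard formulas, stated compactly:

    # Standard retrieval-effectiveness measures; recall matches the text's
    # definition, and precision is the usual companion measure.
    def precision_recall(retrieved, relevant):
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    p, r = precision_recall({1, 2, 3, 4}, {2, 4, 6})
    print(p, r)   # 0.5 0.666...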
null
null
null
null
{ "paperhash": [ "henisz-dostert|how_features_resolve_syntactic_ambiguity", "reitman|autonote:_a_personal_information_storage_and_retrieval_system" ], "title": [ "How features resolve syntactic ambiguity", "AUTONOTE: A personal information storage and retrieval system" ], "abstract": [ "Ambiguity is a pervasive and important aspect of natural language. Ambiguities, which are disambiguated by context, contribute powerfully to the expressiveness of natural language as compared to formal languages. In computational systems using natural language, problems of properly controlling ambiguity are particularly large, partially because of the necessity to circumvent parsings due to multiple orderings in the application of rules.Features, that is, subcategorizations of parts-of-speech, constitute an effective means for controlling syntactic ambiguity through ordering the hierarchical organization of syntactic constituents. This is the solution adopted for controlling ambiguity in REL English, which is part of the REL (Rapidly Extensible Language) System. REL is a total software system for facilitating man/machine communications. The efficiency of processing natural language in REL English is achieved both by the detailed syntactic aspects which are incorporated into the REL English grammar, and by means of the particular implementation for processing features in the parsing algorithm.", "This paper describes AUTONOTE, a personal storage and retrieval system designed for use by individuals working with large bodies of information. The user may enter a variety of textual materials and assign descriptors and phrases by which these materials may be retrieved. He has available mechanisms for deleting, replacing, linking, and hierarchically organizing text items. The system is operating on-line in a time-sharing environment, and can be utilized from a variety of terminals. Both use and implementation are discussed in detail, with special attention to utilization of AUTONOTE through an alphanumeric CRT display. Also mentioned are potential artificial intelligence extensions and the use of the system in a study of scientific problem solving." ], "authors": [ { "name": [ "Bozena Henisz-Dostert", "F. B. Thompson" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] }, { "name": [ "W. Reitman", "R. Roberts", "R. W. Sauvain", "D. D. Wheeler", "William E. Linn" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null }, { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null, null ], "s2_corpus_id": [ "6574474", "14924698" ], "intents": [ [], [] ], "isInfluential": [ false, false ] }
null
593
0
null
null
null
null
null
null
null
null
382275589b87306e54e8f94c1357911282026005
219306857
null
Review: \textit{ {W}ord {O}rder and {W}ord {O}rder {C}hange}, by {C}harles {N}. {L}i, Editor
... sign language, languages of the Niger-Congo group, Chinese, Indo-European, drift, discourse grammar, metatheory, the evaluation metric, and, of course, language typology. Obviously, their common purpose is to move toward a clearer explanation of the causal relationships between the surface constituents of a sentence, both synchronically and diachronically.
{ "name": [ "Dunn, James M." ], "affiliation": [ null ] }
null
null
null
1975-09-01
0
0
null
null
null
null
But many of the papers actually share more than the common denominator of interest in ... [T]here may be instances when the preposed form [is found] in which there is nothing to show that it was ever otherwise (38). The author proceeds next to considering the factors invol[ved] ... 'Serial verbs and syntactic change: Niger-Congo', by Talmy Givón. ... Givón argues that a shift from serialization must be gradu[al] ... (307-33). On the explanat[ion] ... the participants in the Santa Barbara conference. Greenberg's work, they write, indicates two important modes of investigation (Lehmann and Malkiel 1968: 138): "We are encouraged by Greenberg's use of quantitative methods and his ability to isolate significant trends in structure." Austin: University of Texas Press.
null
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
593
0
null
null
null
null
null
null
null
null
15593cbf354a496706f0cff0aabf55e74f411ea7
56548665
null
{J}unction {G}rammar as a Base for Natural Language Processing
Melby, Alan. Forming and Testing Syntactic Transfers. Brigham Young University, M.A. Thesis, 1974. Lytle, Eldon. ... Brigham Young University, 1973.
{ "name": [ "Lytel, Eldon G. and", "Packard, Dennis and", "Gibb, Daryl and", "Melby, Alan K. and", "Billings, Jr., Floyd H." ], "affiliation": [ null, null, null, null, null ] }
null
null
null
1975-09-01
8
5
null
null
null
null
Dennis Packard: Assuming that semantics must be taken into account in a trans[lation system] ... Given the English surface string, "He needs it", English analysis would produce the junction tree: [junction tree omitted].

Consider the following sentence: (1) Every monkey that swallows that gets indigestion. Utilizing a modification junction [we obtain the marker sketched in the source: "Every monkey ... gets indigestion" with the subordinate clause "swallows that"]. The problem with the above junction marker is that it is not clear which that is the relative pronoun. Following Montague (see also Partee [3] and Gabbay ...) ... The above tree would then be lexicalized as sentence (1). If we had subscripted just the other that, then the marker would be lexicalized not as sentence (1) but rather as: (1A) Every monkey that that swallows gets indigestion. A readable way of handling such clauses is the method used in Junction Grammar: the proform with respect to which the modi[fication is made] ...

[P-rule schemata garbled in source.] ... treatment of relative clauses with its treatment of noun complements. These rules propose to account for restrictive relative clauses and noun complements, respectively, both embedding an entire sentence to a nominal antecedent, encompassing both constituents with brackets labelled NP. It seems clear that the two structures in question are in fact related, i.e. in some sense similar, but not in the way suggested by the P-rules proposed to generate them. ... clause: In the relative clause, an NP of the main clause coincides referentially with an NP of the dependent clause; in the complement, a noun, or potentially a noun phrase, of the main clause is equated referentially with the entire dependent clause. Thus sentence (2) is ambiguous over the relative clause and the complement readings. (Notice that one can replace that with which for the relative clause reading but not for the complement reading.) There is nothing in the P-rule formulation to make this overlapping of constituents explicit, however. Hence, a mechanism for checking the coreference of NPs is required so that the relative clause transformation could apply to sentences embedded by S and produce the appropriate relative pronoun. It is not clear, however, whether this mechanism is supposed to establish a coreference relation between the head N and the complement S in ... Still more serious is what appears to be implied by the relative clause rule. Namely, the entire clause is bracketed with a nominal category (NP), suggesting that it, like the complement, is functioning in its entirety as a nominal constituent.
The structural symmetry of these two rules results in a false generalization (the illusion that both clauses were nominalized) while failing to make explicit the generality which actually exists and is semantically crucial (the referential overlap between constituents in the main and dependent clauses). What is needed are structural representations which reflect the overlapping of con[stituents] ...

The schematic expression for full subjunction is Z+X*Y. ... The Junction Grammar solution is as follows: [junction-tree sketches omitted: N * SV with relative marker (that); N * N; Adj * PV]. The following are representative members of this scheme: (Speaking Russian) ... The speaking of Russian is difficult. ... Children such that they hate candy are rare.

Basic relative modifiers are defined to be relative clauses of which the modifying node is of noun category, the relative marker for such clauses being in some cases null (e.g. the boy I saw was crying), or such words as which, who, and that. Non-basic relative modifiers are defined to be all others which entail interjunction. Some of these relatives have not in some cases been recognized in the literature as relative constructions at all. ... [From] John recommends Gillette, we cannot necessarily conclude that John is an expert who recommends Gillette (the reason being that he might be a blimp expert who just recommends Gillette products to his friends, while saying that he is an expert who recommends Gillette implies that he is an expert on the types of products that Gillette produces).

Further, if modifiers such as those in (9) and (10) are considered to be relative statements, it explains our intuition that obviously, surprisingly, etc., are usually modifiers at statement level. For example, a natural way, it would seem, to represent ...

... [multi]ple target languages with junction trees as the interlingua. This configuration, which has also been proposed by Kay [7], has some attractive features. [Junction trees and an analysis flowchart garbled in source: "Bill brought books" as N + (V + N); infix codes; segment and noun counters ("Seg = Seg + 1", "Nounc = Nounc + 1", "put pointer to noun in noun list (nounc)"); example nodes V(saw), V(loves), N(boy); N $ (P + N) for "in car"; a topic N.]

Combining the prepositional phrase and antecedent problems, we pro[pose] ... The program will ask what by the church modifies--we will answer "6" (car). Then it will ask "Does which refer to church?" This is really asking "Did the birds eat the bread on the church?" If we say "No" then it will ask, "Does which refer to car?" "Yes."
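To make the notation in these examples concrete, here is a minimal sketch of junction trees as binary structures over the junction operators that appear in the text ("+", "*", and the "$" seen in infix expressions such as (N + (V + N))). The tuple encoding and the rendering routine are assumptions for illustration, not the BYU representation.

    # Sketch: junction trees as binary nodes over assumed operators,
    # rendered in the paper's parenthesized infix notation.
    class J:
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right

    def infix(t):
        if isinstance(t, str):
            return t
        return "(%s %s %s)" % (infix(t.left), t.op, infix(t.right))

    # "Bill brought books":  N + (V + N)
    tree = J("+", "N(Bill)", J("+", "V(brought)", "N(books)"))
    print(infix(tree))   # (N(Bill) + (V(brought) + N(books)))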
The "C" will remove the "we" as object of the preposition and replace it with a noun "it". A new order rule is then invoked, pointing to the noun "it". The underlying structure is represented "after it that we ate dinner": [tree: PP = P(after) + N, with N * SV(that) over "it" and "we ate dinner"]; or, in infix: (N + (V + N) $ (P + N * (N + (V + N)))).

Numbered example: "Unfortunately, I plainly saw the little boy in the big red car that the girl loves." (words numbered 1-18). [Analysis fragments garbled in source, e.g. (A+E) $ N $ (P + (A+E) $ (A+E) $ N), with word 14 as topic.]

As a summary of this section, we will follow a sample sentence through the five phases of analysis. After the preparation phase, the transfer language interpre[ter] ...

With the preliminary example of Figure 2 in mind, let us now examine transfer language as a programming language. [If we] execute the statement JOIN =4 $ (P10267<12> + =5), we obtain the tree whose P node contains the semantic index 10267 and the semantic feature number 12. ... [For ex]ample, TRANSFER 12 WHILE C2 executes transfer number 12 repeatedly until some context test within transfer 12, or some other transfer called by it, sets condition variable 2 to false (i.e., to zero).

[Representative statement forms from the source:]
-LET =4 BE ...
-LET =9 BE H ...
-LET =11 BE ...
-LET C7 BE 4
-LET C7 BE =5
-LET C7 BE C2
-LET C7 BE NODCAT(=8)
-LET C7 BE P4
-LET FEATURES(=9) BE <-SINGULAR, +MASS>
-LET H BE =9
-JOIN =4 $ (P10267<12> + =5)
-UNJOIN =12
-REPLACE =10 WITH =2
-REPLACE =3 WITH @
-IF C9 IS TRUE THEN stmt
-IF C2 EQ 12 THEN DO stmt1 stmt2 ... END
-IF =14 IS A VERB THEN stmt ELSE stmt
-TRANSFER 11
-TRANSFER C2
-TRANSFER 12 WHILE C2
-SKIP
-HALT
-ON CONDITION(14) TRANSFER 2

Consider the three closely related sentences: (1) John gave him a book. (2) He was given a book by John. ...

[Transfer fragment from the source:]
LET =3 BE Y(X(S(L(=1))))
LET =4 BE Y(=2)
LET =5 BE Y(X(S(X(=2))))
*ALSO FIND THE SYNTACTIC SUBJECT
LET =6 BE Y(...

He was given a book by John. ... system (e.g. [11], [12], [13]). The reason for the difficulty is simply that a transfer phase is tightly interlaced with the analysis and synthesis phases and the theoretical base. For example, we do some adjustments in transfer which other systems neutralize in analysis, while some other systems consider aspects of word order and word choice in transfer which we handle in analysis and synthesis. The input to this program is a junction tree or J-tree. The programs ...
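Of the statement forms listed above, the control construct TRANSFER n WHILE Cm is the one whose semantics the text spells out. A toy sketch of that behavior follows; the storage model and dispatch are assumptions, not the actual IBM 360 implementation.

    # Toy interpreter sketch for TRANSFER n WHILE Cm (assumed semantics).
    cond = {}            # condition variables C1, C2, ...
    transfers = {}       # transfer number -> list of executable statements

    def run_transfer(n):
        for stmt in transfers.get(n, []):
            stmt()                         # a statement may set cond[...]

    def transfer_while(n, m):
        cond.setdefault(m, 1)              # true until some test clears it
        while cond.get(m):                 # repeat until Cm is false (zero)
            run_transfer(n)

    # e.g. TRANSFER 12 WHILE C2, where transfer 12 eventually clears C2:
    def clear_c2():
        cond[2] = 0
    transfers[12] = [clear_c2]
    transfer_while(12, 2)
    print(cond[2])                         # 0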
There are: (1) left-right and right-left ordering of operands, and (2) [continuous and discontinuous ordering]. The sentences "It surprised me that he came" and "It is so big that I can't lift it" have discontinuous order. With continuous ordering they would read "It that he came surprised me" and "It was so that I can't lift it big."

The synthesis program handles discontinuous ordering by redirecting the flow of processing in such a way that the discontinuous element is omitted at its normal position and processed instead at a predetermined insertion point. Figure 2 shows the synthesis mainline with the reset routines and their skip points. Thus, the synthesis program interprets a J-tree to generate an output string. When this output string has been adjusted by the graphological and phonological rules, which shape the final form of the output, the target language text appears.

But the conclusion we have repeatedly been forced to is that while surface phenomena are vastly disparate from language to language, junction phenomena are more alike than different. While we use the familiar ANALYSIS/TRANSFER/SYNTHESIS scheme as a general framework for our translation system, the design of these components is rigidly governed by the junction grammar model, with junction trees serving as the interlingua. Analysis, operating in an interactive mode, [...]. Our development group is currently engaged in developing English analysis and transfer-synthesis for translation into Spanish, French, and German, using the University's IBM 360/65.
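A minimal sketch of the discontinuous-ordering idea: the linearizer skips a constituent marked discontinuous at its normal position and emits it at a predetermined insertion point instead, so the underlying "It that he came surprised me" comes out as "It surprised me that he came". The tree encoding is invented for illustration and is not the paper's J-tree format; the insertion point is fixed at sentence end for simplicity.

    def linearize(node, deferred):
        """Flatten a tree of words and (subtree, discontinuous-flag) pairs."""
        words = []
        for child in node:
            if isinstance(child, str):
                words.append(child)
            else:
                subtree, discontinuous = child
                if discontinuous:
                    deferred.append(subtree)      # omit at normal position
                else:
                    words.extend(linearize(subtree, deferred))
        return words

    def synthesize(tree):
        """Generate the output string, re-inserting deferred elements."""
        deferred = []
        words = linearize(tree, deferred)
        for subtree in deferred:                  # predetermined insertion
            words.extend(linearize(subtree, []))  # point: sentence end
        return ' '.join(words)

    tree = ['It', (['that', 'he', 'came'], True), 'surprised', 'me']
    print(synthesize(tree))   # It surprised me that he came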
null
Main paper: introduction to junction grammar -- by Eldon Lytle: Dennis Packard. Assuming that semantics must be taken into account in a trans[lation system ...]. Given the English surface string "He needs it", English analysis would produce the junction tree: [diagram omitted].

Consider the following sentence: (1) Every monkey that swallows that gets indigestion. Utilizing a modification junction [we obtain a junction marker in which "Every monkey ... gets indigestion" is modified by "that swallows that"; diagram omitted]. The problem with the above junction marker is that it is not clear which that is the relative pronoun. Following Montague (see also Partee [3] and Gabbay [...]), the above tree would then be lexicalized as sentence (1). If we had subscripted just the other that, then the marker would be lexicalized not as sentence (1) but rather as: (1A) Every monkey that that swallows gets indigestion. A readable way of handling such clauses is the method used in Junction Grammar: the proform with respect to which the modi[fier ...].

[Garbled phrase-structure rules omitted.] [... compare the] treatment of relative clauses with its treatment of noun complements. These rules propose to account for restrictive relative clauses and noun complements, respectively, both embedding an entire sentence to a nominal antecedent, encompassing both constituents with brackets labelled NP. It seems clear that the two structures in question are in fact related, i.e. in some sense similar, but not in the way suggested by the P-rules proposed to generate them. [...] In the relative clause, an NP of the main clause coincides referentially with an NP of the dependent clause; in the complement, a noun, or potentially a noun phrase, of the main clause is equated referentially with the entire dependent clause. Thus sentence (2) is ambiguous over the relative clause and the complement readings. (Notice that one can replace that with which for the relative clause reading but not for the complement reading.) There is nothing in the P-rule formulation to make this overlapping of constituents explicit, however. Hence, a mechanism for checking the coreference of NPs is required so that the relative clause transformation could apply to sentences embedded by S and produce the appropriate relative pronoun. It is not clear, however, whether this mechanism is supposed to establish a coreference relation between the head N and the complement S in [...]. Still more serious is what appears to be implied by the relative clause rule.
Namely, the entire clause is bracketed with a nominal category (NP), suggesting that it, like the complement, is functioning in its entirety as a nominal constituent.
Appendix:
null
null
null
null
{ "paperhash": [ "lytle|a_grammar_of_subordinate_structures_in_english" ], "title": [ "A Grammar of Subordinate Structures in English" ], "abstract": [ "SENTENTIAL CONSTITUENTS That a sentence may function as an NP is evidenced by abstract sentential subjects and objects. Such constituents are NP's externally, but sentences internally: [ [S] ] : NP NP (1) It surprised me that you arrived so soon. The rule which accounts for this pattern is NP/S S, where the first S indicates the portion of the subordinate sentence incorporated into the superordinate NP, and the second S simply represents the subjoined sentence. In this case, of course, the entire sentence is incorporated into the NP. In English, the antecedent of S is generally pronounced as it, or -ing, if the option for the gerundive nominalization (factive) is selected. An illustrative phrase marker for sentences embedded in NP's is given in Figure 5.1. Abstract sentential constituents occur with antecedents other than NP. They may occur as adverbials, corresponding to AdvP/S S. Consider, for example, a sentence where an entire subjoined sentence functions as the manner adverbial; (2a) Dad fixed the door so that it wouldn't slam; as an adverb denoting cause; (2b) Dad oiled the hinges because the door was squeaking; 64 HETEROGENEOUS SUBJUNCTION" ], "authors": [ { "name": [ "E. Lytle" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "60335823" ], "intents": [ [] ], "isInfluential": [ false ] }
null
593
0.008432
null
null
null
null
null
null
null
null
461ee4c15a9e69fe0bb7ebb32808d861840f5d58
219306561
null
The {SQAP} Data Base for Natural Language Information
The Swedish Question Answering Project (SQAP) aims at handling many different kinds of facts, and not only facts in a small special application area. The SQAP data base consists of a network of nodes corresponding to objects, properties, and events in the real world. Deduction can be performed, and deduction rules can be input in natural language and stored in the data base. This report describes the data base, specially focusing on problems in its design, both problems which have been solved and problems which are not yet solved. Specially full treatment is given to the data base representation of natural language noun phrases, and to the representation of deduction rules in the data base in the form of data base "patterns".
{ "name": [ "Palme, Jacob" ], "affiliation": [ null ] }
null
null
null
1975-09-01
0
3
null
This paper describes the natural language data base structure used in the SQAP system (Swedish Question Answering Project). Much of that system is already working, but the paper does not only describe the solutions to solved problems. Difficulties and unsolved problems are also presented, since I feel this is important to further progress. [SQAP aims at a question] answering system capable of handling facts of many different kinds. The system should thus not be restricted to a small special application area.

There is an obvious need for computers with a capability to converse in natural human languages. Natural languages are more general-purpose than most artificial languages, which means that you can talk about a wider subject area if you use natural languages. Natural languages can be used by everyone without special training, so computers talking natural language can make more people able to use more different computer facilities. Finally, a rising part of computer usage in the future will be unintelligent processing of natural language texts, and such systems can be improved if the processing is not wholly unintelligent.

There are also wellknown difficulties with natural languages for computers. Natural language is closely connected to human knowledge. Therefore, natural language sentences can only be understood by a man or a computer with factual knowledge about the subject matter and with the ability to reason with those facts. To disambiguate such wellknown examples as "The pig was in the pen" (Bar-Hillel 1964) or "He went to the park with the girl" (Schank 1969) the computer must have an underlying knowledge about various kinds of "pens", about where "the girl" was previously and so on.

Also, the same thing can be said in many different ways, and a computer with natural language capabilities must be able to understand this, so that for example it can see the similarity between "Find the mean income of unmarried women with at least two children." and "Search through the personell file. For each individual who is a woman, who is not married, and who has a number of children greater than two, accumulate income to calculate the mean."

Therefore, a computer understanding natural language must have a data base with basic factual knowledge about the world in general or about the subject matter which the computer is [to discuss]. You should be able to use this data base to make deductions. The capability to do simple and natural deductions fast is more important than the capability to make very advanced and long-range deductions. Since the data base will be large, an important part of deduction will be the selection of the relevant facts and rules out of the large mass of facts not needed for one special deduction.

The data base can be more or less close to natural language. A data base close to natural language makes input translation easier, and also the loss of nuances during the input translation will be smaller.
But the data base must on the other hand have a logical structure which is suitable for deduction and fact searching.

One model of natural language knowledge is the following: The knowledge consists of "concepts" and of rules relating these concepts to each other. A typical concept might be "John", "All young men", "The event when John meets Mary in the park" or "The month of July, 1973". The concepts are related by rules, which can be very simple relations (like the relation between "All young men" and the property "young") or complex patterns of concepts (like the rule "If Mary is weak and tired, and she meets a strong brutal man, then she will be frightened."). These rules form a network linking all concepts together. This model of natural language is close to that often used by psychologists in trying to explain the working of the intelligence in the human mind.

The SQAP system uses a data base of that kind. The model may at first seem simple and straightforward. When you try to produce a working question-answering system, you will however find that there are many difficulties and complications with such a data base. This report presents the most important of the problems we have met, and in some cases also our solutions. I believe that other producers of natural language systems will sooner or later encounter the same problems, and they may then benefit from our experience as presented in this paper.

The idea is that the data base is organized into nodes, each node [representing a concept]. More complex rules or relations between concepts are represented by extra concepts. Thus there is a concept for the event "Mary lit the fire" and this concept is related to "Mary", "the fire" and "act of lighting" in a structure like that in figure 1. This structure has four concepts linked together by three "prepositional" relations: CASE, BY and OBJ. From now on, I will in this paper call such relations "short relations".

The data base is organized so that the deduction rules can follow the short relations in both directions, that is go from "Mary" to "Mary lit the fire" or from "Mary lit the fire" to "Mary".
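A minimal sketch, under invented names, of the node-and-short-relation network described above: every short relation is indexed from both ends, so that deduction can walk from "Mary" to the event "Mary lit the fire" as easily as the other way round. The particular assignment of CASE, BY and OBJ below follows the chaining rule quoted later in the text and is an assumption.

    from collections import defaultdict

    class Network:
        def __init__(self):
            self.forward = defaultdict(list)   # node -> [(relation, node)]
            self.backward = defaultdict(list)  # node -> [(relation, node)]

        def relate(self, source, relation, target):
            """Store one short relation, traversable in both directions."""
            self.forward[source].append((relation, target))
            self.backward[target].append((relation, source))

    net = Network()
    event = 'Mary lit the fire'
    net.relate(event, 'BY', 'Mary')            # the agent
    net.relate(event, 'OBJ', 'the fire')       # the object
    net.relate(event, 'CASE', 'LIGHT*P')       # the act of lighting

    print(net.forward[event])                  # from the event to its parts
    print(net.backward['Mary'])                # from Mary back to the event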
Sometimes the deduction requires a pattern of more than one node. Such patterns are ca[lled keys]. [...] For this merging we create temporary variables and keys during input translation. The sentence "The man always with the gun is in the forest" is thus translated into figure 9.

[Figure 9: "The man always with the gun is in the forest".]
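A minimal sketch of the merging step described above: during input translation a definite noun phrase becomes a temporary key, i.e. a set of defining relations; we search the data base for an existing node carrying all of them, merge with it if found, and otherwise create a fresh node. The representation and all names are invented for illustration.

    nodes = {
        'node1': {('PRED', 'MAN*P'), ('WITH', 'the gun')},
        'node2': {('PRED', 'MAN*P')},
    }

    def resolve_key(defining_relations):
        """Return a matching existing node, or create a new one."""
        for name, relations in nodes.items():
            if defining_relations <= relations:   # all key relations present
                return name                       # merge with this node
        name = f'node{len(nodes) + 1}'
        nodes[name] = set(defining_relations)     # no match: new constant
        return name

    key = {('PRED', 'MAN*P'), ('WITH', 'the gun')}  # "the man with the gun"
    print(resolve_key(key))                         # node1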
null
Noun phrases in natural language usually refer to one or a set of objects in the real world, like "Stockholm" or "Every house in Sweden" or "The nice man with a bicycle". In our system each such concept is represented by a node in the data base, which could be called an object node. Each object node is associated with one or more predicate nodes expressing properties of that object. In our data base, we mark predicates with the postfix "*P". Thus, the phrase "An always happy girl" would in our data base be represented like in figure 2.

A statement like "There is an always happy girl" or "One girl is always happy" would be represented in the same way, with an object node and two short relations on it. If we meet the natural language phrase "One girl is nice today", then we cannot represent it as simply. We have to affix a time to the relation between the girl and HAPPY*P. [The carrier] for the PRED short relation is a node of the type "event". "One girl is happy today" will thus be represented like in figure 3.

[Figure 3: "One girl is happy today".]

The advantage of having such an extra concept in the data base is that we can easily add more short relations to the event node "One girl is happy today", for example to represent "in the school" or "because of the weather" or "according to what Tom said". Since we want to deal with true statements, hypothetical statements and statements belonging to some person's belief structure, we always add a relation to an event node indicating which belief structure it belongs to; true events belong to the set "TRUE*S" of all true statements. Since the relation [to] "TRUE*S" is so common, we represent it in pictures with the earth sign of electric charts. Note that there is an earth sign on the true event, but no earth [sign on hypothetical events].

Thus we can represent "All nice girls" with a node representing the set of all nice girls. This means that we need quantifiers on the short relations, to be able to express relationships between sets. If there is a short relation R between two sets A and B, then the relation R might not be true between any member of A and any member of B. We have several cases. These and other cases are represented in our data base with three quantifiers ALL, SOME and ITS. The difference between SOME and ITS is shown by the difference between [...]. The difference between ITS and SOME can be understood if you look at the statement "Every man is in a car". This can mean that "Every man is inside one single car" or it can mean "For every man there is one car in which he is." The first phrase might in our data base be represented as "Every man" ALL IN SOME [car], while the second might be represented as "Every man" ALL IN ITS [car]. There are simple rules to manipulate the quantifiers when the [...].
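A minimal sketch, with invented field and relation names, of the event-node encoding of "One girl is happy today": the PRED link between the girl and HAPPY*P is carried by an event node, so further restrictions (time, place, belief structure) can attach to the same node. The quantifier tuples below illustrate the two readings of "Every man is in a car" from the text.

    event = {
        'CASE':   'HAPPY*P',   # the predicate
        'BY':     'girl-1',    # the object node for "one girl"
        'TIME':   'today',     # time restriction on the statement
        'BELIEF': 'TRUE*S',    # belief structure; relation name is invented,
    }                          # the text draws it as the earth sign

    # Quantifiers on short relations distinguish the two readings of
    # "Every man is in a car":
    reading_1 = ('every-man', 'ALL', 'IN', 'SOME', 'car')  # one single car
    reading_2 = ('every-man', 'ALL', 'IN', 'ITS', 'car')   # one car per man
    print(event, reading_1, reading_2, sep='\n')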
In the following, if no quantifier is marked on a short relation in a figure, then ALL is implicit.

5. Deduction in the data base

The data base does not contain all true statements explicitly; some of them have to be deduced when needed. Basically all deduction rules can be seen as pattern matching. You have a pattern saying for example that "If something hot is near something inflammable then the inflammable will catch fire". Then we have some actual situation, explicit or deduced, e.g. "The burning cigarette is thrown in the petrol tank", in our data base as in figure 6. Before using the deduction rule, we must match the pattern to the actual situation. The pattern can contain many interconnected nodes, and the reality may not at first resemble the pattern directly; deduction may be necessary to see the resemblance.

The simplest deduction rule possible is just a pattern of two short relations from which a third can be deduced: "If A R1 B and B R2 C then A R3 C". A simple example: "If A is subset of B, and B is subset of C, then A is subset of C". Since such rules link together nodes through a chain of short relations, they are called chaining rules. Some chaining rules require side relations on B to be fulfilled, for example "A BY B, and A CASE C implies B PRED C", but only if A is a true event.

[Figure: "Every intelligent man is a bad soldier", represented with a quantified EQUAL relation between "Every intelligent man" and "Every bad soldier".]

Variables have a new quantifier on them, DEF. This indicates that this short relation is part of the definition of that variable. The variable "Every intelligent man" above corresponds to the set of all objects which satisfy the definition. This means that as soon as we find an object in the data base for which we know or can deduce that it satisfies the definition, then we know that it belongs to the VARIABLE above, and we can thus deduce that it is a bad soldier.

Questions to a computer can require short or long answers. There are for example yes-no questions like "Is a man with a balloon coming?" which in our data base will be represented like in figure 10. Our system can answer the questions based on the facts. The input language to our system is not full natural english; the language is slightly simplified. The sentences in the examples are written in this simplified english: "If Eliza had been a girl with a fast car, then would she be loved by John?" "Mary is meeting John and he is driving her car." "Is the poor boy dangerous?"

11. The EQUAL relations

The EQUAL relation between two singular elements means that they are identical. However, since we can put quantifiers on the EQUAL relation, we can also use it for many set relationships. Some examples: ALL A EQUAL ALL B means that the sets A and B are equal and contain not more than one element each. [...] means that A and B are disjoint. SOME A EQUAL SOME A means that A is not empty. ALL A NOT EQUAL ALL A means that A is empty. SOME A EQUAL ALL A means that A is singular, that is contains exactly one member.
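A minimal sketch of the chaining rules described above: from "A R1 B" and "B R2 C" deduce "A R3 C", iterating until nothing new can be added. The facts and rule here are toy examples; the real system also checks side conditions on the middle node (e.g. that it is a true event), which this sketch omits.

    facts = {('a', 'SUBSET', 'b'), ('b', 'SUBSET', 'c')}
    chaining_rules = [('SUBSET', 'SUBSET', 'SUBSET')]   # (R1, R2) -> R3

    def chain(facts, rules):
        """Apply chaining rules until no new fact can be deduced."""
        deduced = set(facts)
        changed = True
        while changed:
            changed = False
            for (a, r1, b) in list(deduced):
                for (b2, r2, c) in list(deduced):
                    if b != b2:
                        continue
                    for (p1, p2, p3) in rules:
                        if (r1, r2) == (p1, p2) and (a, p3, c) not in deduced:
                            deduced.add((a, p3, c))
                            changed = True
        return deduced

    print(('a', 'SUBSET', 'c') in chain(facts, chaining_rules))   # True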
Natural language noun phrases are translated into nodes marked as singular in the data base, if: a) The noun phrase is not plural. b) The noun phrase is not translated into a VARIABLE in the data base. c) The noun phrase is interpreted in the special sense (like "A man is walking on the street") and not in the general sense (like "Every man is a male human").

Data base nodes are marked as non-empty if they are of the type predicates (like "the act of lighting" which we call LIGHT*P). This object-type node can be a CONSTANT, a DUMMY or a VARIABLE. For general-sense statements, the nouns are usually translated to VARIABLES. Example: "Every good girl will kiss every brave [...]".

Adjectives do not always indicate properties which are generally true for the noun phrase. They can mean many things. Examples: The good teacher (the teacher which is good as a teacher), The big ant (the ant which is big for an ant), The red house (the house which is red). Therefore, a weaker relation ATTR is used from a noun to its [adjectives ...] in the data base. But we cannot give the node itself the name "John" or "Cambridge", since there may be more than one "John".

In most cases, the same time-restriction applies to attributes as to the main verb in the sentence. If we say "A hungry girl ate a cold buffet in a sundrenched meadow on a warm summer day" then the time and space restrictions [apply to the attributes as well]. Prepositional attributes may also be situation restricted as for the sentence "An angry man with a gun is coming at ten o'clock", where the man at another time may not be "with a gun". This might be represented as shown in figure 11e.

[The system can deduce] that Eliza is awed if it knows that Eliza is english, is a spinster, and is coming into the church, all at the same time. But if Eliza was an english spinster five years ago, and comes into the church today, then we cannot make this deduction. This can be solved in two ways. Either the data base representation of "Every english spinster who comes into the church is awed" is changed into "If at a certain time, an english spinster comes into the church, then she is awed" or else the deduction rules are changed so that the time-limitations are implicitly carried along and combined during deduction.

There is [...] This can almost always be done, but not in some cases, for example if we say "John and Mary together are heavier than Peter." But in such cases there is usually some indication in natural language, like the word "together" indicating that the property of the composite cannot be transferred to its elements. Look again at the picture above showing the translation of "John and Mary are married." All the nodes with DEF on them above are DUMMIES. This means that we first search for a previously-mentioned node with a "NAME JOHN*P" relation on it. If one is found, "John" will merge with it, otherwise "John" will become a new constant and the DEF is changed to ALL.
The same thing is done for "Mary". Thereafter, when "John" and "Mary" have been found in the data [base ...].

14. Conjunctions between noun phrases in the general sense

In the previous chapter I pointed out the ambiguity between sentences like "John and Mary are married" and "John and Mary are human", where the first sentence says that the composite was married, while the second said that the elements individually were humans. I also said that such sentences could always be translated to composites, since properties of composites can in general be transferred by deduction to the elementary parts. This is not so easy in the general sense, see the following examples: "All men and women are getting married." "All men and women are happy." "Every man and woman standing together are a married couple." "All men and women are young people." Noun phrases in the general sense are transl[ated ...]. This means that the data base must be able to make deductions on numbers, e.g. to deduce that if a composite has the relation NUM 2, then the relation NOT NUM 1 can be deduced. This is [...].

The natural language phrase "The father and the mother is John and Mary" cannot be transl[ated ...]. This means that some simple sentences can be translated as relations between predicates. For example, the sentence "Every man is a human" can be translated like in figure 27, which is much simpler than the other translation, in figure 28.

[Figure: "John is riding the bike".] From a valid event node (that is, at the time and place of the event etc.) the deduction procedures can deduce e.g. a PRED relation from "John" to "RIDE*P", and these deduced relations are very useful in later deduction. There is also a symmetric relation OBJPRED from the object to the predicate. If we can deduce that some object has OBJPRED to a predicate like RIDE*P, then we can deduce that that object is being ridden, that is that the predicate RIDED*P (the passive of RIDE*P) is applicable to the object. We can therefore draw the following figure 31 of relations: "X BY Y & X CASE Z implies Y PRED Z"; "X PASS Y & Z PASSCASE [...] implies Y CASE X". Several more triangles in the figure form such chaining rules, although not all of them. (Even if John is riding and the bike is ridden, we cannot therefore conclude that John is riding just that bike.) All the chaining rules involving the event node are true only when that event node is true, or valid when the event node is valid.

If the data base contains a verb both in active and passive form, then there must be a relation PASS between them to permit deduction. Since passive forms are less common than active, this PASS relation is generated whenever a passive verb appears.
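A minimal sketch of the triangle of deductions around one valid event node: from event BY subject and event CASE predicate we deduce subject PRED predicate; from event OBJ object we deduce object OBJPRED predicate; and a PASS link between an active predicate and its passive form (RIDE*P and RIDED*P in the text's example) lets the passive be deduced for the object. The dictionary encoding and the BELIEF field name are assumptions, and the rule set is illustrative only.

    passive_of = {'RIDE*P': 'RIDED*P'}   # PASS relation, active -> passive

    def deduce(event):
        """Deduce PRED/OBJPRED facts from one true event node."""
        facts = set()
        if event.get('BELIEF') != 'TRUE*S':
            return facts                         # only valid events chain
        subject, obj, pred = event['BY'], event['OBJ'], event['CASE']
        facts.add((subject, 'PRED', pred))       # John is riding
        facts.add((obj, 'OBJPRED', pred))        # the bike is being ridden
        if pred in passive_of:
            facts.add((obj, 'PRED', passive_of[pred]))   # via the PASS link
        return facts

    event = {'BY': 'John', 'OBJ': 'the bike', 'CASE': 'RIDE*P',
             'BELIEF': 'TRUE*S'}
    for fact in sorted(deduce(event)):
        print(fact)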
One could argue that we could avoid passive verbs altogether in our data base by always using the CASE and OBJPRED relations. There are two arguments against this: Since all relations expand into a chaining rule with EQUAL, "X R Y & Y EQUAL Z implies X R Z", and since EQUAL can be expanded into an event node using BY and OBJCASE, this can be [...].

This sentence could be interpreted in the following way: a) It is [...] "There is a set of events, one for each man. One and the same girl is giving a different flower in each such event". In our data base, this is represented like this: [figure omitted]. [The difference] is that the event node is a constant in the singular sense, a variable in the distributed sense. [Figure: MAN*P (Variable), GIVE*P.] The translation rule is that if all the noun phrases marked with "a" or "some" are to be interpreted in the singular sense, then the event can become a constant. If, however, one of the noun phrases marked with "a" or ["some"] is to be inter[preted in the distributed sense ...].

[Figure 38: "If the weather is rainy and a person is outdoors and the person is not wearing any raincoat, then the person will become wet."]

A new quantifier "THAT" is introduced above. The reason for this is that if there are two different persons, one who is outdoors, and another who is not wearing a raincoat, then we do not want to conclude that any of them necessarily will become wet. We therefore have to single out in the data base one person and two events in which this person is the subject, [using the] quantifier THAT on the pattern end.

A natural language if-statement in a question is translated in a quite different way. The statement "If the weather is rainy and John is outdoors, will he then be wet?" is translated like this: "Add the temporary facts that the weather is rainy and that John is outdoors into the data base. Thereafter try to deduce if he will be wet. When the question has been answered, then remove the temporary facts from the data base again."

Executable programs in some special programming language are potentially a more powerful representation than ours. Heuristic rules guiding the order of the deduction search are easier to include into such a deduction rule. However, the power in an actual system is of course limited to the set of programs which the input translator can generate. Many of the programs will probably in reality not contain anything else than our chaining rules, variables and patterns, and such systems will also require some more or less hidden underlying network to select rules and facts of interest during a certain deduction process.

On the outermost surface level, we have until now only implemented yes-no questions in our system. Other kinds of questions can however appear as sub-questions during the deduction process. A question is in many ways similar to a natural language if-statement. In both cases, a pattern of variables is created, and we want to identify this pattern with the data base. The translation will therefore have to be something like in figure 41. [... whether the pattern] has a match in the data base, e.g. for a statement like "If a man is late, then the man behind him is even later." Here, there is no previously known man, and the second translation with the pattern key would not match "a man" in the if-statement at all.
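A minimal sketch of the treatment of if-statements in questions described above: the premises are added to the data base as temporary facts, the consequent is tried by deduction, and the temporary facts are removed again afterwards. The deduce function below is a hard-wired stand-in, not the real SQAP prover, and all fact names are invented.

    def answer_conditional(data_base, premises, consequent, deduce):
        """Add temporary facts, try the deduction, then remove them."""
        data_base |= premises
        try:
            return deduce(data_base, consequent)
        finally:
            data_base -= premises

    facts = {('John', 'PRED', 'PERSON*P')}
    premises = {('weather', 'PRED', 'RAINY*P'),
                ('John', 'PRED', 'OUTDOORS*P')}

    # Stand-in prover: one hard-wired rule, "rainy + outdoors implies wet".
    def deduce(db, goal):
        if goal == ('John', 'PRED', 'WET*P'):
            return {('weather', 'PRED', 'RAINY*P'),
                    ('John', 'PRED', 'OUTDOORS*P')} <= db
        return goal in db

    print(answer_conditional(facts, premises,
                             ('John', 'PRED', 'WET*P'), deduce))   # True
    print(('weather', 'PRED', 'RAINY*P') in facts)   # False: fact removed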
However, if solution a) is adopted, this text will not be treated correctly: "A man [...]". If no DUMMY pattern key is created, then "his" in the second sentence will identify with "brother" in the previous sentence. In general, only after doing the refer-back search in the data base will we know whether a DUMMY will match a VARIABLE or a CONSTANT.

If a DUMMY matches a VARIABLE, then that DUMMY may be adding definitions to that VARIABLE. Look for example at the sentence "If a lion meets an elephant, and if the lion sees the elephant, then...". Here, the DUMMIES in the second phrase will add to the pattern key being built up, and thus add to the definitions of the VARIABLES "a lion" and "an elephant". This means that there are two kinds of DEF-marked relations on DUMMIES. The first of them are those which are to be used during the refer-back search. And the second are those which are to be added to the VARIABLE, if the DUMMY matched a variable. In our system, we intend to distinguish between these by first giving the relations which are to be used in the refer-back search. Then the refer-back search is done, and thereafter the relations are given which add DEFs to the definition of the matched VARIABLE.

Another interesting case is where there are two DUMMIES, one dependent on the other, and one of them matches a VARIABLE. Look for example at the sentence "If a girl is in trouble, then her mother will be angry." Here, "her" becomes an independent DUMMY, while "her mother" becomes a dependent DUMMY. The "her" DUMMY will match the VARIABLE "a girl" in the if-clause. The DUMMY "her mother" will not find any match at all. And the interesting thing is that because the independent [DUMMY matched a VARIABLE ...].

Some of the things we are not ready with yet are other conjunctions than "and", relative pronouns, interrogative pronouns, negation, auxiliary verbs other than "be", comparative adjectives. We do not yet try to resolve ambiguity by reference to the data base.

The [...] is an addition which adds to the power of the representation. Special in our system may also be that one short relation can be extended when necessary into an event. This saves much memory compared to representations where the fullest form is always used, even though it is in most cases not needed. It is for example true that for a statement like that in figure 3, there may be doubt about only the BY relation, or only the [time] relation, or only the CASE relation. (We may be sure that "a girl is happy", but not so sure about the day, or we may be sure that there is happiness today, but not sure where.) A full representation would therefore require a place to insert doubt on any short relation, whether there is doubt or not, and this would double the data base size.
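A minimal sketch of the refer-back search described above: a DUMMY carries DEF-marked relations; we look for a previously mentioned node (CONSTANT or VARIABLE) satisfying them and merge, and only when the match is a VARIABLE do we add the remaining DEF relations to its definition. All structures and names are invented for illustration.

    known = {
        'lion-1': {'kind': 'VARIABLE', 'defs': {('PRED', 'LION*P')}},
        'john':   {'kind': 'CONSTANT', 'defs': {('NAME', 'JOHN*P')}},
    }

    def refer_back(search_defs, extra_defs):
        """Match a DUMMY against known nodes; extend a matched VARIABLE."""
        for name, node in known.items():
            if search_defs <= node['defs']:
                if node['kind'] == 'VARIABLE':
                    node['defs'] |= extra_defs   # add to the definition
                return name
        return None                              # no antecedent found

    # "the lion sees ..." refers back to "a lion" and extends its definition:
    match = refer_back({('PRED', 'LION*P')}, {('PRED', 'SEES*P')})
    print(match, known['lion-1']['defs'])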
In natural language, the prepositions are used to represent short simple and direct relations between concepts: "John is in the bed", "The fire was lit by Mary". In the data base, the idea of prepositions is extended so that all simple and direct relations between concepts are represented by implicit prepositions. (Just as you could say that there is an implicit preposition "by" in the phrase "Mary lit the fire".) [An event node can have] limitations in space, in time, in its truth value, and it can have a cause, a result etc.

The advantage with predicate calculus representation is however that the theory of decidability is much more fully developed for that representation than for ours.

[In such a match, the] two VARIABLES must pairwise match. It is not enough to find that the father is married, not even enough to find that he is married to one of John's daughters. He must be married to just that daughter whose children are all also his children.
The general rule is that when two conjuncted nouns have been translated into a composite object, then it [...].

[Figure: "The humidity on rainy days in the tropics is high".]
[Figure: "the parents of the three students" -- COME*P, PARENT*P, STUDENT*P, PRED, "the three students" (VARIABLE).]
Main paper: introduction: This paper describes t h e n a t u r a l language d a t a base s t r u c t u r e used i n t h e SQ,AP system we dish Question Answering system).Much o f t h a t system i s already working, but t h e paper does not only describe t h e s o l u t i o n s t o solved problems. D i f f i c u l t i e s and unsolved problems are a l s o presented, since I f e e l t h i s i s important t o f u r t h e r progress.answering system capable o f handllng f a c t s o f many d i f f e r e n t kinds. The system should thus not be r e s t r i c t e d t o a small s p e c i a l application area. . natural language r e p r e s e n t a t i o n: There i s an obvious need f o r computers with a c a p a b i l i t y t o converse i n n a t u r a l human languages. Natural lmguages are more general-purpose than most a r t i f i c i a l languages, which means t h a t you can t a l k about a wider subject a r e a i f you use n a t u r a l ldnguages. Natural languages can be used by everyone without s p e c i a l t r a f n i n g , s o computers t a l k i n g n a t u r a l language can make more people able t o use more d i f f e r e n t computer f a c i l i t i e s . F i n a l l y , a r i z i n g p a r t o f computer usage i n t h e f u t u r e will be unintelligent processing of n a t u r a l language t e x t s , and such systems can be improved i f the processing is not wholly u n i n t e l l i g e n t . There a r e a l s o wellknown d i f f i c u l t i e s with n a t u r a l languages f o r computers. Natural language i s c l o s e l y connected t o human knowledge. Therefore, n a t u r a l l a n e a g e sentences can only be understood by a m m o r a computer with f a c t u a l howledge about t h e subject matter and with t h e a b i l i t y t o reason w,ith those f a c t s . To disambiguate such wellknown examples as "The p i g was i n the pentf (~ar-ille el 1964) o r !!He went t o the park with t h e girlu (~c h a n k 1969) t h e computer must have an underlying knowledge about various kinds of "pens" , about where "the girlw was previously and s o On.Also, t h e same thing can be s a i d i n many d i f f e r e n t ways, and a computer with n a t u r a l language c a p a b i l i t i e s must be able t o understand t h i s , s o t h a t f o r example it can see t h e s i m i l a r i t y between "Find t h e mean income of unmarried women with a t l e a s t two bhildren.lf and llSearch through -bhe personell f i l e . For each individual who i s a w o m a n , who i s not married, and who has a number of children g r e a t e r -khan two, accumulate income t o calculate t h e mean.11 Theref ore, a computer undexst anding n a t u r a l language must have a d a t a base with basic f a c t u a l knowledge about the w o r l d i n general or abouf the subject matter which the compvter i s -You should be able t o use this d a t a base t o make deductions.The capability t o do simple and n a t u r a l deductions f a s t i s more important than the c a p a b i l i t y t o make very a d v~c e d and longdrange deductions. 
Since the d a t a base w i l l be l a r g e , an i m p o r t a n t p a r t o f deduction will be the s e l e c t i o n of t h e relevant f a c t s a~d rules out o f the l a r g e mass of facts not needed f o r one s p e c i a l deduction.The d a t a base can be .more o r l e s s close t o n a t u r a l language.A d a t a base c l o s e t o n a t u r a l language makes input t r a n s l a t i o n e a s i e r , and also the l o s s of nuances during the input t r a n s l a t i o n w i l l be smaller. But t h e d a t a base must on t h e other hand have a l o g i c a l s t r u c t u r e which i s s u i t a b l e f o r deduction knd fact searching.One model o f n a t u r a l language knowledge i s the following: The knowledge consists of "conceptst1 and of r u l e s r e l a t i n g these concepts t o each other. A t y p i c a l concept might be " John1' , T r A l l young menrf, ltThe event when John meets M a x y i n the pa.rkn o r "The month of July, 1973". The concepts are r e l a t e d by r u l e s , which can be very simple r e l a t i o n s ( l i k e t h e r e l a t i o n between "111 young ment1 and the property ffyoungn) or complex patterns of concepts (~i k e the r u l e "If Mary i s weak and t i r e d , and she meets a strong b r u t a l man, then she w i l l be frightened.") These rules form a network l i n k i n g all concepts together.This model o f n a t u r a l language i s close t o t h a t often used by psychologists i n t r y i n g t o explain the working o f t h e i n t e l l i g e n c e i n t h e human mind.fPhe SQAP system uses a data base of that kind. The model may at first seem simple and straightfbrwazd. When you try to produce a worldng question-a,nsw&ring system, you w i l l however find that there a r e many difficulties and complications with such a data base. Thia report presents the raBe,% wortant of the problems we have m e t , and in some cases a l s o our solutions. 1 believe that o%her producer8 of natural language system w i l l sooner o r later encounter the same problem, and they may then benefit from our experience as presented in t h i s paper. The idea is that the data base is organized i n t o nodes, each node Yore complex r u l e s o r r e l a t i o n s between concepts are represented by e x t r a concepts. Thus there i s a concept f o r t h e event ItMary l i t t h e f i r e 1 ' and t h i s concept i s r e l a t e d t o "Maryw, Itthe f l r e u and " a c t of lightingv i n a structure like that i n f i g u r e 1. This s t r u c t u r e has f o u r concepts l i n k e d t o g e t h e r by three "prepositiona,lIt relations : CASE, BY and OBJ. From now on, I will i n this paper c a l l such r e l a t i o n s ltshort r e l a t i o n s f 1 .The d a t a base i s organized so t h a t t h e deduction r u l e s can follow t h e s h o r t r e l a k i o n s i n both d i r e c t i o n s , that i s go from "M+ryI1 t o ItMary l i t the f i r e t 1 o r from "Ma,z?y l i t t h e f i r e f f t o "Mary". . objects, events m d prdicates: Noun phrases i n natural language u s u a l l y r e f e r t o one o r a s e t o f objects i n t h e r e a l world, l i k e f~Stockholmll o r " h r e r y house lin Swedent1 o r "The nice m a n with a bicycle1v. In our system each such concept i s represented by a node i n t h e d a t a b a s e , which could be c a l l e d an o b j e c t node.Each o b j e c t node i s a s s o c i a t e d w i t h one o r more p r e d i c a t e nodes e x p r e s s i n g p r o p e r t i e s of t h a t o b j e c t . 
I n our data base, w e mark p r e d i c a t e s w i t h t h e p o s t f i x ct*Pcr. Thus, t h e p h r a s e "An always happy g i r l " would i n o u r data base be represented l i k e in figure 2 :A statement l i k e "There is ad a l w a y s happy g i r l r f or "One g i r l 1 s always happy" w o u l d be represented i n t h e same way, with a n object node and t w o s h o r t r e l a t i o n 6 on ~t If w e meet t h e n a t u r a l language p h r a s e "One g i r l i s n i c e todaytT, t h e n we cannot r e p r e s e n t i t as simply. We have t o affix a t i m e t o the r e l a t i o n between t h e girl and HAFPY*Pcc. f o r the PRED short r e l a t l o n is a node of the type "eventtt.WOne girl is happy todayw will thus be repreaented like in figure 3.T CASE Figure 3 ))One girl is happy today))The advantage of having such an e x t r a concept i n t h e d a t a base i s that we can e a s i l y add more s h o r t r e l a t i o n s t o t h e event node llOne g i r l i s happy t o d q " , f o r example t o r e p r e s e n t "in the school" or "becautse of the weather" or "according t o w h a t Tom s a i d " .Since we want t o d e a l with t r u e s t a t e m e n t s , h y p o t h e t i c a l statements and statements belonging t o some person's b e l i e f s t r u c t u r e , we always add a r e l a t i o n t o an event node i n d ic a t i n g which b e l i e f s t r u c t u r e i t belongs t o , t r u e e v e n t s belong t o the s e t "Tfl#3+Sf1 of all t r u e s t a t e m e n t s . Since t h e relation V A R T TRUE*Sn is so common, we represent it in pictures with the earth sign of electric charts 1 A .-. . Note that t h e r e i s an e a r t h sign on t h e true e v e n t , but no e a r t h Thus w e can represent " A 1 1 n i c e girlsv w i t h a node r e p r e s e n t i n g t h e s e t of a l l nice girls .This means that we need q u a n t i f i e r s on t h e s h o r t r e l a t i o n s , t o be able t o e x p r e s s r e l a t i o n s h i p s between s e t s .Tf there is a short r e l a t i o n R between two s e t s A and B, then the relamtion R might n o t be t r u e between any member of A and any member of B. We have several cases: !Fhese and 0 t h~ cases axe represented in our data base with three qu.an,tifiers ALL, SOME and ITS, The difference belmeen SOME and 12% is shown by the difference between 14The difference between ITS and SOME can be understood if you look at t h e statement 'Wvery man i s i n a c a r v T h i s can mean that ffEvery m a n Is i f l s i d e one single car" o r i t can mean !'For e v e r y m a n t h e r e i s one car i n which he i s . The f i r s t p h~a s e might i n our data b a s e be r e p r e s e n t e d as '%very m a n M ALL I N SOME while the second might b e r e p r e s e n t e d as "Every m a n u ALL IN IT T h e~e are simple r u l e s t o mmipulate. the quantifiers when the In %he following, if no quantifier is marked on a s h o r t r e l a t i o n in a figure, then ALL is i m p l i c i t .5 , Deduction in the data base T h e data base does n o t contain a l l t r u e statements explicitly, some of %hem have t o b e deduced when needed, Basically all deduction rules can be seen as p a t t e r n matching, You haue a pattern swing for example t h a t Vf somethipg h o t is near something inflammable then the irnflammable will catch f i r e " . Then we h'ave sbme actual situation, explicit o r deduced, e.g. "The burnine c i g a r e t t e is thrown i n the p e t r o l t a n k n . In our d a t a base, as in figure 6 . 
Before using t h e deduction r u l e , we must match the pattern to t h e actual s i t u a t i o n . The p -a t t e r n can contain many interconnected r i d e s , and t h e reality may not at first resemble the p a t t e r n directly, deduction may b e necessary to see the resemblance.The simplest deduction r u l e possible i s just a p a t t e r n o f t w o s h o r t r e l a t i o n s from ,which a third c a n be deduced: "If A R1 B and B R2 C then A R3 C", a simple example: "If A is subset of B , and B is subset of C, then A is subset of Ctf. Since such rules l i n k t o g e t h e r nodes through a chain o f s h o r t r e l a t i o n s , they are c a l l e d chaining rules. Spme chaining rules require side r e l a t i o n s on B t o the fullfilled, for example "A BY B, and A CASE C implies B R E D CI1, but only if A is a true event. ,Every intelligent mann < ALL EQUAL IT #Every bad soldier)) sEvety intelligent man is a bad sol diem Y.a.riables have a new q u a n t i f i e r on them, DEF. This indicates that this s h o r t r e l a t i o n i s p a r t o f the d e f i n i t i o n o f t h a t variable. The v a r i a b l e '%very i n t e l l i g e n t m a n t t above corresponds t o the s e t of all o b j e c t s which s a t i s f y the d e f i n i t i o n . T h i s means t h a t ad soon as w e f i n d an object i n t h e d a t a base f o r which we know o r can deduce t h a t i t s a t i s f i e s t h e d e f i n i t i o n , t h e n we know that i t b e l o n g s t o the VARIABLE Above, and w e can thus deduce t h a t i t i s a bad s o l d i e r . keys: Sometimes t h e deduction requires a p a t t e r n o f more than one node, Such patterns a r e c a For this merging we create temporary variables and keys during input translation. The sentence "The m a n always with the gun is in the forestw is thus t r a n s l a t e d i n t o figure 9: nThe man questions: Questions to a computer c a n require s h o r t o r l o n g answers.There are for example yes-no questions like r t I s a man with a balooh coming?" which in our d a t a base will be represented l i k e in f i g u r e 10. our system can answer t h e questions based on the f a c t s .The input language to our system is not f u l l natural english. the language is slightly simplified. me sentences in the example are mitten in this simplified englieh, If E l i z a had been a g i r lnith a f a s t car, then would she be loved by John?Mary i s meeting John and he i s driving h e r c a r . Is t h e poor boy, dangerous?I I . The EQUAL r e l a t i o n sThe Q U A L r e l a t i o n between h o singular elements means tlzt they me identical, However, s i n o e we can p u t quantifiers on the E&UAL relation, we can alsa. use it for many s e t relationships,Some exmaples: ALL A QUAC ALL B means t h a t the s e t s A and B me equal and contain not more than one element each.A and B a.re d i s j o i n t . StNE A EQUAL ALL A means t h a t A i s n o t empty.ALL A NOT EQUAL ALL A means t h a t A i s empty. 
Natural language noun phrases are translated into nodes marked as singular in the data base, if:

a) The noun phrase is not plural.
b) The noun phrase is not translated into a VARIABLE in the data base.
c) The noun phrase is interpreted in the special sense (like "A man is walking on the street") and not in the general sense (like "Every man is a male human").

Data base nodes are marked as non-empty if they are of the type predicates (like "the act of lighting", which we call LIGHT*P). An object-type node can be a CONSTANT, a DUMMY or a VARIABLE. For general-sense statements, the nouns are usually translated to VARIABLES. Example: "Every good girl will kiss every brave [...]".

Adjectives do not always indicate properties which are generally true for the noun phrase. They can mean many things. Examples: "The good teacher" (the teacher who is good as a teacher); "The big ant" (the ant which is big for an ant); "The red house" (the house which is red). Therefore, a weaker relation ATTR is used from a noun to its adjective in the data base.

But we cannot give a node itself the name "John" or "Cambridge", since there may be more than one "John"; instead, the node is connected to its name by a NAME relation.

In most cases, the same time restriction applies to attributes as to the main verb in the sentence. If we say "A hungry girl ate a cold buffet in a sundrenched meadow on a warm summer day", then the time and space restrictions apply to the attributes as well as to the eating. Prepositional attributes may also be situation restricted, as in the sentence "An angry man with a gun is coming at ten o'clock", where the man at another time may not be "with a gun". This might be represented as shown in figure 11.

The system can deduce that Eliza is awed if it knows that Eliza is English, is a spinster, and is coming into the church, all at the same time. But if Eliza was an English spinster five years ago, and comes into the church today, then we cannot make this deduction. This can be solved in two ways. Either the data base representation of "Every English spinster who comes into the church is awed" is changed into "If at a certain time an English spinster comes into the church, then she is awed", or else the deduction rules are changed so that the time limitations are implicitly carried along and combined during deduction.

Composite objects: Properties of a composite object can usually be transferred by deduction to its elements. This can almost always be done, but not in some cases, for example if we say "John and Mary together are heavier than Peter." But in such cases there is usually some indication in natural language, like the word "together", indicating that the property of the composite cannot be transferred to its elements.

Look again at the picture above showing the translation of "John and Mary are married." All the nodes with DEF on them above are DUMMIES. This means that we first search for a previously mentioned node with a "NAME JOHN*P" relation on it. If one is found, "John" will merge with it; otherwise "John" will become a new constant and the DEF is changed to ALL.
The same thing is done for "Mary". Thereafter, when "John" and "Mary" have been found in the data base, the composite can be built from them.

14. Conjunctions between noun phrases in the general sense

In the previous chapter I pointed out the ambiguity between sentences like "John and Mary are married" and "John and Mary are human", where the first sentence says that the composite is married, while the second says that the elements individually are humans. I also said that such sentences could always be translated to composites, since properties of composites can in general be transferred by deduction to the elementary parts. This is not so easy in the general sense; see the following examples:

"All men and women are getting married."
"All men and women are happy."
"Every man and woman standing together are a married couple."
"All men and women are young people."

Noun phrases in the general sense are translated into VARIABLES. This means that the data base must be able to make deductions on numbers, e.g. to deduce that if a composite has the relation NUM 2, then the relation NOT NUM 1 can be deduced.

Fitting composite objects into the sentence: the general rule is that when two conjuncted nouns have been translated into a composite object, then that composite takes their place in the sentence.

Figures: "The humidity on rainy days in the tropics is high"; "the parents of the three students" (a VARIABLE built from PARENT*P and STUDENT*P).

Equality between composite objects: The natural language phrase "The father and the mother is John and Mary" cannot be translated as a simple EQUAL relation between the composites [...].

Some simple sentences can be translated as relations between predicates. For example, the sentence "Every man is a human" can be translated like in figure 27, which is much simpler than the other translation, in figure 28.

Figure: "John is riding the bike."

From a valid event node (that is, at the time and place of the event etc.) the deduction procedures can deduce e.g. a PRED relation from "John" to RIDE*P, and these deduced relations are very useful in later deduction. There is also a symmetric relation OBJPRED from the object to the predicate. If we can deduce that some object has OBJPRED to a predicate like RIDE*P, then we can deduce that that object is being ridden, that is, that the predicate RIDED*P (the passive of RIDE*P) is applicable to the object.

We can therefore draw figure 31 of the relations around an event node. Several of the triangles in the figure form chaining rules, for example "X BY Y & X CASE Z implies Y PRED Z" (where X is a true event), together with corresponding rules involving PASS and PASSCASE, although not all of the triangles do. (Even if John is riding and the bike is being ridden, we cannot therefore conclude that John is riding just that bike.) All the chaining rules involving the event node are true only when that event node is true, or valid when the event node is valid.

If the data base contains a verb both in active and passive form, then there must be a relation PASS between them to permit deduction.
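These chaining rules admit a direct, if naive, implementation. The sketch below applies rules of the form "A R1 B and B R2 C implies A R3 C" over a set of (source, relation, target) triples; the encoding is our own assumption, and the subset example is the one given in section 5.

    # Naive forward chaining over (a, r, b) triples (assumed encoding).
    def chain(facts, rules):
        """facts: set of (a, r, b); rules: list of (r1, r2, r3) meaning
        'a r1 b and b r2 c implies a r3 c'. Returns the deductive closure."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for (r1, r2, r3) in rules:
                new = {(a, r3, c)
                       for (a, ra, b) in facts if ra == r1
                       for (b2, rb, c) in facts if b2 == b and rb == r2}
                if not new <= facts:
                    facts |= new
                    changed = True
        return facts

    # "If A is subset of B, and B is subset of C, then A is subset of C":
    closed = chain({("A", "SUBSET", "B"), ("B", "SUBSET", "C")},
                   [("SUBSET", "SUBSET", "SUBSET")])
    assert ("A", "SUBSET", "C") in closed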
Since passive forms are less common than active, this PASS relation is generated whenever a passive verb appears.

Figure: "The always depressed man" (a DUMMY)

One could argue that we could avoid passive verbs altogether in our data base by always using the CASE and OBJPRED relations. There are two arguments against this: since all relations expand into a chaining rule with EQUAL, "X R Y & Y EQUAL Z implies X R Z", and since EQUAL can be expanded into an event node using BY and OBJCASE, this can be [...].

This sentence could be interpreted in the following way: a) "There is a set of events, one for each man. One and the same girl is giving a different flower in each such event". In our data base, the difference between the interpretations is that the event node is a constant in the singular sense, a variable in the distributed sense.

The translation rule is that if all the noun phrases marked with "a" or "some" are to be interpreted in the singular sense, then the event can become a constant. If, however, one of the noun phrases marked with "a" or "some" is to be interpreted in the distributed sense, then the event becomes a variable.

Figure 38: "If the weather is rainy and a person is outdoors and the person is not wearing any raincoat, then the person will become wet."

A new quantifier THAT is introduced above. The reason for this is that if there are two different persons, one who is outdoors and another who is not wearing a raincoat, then we do not want to conclude that either of them necessarily will become wet. We therefore have to single out in the data base one person and two events in which this person is the subject, using the quantifier THAT on the pattern end.

A natural language if-statement in a question is translated in a quite different way. The statement "If the weather is rainy and John is outdoors, will he then be wet?" is translated like this: "Add the temporary facts that the weather is rainy and that John is outdoors into the data base. Thereafter try to deduce whether he will be wet. When the question has been answered, remove the temporary facts from the data base again."

Executable programs in some special programming language are potentially a more powerful representation than ours. Heuristic rules guiding the order of the deduction search are easier to include in such a deduction rule. However, the power of an actual system is of course limited to the set of programs which the input translator can generate. Many of the programs will probably in reality not contain anything else than our chaining rules, variables and patterns, and such a system will also require some more or less hidden underlying network to select rules and facts of interest during a certain deduction process.

On the outermost surface level, we have until now only implemented yes-no questions in our system. Other kinds of questions can however appear as sub-questions during the deduction process. A question is in many ways similar to a natural language if-statement. In both cases, a pattern of variables is created, and we want to identify this pattern with the data base. The translation will therefore have to be something like in figure 41.
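The add-deduce-retract procedure for conditional questions described above is easy to picture in code. This sketch reuses the chain function from the earlier sketch; the try/finally discipline for restoring the data base is our own illustration.

    # Hypothetical conditional-question answering: assert, deduce, retract.
    def answer_if_question(facts, rules, temporary_facts, goal):
        """Temporarily add the facts from the if-clause, test whether the
        goal triple becomes deducible, then restore the data base."""
        added = [f for f in temporary_facts if f not in facts]
        facts.update(added)
        try:
            return goal in chain(facts, rules)
        finally:
            for f in added:            # remove only what we actually added
                facts.discard(f)

    facts = {("B", "SUBSET", "C")}
    rules = [("SUBSET", "SUBSET", "SUBSET")]
    print(answer_if_question(facts, rules, {("A", "SUBSET", "B")},
                             ("A", "SUBSET", "C")))   # True
    print(("A", "SUBSET", "B") in facts)              # False: retracted again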
Such a pattern must find a match in the data base, e.g. for a statement like "If a man is late, then the man behind him is even later." Here, there is no previously known man, and the second translation with the pattern key would not match "a man" in the if-statement at all. However, if solution a) is adopted, this text will not be treated correctly: "A man [...]". If no DUMMY pattern key is created, then "his" in the second sentence will identify with "brother" in the previous sentence. In general, only after doing the refer-back search in the data base will we know whether a DUMMY will match a VARIABLE or a CONSTANT.

If a DUMMY matches a VARIABLE, then that DUMMY may be adding definitions to that VARIABLE. Look for example at the sentence "If a lion meets an elephant, and if the lion sees the elephant, then...". Here, the DUMMIES in the second phrase will add to the pattern key being built up, and thus add to the definitions of the VARIABLES "a lion" and "an elephant". This means that there are two kinds of DEF-marked relations on DUMMIES. The first are those which are to be used during the refer-back search. The second are those which are to be added to the VARIABLE, if the DUMMY matched a variable. In our system, we intend to distinguish between these by first giving the relations which are to be used in the refer-back search. Then the refer-back search is done, and thereafter the relations are given which add DEFs to the definition of the matched VARIABLE.

Another interesting case is where there are two DUMMIES, one dependent on the other, and one of them matches a VARIABLE. Look for example at the sentence "If a girl is in trouble, then her mother will be angry." Here, "her" becomes an independent DUMMY, while "her mother" becomes a dependent DUMMY. The "her" DUMMY will match the VARIABLE "a girl" in the if-clause. The DUMMY "her mother" will not find any match at all. And the interesting thing is that because the independent DUMMY matched a VARIABLE [...].

Some of the things we are not ready with yet are: other conjunctions than "and", relative pronouns, interrogative pronouns, negation, auxiliary verbs other than "be", and comparative adjectives. We do not yet try to resolve ambiguity by reference to the data base.

The event concept is an addition which adds to the power of the representation. Special in our system may also be that one short relation can be extended when necessary into an event. This saves much memory compared to representations where the fullest form is always used, even though it is in most cases not needed. It is for example true that for a statement like that in figure 3, there may be doubt about only the BY relation, or only the AT-TIME relation, or only the CASE relation. (We may be sure that "a girl is happy", but not so sure about the day, or we may be sure that there is happiness today, but not sure where.) A full representation would therefore require a place to insert doubt on any short relation, whether there is doubt or not, and this would double the data base size.
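The refer-back search described in this section can be sketched as a scan backwards through recently created nodes for one carrying all the DEF-marked relations of the DUMMY. The matching criterion and data shapes below are illustrative assumptions.

    # Hypothetical refer-back search: resolve a DUMMY against earlier nodes.
    def refer_back(dummy_defs, history):
        """dummy_defs: set of (label, target) pairs defining the DUMMY.
        history: nodes in order of creation, each with a set of relations.
        Returns the most recently created node carrying all the defining
        relations, or None (the DUMMY then becomes a new constant)."""
        for node in reversed(history):
            if dummy_defs <= node["relations"]:
                return node
        return None

    history = [
        {"name": "NODE-1", "relations": {("NAME", "JOHN*P")}},
        {"name": "NODE-2", "relations": {("NAME", "MARY*P")}},
    ]
    match = refer_back({("NAME", "JOHN*P")}, history)
    print(match["name"] if match else "new constant")   # NODE-1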
Reference: Deduction procedures in a question answering system, FOA rapport C 8310, January 1972.

Appendix:

In natural language, the prepositions are used to represent short, simple and direct relations between concepts: "John is in the bed", "The fire was lit by Mary". In the data base, the idea of prepositions is extended so that all simple and direct relations between concepts are represented by implicit prepositions. (Just as you could say that there is an implicit preposition "by" in the phrase "Mary lit the fire".)

An event is a node which can have limitations in space, in time and in its truth value, and which can have a cause, a result etc.

The advantage of a predicate calculus representation is however that the theory of decidability is much more fully developed for that representation than for ours.

In such a match, the two VARIABLES must pairwise match. It is not enough to find that the father is married, not even enough to find that he is married to one of John's daughters. He must be married to just that daughter whose children are all also his children.
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
593
0.005059
null
null
null
null
null
null
null
null
13ec1418261019b4ad77996b9d247262a028f9b5
219302977
null
Natural Language as a Special Case of Programming Languages
I offer a tentative answer to a question posed by Leo Apostel: "what type of automata would produce and use structures such as natural languages possess?" Noam Chomsky has pointed out that natural languages share certain common structural characteristics, and he argues that these linguistic universals have implications for our understanding of human mental processes. In my The Form of Language (1975), I suggest that we should develop a model of human
{ "name": [ "Sampson, Geoffrey" ], "affiliation": [ null ] }
null
null
null
1975-09-01
2
0
null
about the psychological machinery involved in the comprehension of natural language, based on comparing the structure of natural language with that of actual computer programming languages in practical use. By comparison with Winograd I am less interested in the practical problems of communicating with an automaton in idiomatic, "surface-structure" English, and more interested in what characteristics of the human language-processing automaton are suggested by those features of English which appear to be universal.

It is usual to distinguish the terms automaton and computer: an automaton is a mathematical abstraction of a certain kind, while a computer is a physical object designed to embody the properties of a particular automaton (cf. Putnam [1960] 1961: 147), as an ink line on a sheet of graph paper is designed to embody the properties of a continuous function; thus e.g. a computer, but not an automaton, may break down, as a graph, but not a function, may be smudged. Naturally, though, the only automata for which there exist corresponding computers are automata which it is both possible and useful to realize physically; so the class of computers represents a rather narrow subset of the class of automata as defined below. We shall sometimes speak of "computers" meaning "automata of the class to which actual computers correspond"; category mistakes need not bother us if we are alert to their dangers.

We may define an automaton as a quadruple (S, L, Int, Suc), in which S is a (finite or infinite) set of states, L is a (finite or infinite) language (i.e. a set of strings of symbols), Int is a partial function from S x L (the Cartesian product of S with L) into S (the input function), and Suc is a binary relation on S (the successor-state relation). We treat the flow of time as a succession of discrete instants (corresponding to cycles of actual computers). Between any adjacent pair of instants, the automaton is in some state S in the set of states. At any given instant, a program may be input. If the automaton is in state S and program P is input, it moves to the state Int(S, P); if (S, P) is not in dom(Int), we say that P is undefined for S (and no change of state occurs). If no program is input, the automaton moves to some state S' such that S Suc S', provided there is such a state S'. (Otherwise, no change of state occurs, and S is called a stopping state.) If Suc is a (partial) function (i.e. if for each S there is at most one state S' such that S Suc S'), the automaton is deterministic.

The state of a real computer consists of the distribution of electrical charge (representing the digits 0 and 1) over the ferrite cores in a store, together with a set of working registers and an address counter. The number of states of such an automaton is finite but very large: a simple computer with a store containing 4096 words of 16 bits together with a single working register would have on the order of 5 x 10^19736 states. The programs of the machine language of such an automaton will consist of sequences of machine words not exceeding the size of the store, and thus the machine language will again be finite. The input of such a program containing, say, n words will cause the automaton to load these words into the first n places in its store, replacing the current contents, and to set the address counter to 1.
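The quadruple definition is concrete enough to transcribe directly. Here is a minimal Python sketch with Int encoded as a dict (a partial function) and a deterministic Suc as a successor map; the encoding choices are ours, not the paper's.

    # Minimal sketch of an automaton (S, L, Int, Suc) as defined above.
    class Automaton:
        def __init__(self, int_fn, suc):
            self.int_fn = int_fn   # dict: (state, program) -> state (partial Int)
            self.suc = suc         # dict: state -> state (deterministic Suc)

        def step(self, state, program=None):
            """One instant: apply the input function if a program is input
            (no state change if undefined), else follow Suc if possible."""
            if program is not None:
                return self.int_fn.get((state, program), state)
            return self.suc.get(state, state)   # stopping state: no change

        def succession(self, state):
            """The succession of a state: iterate Suc to a stopping state.
            (May not terminate if Suc contains a cycle.)"""
            while state in self.suc:
                state = self.suc[state]
            return state

    a = Automaton(int_fn={("idle", "go"): "s1"},
                  suc={"s1": "s2", "s2": "halt"})
    print(a.succession(a.step("idle", "go")))   # halt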
The successor-state function is determined by the number in the address counter together with the code translating machine words into instructions; whenever the counter contains the number i, the automaton changes its state by executing the instruction in the i-th place in store and incrementing the counter by one. A proper subset of the automaton's states are stopping states: whenever the storage word indicated by the address counter is not the code of any instruction, the machine stops.

For any state S of a deterministic automaton, we may use the term succession of S for the sequence of states the automaton will pass through under the control of its successor-state function, beginning with S and ending (if the succession is finite) at a stopping state. A computer is arranged so that, on entering certain states, it performs certain output actions (e.g. it prints a symbolic representation of part of its internal state onto paper). The art of programming such a computer consists of finding an input program which moves the computer into a state the succession of which causes the computer to perform actions constituting a solution to the programmer's problem, while being finite and as short as possible.

Linguists have argued that the relation between sentences and their semantic representations is defined by the transformational rules revealed by independently motivated syntactic analysis (e.g. Postal [1970] 1971: 252f.); although this hypothesis is certainly not altogether correct (see e.g. Partee 1971), it seems likely that the semantic representation of a sentence is some simple function of its syntactic deep structure and its surface structure. The rules of inference for natural languages will no doubt exhibit the "structure-dependence" characteristic of linguistic rules generally. For discussion of the philosophical problems involved in this way of describing natural-language semantic analysis, including problems relating to the analytic/synthetic distinction, cf. Sampson (1970, 1973a, 1975a: ch. 7).

It is tempting to view the mind of a speaker of e.g. English as an automaton in the defined sense, with the sentences of English as the programs of its machine language, and the rules of inference of English determining the successor-state relation. In other words, some component of the mind of an English-speaker would be a device capable of entering any one of a (perhaps infinitely) large number of discrete states; hearing (or reading) a sentence would move this device from one state to another in accordance with definite rules; and other rules, related to the rules of inference of English, would govern how it passes through different states when not immediately reacting to speech (i.e. when the owner of the mind is thinking). (Strictly, this assumes that the relation between sentences and semantic representations is a function. In practice it will not be, since ambiguous sentences will have more than one semantic representation, so "the" should read "one of the ... (respectively)".)

Although the analogy is tempting, extant computers and their machine languages are not promising as sources for a theory of the relation between human minds and natural languages. The machine language sketched above is not at all reminiscent of natural languages.
The latter typically contain infinitely many sentences, only the simplest of which are used in practice; the machine language of section 2 contains an enormous but finite number of programs, and the programs which are useful in practice (those which compute important functions) are not typically "simple" in any obvious sense.

Fortunately, the machine languages of the various extant computers are not the only artificial programming languages in use. Partly for the very reason that machine languages are so different from natural languages, most programs are written not in machine languages but in so-called "high-level" programming languages, such as FORTRAN, SNOBOL, APL, PL/1 (to name a few among many). A high-level language is in effect the machine language of an abstract automaton which the real computer is made to simulate, so that the programmer can think in terms of that automaton rather than of the machine with which he is in fact interacting.

High-level languages, and the abstract automata whose "machine languages" they are, differ from one another in more interesting ways than do real computers and their machine languages; and furthermore (not surprisingly, since high-level languages are designed to be easily usable by human programmers) they are much more comparable with human languages than are real machine languages. (Typically, a high-level programming language is a context-free phrase-structure language, for instance.) I shall suggest that the relationship between high-level languages and their corresponding automata gives us much better clues about human mental machinery than does that between real computers and machine languages.

Let me first give an example of a high-level language: I shall choose the language APL (see e.g. Iverson 1962, Pakin 1968). APL is interesting for our purposes because it is particularly high-level: i.e. it is related more distantly to machine languages of real computers, and more closely to human languages, than many other high-level languages. It is a real-time rather than batch-processing language, which means that it is designed to be used in such a way that the result of inputting a program will normally be crucially dependent on the prior state of the system (in a batch-processing language, programs are designed to be unaffected by those remains of the prior state which survive their input): this is appropriate for an analogy with human language, since presumably the effect on a person of hearing a sentence depends in general on his prior system of knowledge and belief.

The complete language APL includes many features which are irrelevant to our analogy.
For instance, there is a large amount of apparatus for making and breaking contact with the system, and the like; we shall ignore this, just as we shall ignore the fact that in human speech the effect of an utterance on a person depends among other things on whether the person is awake. Also, APL provides what amounts to a method of using the language to alter itself by adding new vocabulary; to discuss this would again complicate the issues we are interested in. We shall assume that programmer and system are permanently in contact with one another, and shall restrict our attention to a subset of APL to be defined below: rather than resorting to a subscript to distinguish the restricted language from APL in its full complexity, we shall understand "APL" to mean the subset of APL under consideration.

(The practicing computer user may find my definition of the real-time/batch-processing distinction idiosyncratic; the difference I describe is the only one relevant for our present purposes, but it is far from the most salient difference in practice. In APL terms, we ignore all system instructions, and we ignore the function-definition mode. Note that we use wavy underlining (corresponding to bold type in print) to quote sequences of symbols from an object-language, whether this is an artificial language such as APL or a natural language such as English.)

We begin by defining the set of states of the APL automaton. First, we recursively define a set of APL-properties. A string over a set S is a function from an initial segment (1, ..., n) of the natural numbers into S; note that the null set is therefore the length-0 string over any set. An APL-identifier is any string of positive length beginning with an alphabetic character: there are therefore infinitely many APL-identifiers. We define Ident as the set including all APL-identifiers together with an entity, assumed to be distinct from all the APL-identifiers, denoted by the quad symbol. A deic is any of a small finite set of symbol-strings denoting system values, e.g. the current time of day. (In practice one cannot write a length-0 string, and one cannot distinguish a length-1 string from a rank-0 property; I ignore these practical complications for the sake of simplicity, as I ignore complications relating to strings containing the inverted comma character.)

The sentences of APL are the strings defined by the above grammar, disambiguated by the use of round brackets (with association to the right where not indicated by bracketing). The sequence of symbols "quad, left-arrow" may optionally be deleted when initial in a sentence. Clearly there are infinitely many sentences in APL. A sentence of APL is an APL-program.

We now go on to specify the input function, from pairs of states and programs into states, which specifies the change of APL state brought about by a given APL-program. (Some of the primitive functions, and their names, are common to all "dialects" of APL: e.g. "!", which denotes the function taking integers into their factorials, strings of integers into the corresponding strings of factorials, etc., and which is undefined e.g. for literal APL-properties. The facility of "user-definition" permits a programmer to alter APL by adding new functions; APL contains no triadic functions other than user-defined ones.)
To determine the new state arrived at from an arbitrary current state on input of an arbitrary program, we consider the phrase-marker of which that program is the terminal string. Beginning at the leaves and working towards the root, and evaluating the rightmost node whenever there is a choice, we associate each dscr node with an APL-property as its denotation and each sent node with a change to be made to the current APL-state. A sent node dominating a member of Ident followed by [...].

Suppose the program is input in the morning, say at 11.30 a.m. Then dscr3 will denote the string 11 30 0. The function ">" takes (12 0 0, 11 30 0) into 1 0 0, which becomes the denotation of dscr2; in fact dscr2 will denote 1 0 0 whenever the program is input in the morning and 0 0 0 whenever it is input in the afternoon (when the hour integer will be 14 or more). The monadic function "+/" adds the numbers in a string, so if dscr2 denotes 1 0 0 then dscr1 denotes 1. Dscr5 denotes 10, so dscr4 also denotes 10. Accordingly, [...].

(In the full version of APL, the quad symbol can occur as a rewrite of dscr, in which case dscr is assigned an APL-property input by the programmer at the time dscr is evaluated by the system. We ignore this, since it interferes with the analogy with natural language. In the full version it is also possible to output symbol-strings which do not represent individual APL-properties; again we ignore this.)

Every APL-state is a stopping state. A programmer working in APL has no wish for the system to take actions beyond those specified by his programs: by defining monadic, dyadic, or triadic functions of any complexity he wishes, he can get the answers to his questions simply by carrying out the state-changes specified in his program. (In the machine language of a genuine computer, on the other hand, the state-changes brought about by programs are of no intrinsic interest, and the input of a program is of value only in that it brings the computer to a state from which it proceeds spontaneously to perform actions useful to its programmer.)

It may seem contradictory to say that a real digital computer, which will have only finitely many states and possible programs, can be made to simulate an automaton such as the APL automaton, which has infinitely many states and programs. And, of course, in practice the simulation is not perfect. Although an APL-state may contain any number of objects, for any APL computer/compiler system there will be a finite limit on the number of objects in a state; although any real number may be an APL-property, in a practical APL system real numbers are approximated to a finite tolerance. The situation is quite analogous to the case of natural language, where the individual's "performance" is an imperfect realization of an ideal "competence", in one sense of that distinction; just as in linguistics, so in the case of high-level programming languages it is normal to give a description of the ideal system separately from a statement of the limitations on the realization of that system in practice, which will differ from one person to another in the natural language case, from one computer/compiler pair to another in the programming language case.
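The little morning-test program can be mimicked outside APL. The sketch below reproduces the elementwise ">" comparison and the "+/" reduction from the example; the noon vector 12 0 0 and the 11.30 a.m. input follow the text, while the function names are our own scaffolding.

    # Mimicking the APL example: is the current time before noon?
    # APL reading: +/ 12 0 0 > TIME, with TIME e.g. 11 30 0 (hh mm ss).
    def elementwise_gt(xs, ys):
        """Dyadic > : elementwise comparison yielding 1s and 0s."""
        return [1 if x > y else 0 for x, y in zip(xs, ys)]

    def plus_reduce(xs):
        """Monadic +/ : add the numbers in a string."""
        return sum(xs)

    time_now = [11, 30, 0]                          # input at 11.30 a.m.
    flags = elementwise_gt([12, 0, 0], time_now)    # -> [1, 0, 0]
    print(plus_reduce(flags))   # 1 in the morning, 0 in the afternoon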
Other high-level programming languages differ from APL not only in terms of their sentences but in terms of the nature of the states on which their sentences act. Thus in states of e.g. SNOBOL, all objects are character strings; in PL/1, objects include not only arrays of the APL kind but also trees, trees of arrays, arrays of trees, etc. Space does not permit a survey of the differences between high-level languages with respect to the nature of their states.

At this point we are ready to begin to answer Apostel's question about what sort of automata natural languages are appropriate programming languages for. Any answer to such a novel question must obviously be very speculative; but the ideas that follow seem plausible enough to be worth consideration. We do not know with any certainty even what the semantic representations or syntactic deep structures of our sentences are; but we have seen that there is good reason to think the two may be similar, and we can make reasonable conjectures. In earlier, unpublished work I have called the relevant component the topicon (coined on the analogy of "lexicon"), since I envisage it as containing a set of entities corresponding to the objects of which its owner is aware, and to which he can therefore take a definite description to refer. The set to be defined is a set of possible topicon-states. The sets of topicon-states available to speakers of natural languages other than English will differ from the one defined by (17) below, but not in respect of the properties on which this paper will concentrate.

Note that a topicon-state is certainly not to be equated with a "state of mind" or "psychological state": a topicon is claimed to be only one small part of a human's mental machinery, and there will be many ways in which the latter can vary (e.g. the human may be happy or sad, asleep or awake) without implying any difference in topicon-state. Just as an APL-state contains a set of APL-objects with properties drawn from a fixed class, so a topicon-state will contain a set of referents with properties drawn from a fixed class.

Since I shall frequently be speaking of the relations between linguistic expressions, topicon-referents, and the entities in the outside world which the linguistic expressions denote, let me lay down some terminological conventions. I shall use denotation for the relation between an IE and the thing which a hearer takes that IE to correspond to; my theory asserts that denotation is a composition of two relations, a relation of reference between linguistic expressions and topicon-referents, and a relation of representation between topicon-referents and things. Thus, if the phrase "your car" said to P now denotes a certain vehicle, it does so because it refers to a referent in P's topicon which represents that vehicle.

A hearer has at any moment a focus of attention. This will translate into our theory as the notion that the referents in a topicon are arrayed in some kind of space, one point of which constitutes the focus of attention at any given time. The nature of this space, and the factors which determine the position of the referents and focus of attention in it, will be considered in section 16 below; for the moment, let us simply assume that the notion can be made precise. Then we can say that any IE consisting of the word "the" followed by a series W1, ..., Wn of adjectives and noun will refer to the nearest referent to the hearer's focus of attention having all the properties §(W1), ..., §(Wn). Thus, "the car" will refer to the nearest §(car) referent to the focus of attention. In APL, objects can be referred to by their identifiers; English lacks identifiers, and uses nearness to the focus of attention instead.
A deictic refers directly to a fixed referent in the hearer's topicon (unless he is overhearing words addressed to someone else), whatever other referents are in the vicinity; so it would be otiose to modify a deictic with a genitive NP.

So far I have discussed only referents corresponding to noun-phrases in syntax, and representing individuals in the outside world. However, some referents will represent what would more normally be called "facts" or "events" than "individuals". Ordinary predicate logic distinguishes sharply between individuals on the one hand, and facts or events on the other: the former are translated into singular terms. Since facts, as well as things, may be denoted by suitable linguistic expressions, we suppose that the topicon contains referents representing facts (propositional referents) as well as referents representing things (individual referents). We will suppose further that the referents in a topicon are linked in a graph structure in which propositional referents dominate n-tuples of (propositional or individual) referents, corresponding to the arguments of the respective propositions.

Consider e.g. one who knows that someone called John bought a car: his topicon will contain a structure of the form diagrammed, in which one referent represents the fact that the thing represented by r4 is a car, while another represents the fact that the thing represented by r2 is a John ("is called 'John'", as we usually say in the case of proper names). To say that a referent, say r4, has the property §(car), is to say that there is some referent which dominates the 1-tuple (r4) and which is labelled car. The number of referents dominated by a given referent in a topicon will correlate with the label of the latter referent.

Preterite tense picks out the nearest §(time) referent, as "he" picks out the nearest §(male) referent. The graph structure into which an individual referent enters can be used to pick out that referent by means of relative clauses: if "the car" refers to r2, then an IE such as "the car which John bought" will do so as well [...].

The principle that each sentence received by a hearer creates a new referent in the hearer's topicon suggests a natural way of reconstructing within the theory the notion of a focus of attention which varies with the topics being discussed: we may define the focus of attention as the most recently created referent at any given time. The graph structure associated with propositional referents offers a way of formalizing the notion of distance between referents in the topicon: we may define the distance between any two referents as the minimum number of edges (i.e. lines which link nodes) that must be traversed to get from one referent to the other.

Thus, consider the sequence of sentences (i) "John bought a car." (ii) "The car hit a man." (iii) "He called the police." After (i) and (ii) but before (iii), the hearer's topicon will include the structure of (16), with the focus at the referent created by (ii). (16) contains two referents to which "he" could refer, namely the referent for John and the referent for the man; the latter is one edge from the focus and the former is three edges away. Therefore the theory predicts that "he" in (iii) will be taken to refer to the man rather than to John, and this prediction seems correct: "he" in (iii) will be taken to denote the man who has been hit, rather than John. (Notice that this cannot be predicted from the situation described: when a driver hits a pedestrian, the driver is as likely as the pedestrian to call the police.)
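The two definitions just given (the focus as the most recently created referent, distance as the shortest path in the referent graph) translate directly into a small pronoun resolver. The breadth-first search and the property encoding below are our own illustrative choices, not the paper's notation.

    # Sketch: resolve "he" to the nearest male referent from the focus.
    from collections import deque

    def distance(edges, a, b):
        """Minimum number of edges between referents a and b (BFS)."""
        seen, frontier = {a}, deque([(a, 0)])
        while frontier:
            node, d = frontier.popleft()
            if node == b:
                return d
            for x, y in edges:
                for nxt in ((y,) if x == node else (x,) if y == node else ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, d + 1))
        return None

    def resolve(required, props, edges, focus):
        """Nearest referent to the focus having all required properties."""
        having = [r for r in props if required <= props[r]]
        return min(having, key=lambda r: distance(edges, focus, r))

    # Toy topicon after (i) and (ii): John bought a car; the car hit a man.
    props = {"john": {"male"}, "car": set(), "man": {"male"},
             "buy": set(), "hit": set()}
    edges = [("buy", "john"), ("buy", "car"), ("hit", "car"), ("hit", "man")]
    print(resolve({"male"}, props, edges, focus="hit"))   # man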
We may now define the automaton which I claim to represent the mind of a speaker of English. The grammar of the subset of English we are analysing is given as (17); its rewrite rules cover IE, NP, Deic, Noun and Pred, with a terminal vocabulary including "I", "you", "now"; "the", "of", "he", "it"; nouns such as "boy", "man"; and predicates such as "know", "love", "red", "real".

The finite set of predicates of English, together with the phonetic shapes of particles such as "the" and "of", will be specific to the English language. I would hypothesize that in other respects (17) generates the semantic representations of sentences in any natural language, though the rules which relate the phrase-markers generated by (17) to the corresponding surface forms will vary from language to language. Some of the latter rules which operate in English will be obvious.

The topicon state to which (19) is input is assumed to be as in (20) below (without the material drawn in dotted lines), with the current focus at the referent indicated by concentric circles. The owner of the topicon diagrammed in (20), to whom (19) is addressed, is represented in (20) by r5: he is a man called Dick who has caught and eaten a fish, and who loves the denotatum of r7, who is a woman teacher who has bought a horse. The denotatum of r2 is a man called John who has also bought a horse and has eaten a fish which was caught by the denotatum of r11, a man teacher called Tom, who loves the same woman as Dick.

We now use rules R1-R10 to interpret the nodes of (19), beginning with the leftmost interpretable leaf (since the material on the left of (19) is what is heard first). Noun1 is dominated by IE, so by R9 Ref(Noun1) = {r1, r2, r5}; hence by R1 Ref(NP2) is also {r1, r2, r5}. Similarly the reference of the next interpretable node is {r2}, so, trivially, by R2 Ref(IE4) is {r2}. Ref(NP5) [...].

What happens when no sentence is being input is under the control of the successor-state relation; if this relation is to be the reconstruction within the topicon theory of the pretheoretical notion of thinking, its non-deterministic character seems desirable: we do not feel that human thought flows along deterministic channels.

Although the effects of most changes of state in the cases of the machine language discussed in section 2 and of APL were confined to the automata themselves, in both cases certain state-changes were associated with action by the automaton on its environment. Thus, whenever an APL-state acquired an object named by the quad symbol, a representation of the property of that object was printed by the system on an output sheet of paper. We may imagine that action is linked to thought in this way also in the human case. Suppose some referent in a topicon represents the person who owns that topicon; then it might be that whenever, during a sequence of state-changes controlled by the successor-state relation, the topicon acquires a referent [...].

There are two obvious problems connected with the notion that the referents in a topicon, which are supposed to correspond to the entities of which the topicon-owner is aware and the propositions he believes, are created by input sentences. The first problem is that no allowance is made for the possibility that speakers are not believed.
Thus, if the topicon-owner hears John, the denotatum of r2, say "I bought a car yesterday" [...].

The second problem is that it is simply untrue that a person acquires beliefs about the existence of entities and the truth of propositions only by being told about them. I may come to believe that there exists a red car either because John tells me that he has bought a red car and I believe him, or because I see the red car; similarly, I may come to believe that John bought the red car either because he tells me so or because I watched the transaction take place. The car may subsequently be denoted by the phrase "the red car", and the proposition [...]. I diagram the two cases in (21) and (22). The part of the diagram in solid lines is the same in each case, and represents part of the hearer's topicon before the change of state. In (21) the dotted lines represent the effect on the topicon of seeing John buy a car; in this case, since the topicon-owner sees the car, we may assume that he adds some further facts about it (such as that it is red) to his topicon. In (22) the dotted lines show the result instead of hearing John say "I bought a car". In this case, the referent representing the car will be dominated just by the car node and the buy node, since the hearer has no independent information about it.

Similarly, one can imagine that there might be rules of inference taking a topicon from the state created by the reception of "Shut the door!" to a state which causes the topicon-owner to shut the door. However, here we come close to the point at which my theory in its present state breaks down; I defer discussing this.
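The contrast between (21) and (22) can be made concrete: hearing a sentence adds only the referents and dominating propositional nodes the sentence supplies, while seeing the event licenses further facts. The encoding below reuses the graph idea from the earlier sketch and is, again, only an assumed illustration.

    # Sketch: updating a topicon on "John bought a car", heard vs. seen.
    def new_referent(topicon, label=None):
        r = "r%d" % (len(topicon["nodes"]) + 1)
        topicon["nodes"][r] = label
        topicon["focus"] = r     # the focus is the newest referent
        return r

    def assert_fact(topicon, label, *args):
        """Create a propositional referent dominating its argument tuple."""
        p = new_referent(topicon, label)
        topicon["edges"] += [(p, a) for a in args]
        return p

    topicon = {"nodes": {}, "edges": [], "focus": None}
    john = new_referent(topicon)
    assert_fact(topicon, "John", john)       # "is called John"
    car = new_referent(topicon)
    assert_fact(topicon, "car", car)
    assert_fact(topicon, "buy", john, car)   # common to (21) and (22)
    eyewitness = True                        # case (21); False gives case (22)
    if eyewitness:
        assert_fact(topicon, "red", car)     # extra fact from observation
    print(topicon["focus"], len(topicon["edges"]))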
According to the theory I have sketched, English as a programming language is not dissimilar to APL, SNOBOL, etc. It resembles the latter in that its states consist of arrays of objects drawn from a specified class (although the precise structure of the arrays differs between English and the artificial programming languages, as it does between the latter themselves), and in that the structural descriptions of its sentences include a subclass of nodes which pick out objects from the current state and another subclass which add new objects to the current state. English differs from APL, SNOBOL, etc., in lacking identifiers, and in using the property of distance between objects in a state in order to identify objects.

My theory is certainly inadequate to account for many quite elementary facts about English and other natural languages. It may be that its deficiencies are too great for the theory to merit consideration. However, I would argue that it does. Before discussing the objections to it, let me mention a number of points to which my theory offers satisfactory solutions.

In the first place, the theory is attractive simply because it offers an answer (even if the answer eventually turns out to be wrong) to the question why humans should spend so much of their time exchanging the abstract structures called "sentences": unlike cultivating the ground or building houses, the utility of this occupation is not immediately apparent to the observer (Sampson 1972a, 1975: 133-6). In my theory, the exchange of sentences, like direct observation of the environment, helps humans build up a complex but finite "map" or "model" of the world, a model which can be described in quite concrete terms and which controls the human's actions in ways which, again, in principle should be quite explicitly definable. The notion "model" is of course [...].

My theory can offer no explanation of the syntactic distinction between nouns and adjectives, which serves no obvious semantic function; however, since the distinction appears to be universal in natural languages, my account of English semantic representations will incorporate it. (The solution to this puzzle may have to do with the fact that some adjectives are "syncategorematic" in a way which nouns never are: a "good actor" is not necessarily good, though he is necessarily an actor.)
null
null
null
null
Main paper: .: It may seem.contradictory to say that a real d i g i t a l computer, which will have only f i n i t e l y many s t a t e s and possi b l e programs, can be made t o simulate an automaton such as which has i n f i n i t e l y m a n y s t a t e s and programs. And, of oouree, in practice the simulation ie,not perfect. Although an -state may contain any number of o b j e c t s , for. any APL computer/compiler system t h e r e w i l l be a f i n i t e l i m i t on the number of objects in a s t a t e ; although any real number m a y be an BPL-property, in a practical APL system real numbers are approximated t o a fini-te tolerance. The m a t i o n is q u i t e analogous to the case of natural language, where the individual's tperfomance ' ie an imperfect r e a l i z a t i o n of an ideal* tcompetence', in one sense of t h a t distinction; j u s t as in linguisti c s , so in the case of high-level programming languages it i s normal to give a description of the ideal system separately from a statement of the limitations on t h e realizat5,on of t h a t system in practice, which w i l l d i f f e r from one person to another in the natural language case, from one computer/compiler pair to another in the programming language case.Other high-level programming languages d i f f e r from APL not only in terms of their sentences but in terms of t h e nature of the states on which t h e i~ sentences act. Thus in s t a t e s of e.g. SNOBOL, all objects a r e character strings; in PL/1, objects include not only arrays of the APL kind but a l s o trees, trees of arrays, arrays of trees, etc. Ijpace doea not permit a survey of the differences between high4Level languages w i t h respect t o the nature of t h e i r states.A t this point we are ready to begin t o answer Apostel's question, about what sort of automata natural languages a r e appropriate programming languages f o r . Any answer,to such a novel question must' obviously be very speculative; but the ideas that f o l l o w seam plausible enough to be worth consideration. We do not know with any certainty evbn what the semantic representations or syntactic deep structures of our sent-ences are; but we have seen t h a t there is good reason to think the two may hs similar, and we can make In earlier, unpublished work I have called t h i a t h e topicon (coined on the analogy of 'lexicong), since I envisage it as coneaining a s e t o f entities corresponding to the objects o f which ita owner is aware, and to which ha can therefore take a definite description to refer.is to be a s e t of p o s s i b l e topicon-staeea. m e s e t 8 of topicons t a t e s available to speakers of natural languagee other than English w i l l differ from ( f 17 below), but not in respect of the properties on chich thia paper will concentrate.Note that a topicon-state is c e r t a i n l y not t o be equated with a % t a t e of mind' or tpsychological stateq: a topicon is claimed to be only one small part of a human's mental machinery, and there w i l l be many way8 in which the l a t t e r can vary --e.g. 
the human may be happy or sad, asleep or awake -without implying any difference in topicon-state.Just as an APEstate contains a s e t of APLobjects with properties drawn from a fixed clase, so a topicon-state will - Gince-I shall frequently be speaking of the relations between linguistic sqressions, topicon-referents, and th'e entities in the outside world which the linguistic expressions denots, l e t me lay down some tsrminolog&cal conventions, I shall use denotation f o r the relation between an'IE and he thing which a hearer takes that IE to correspond to; my theory asserts that denotation i8 a composition of two relations, a relation of reference between linguistic expressions and topicon referents, and a relation of representation between topicon-referents and things. Thus, i f the phrase your car said t o P now --- focus of his attention. This will translate iatm our theory as the notion that the referents in a topicon a r e arrayed in some kind of space, one point of which constitutes the focus of attention at any given time, The nature of this space, and the factors which determine t h e p o s i t i o n of the referents and focus of attention in it,, will be considered in 916 below; f o r the moment, let us simply assume fhat the notion can be made precise. Then we can say t h a t any IE consisting of the word the followed by a series x, ; 1 ... w of adjectives and noun AEH* -n w i l l r e f e r to the nearest r e f e r e n t t5 the hearer% focus of attention having a l l the properties ( ) , ( ) . . . , and 5%) Thus, t h e car will r e f e r to the nearest $(car) referent In &PL, object8 :an be referred to by their identifiers. topicon, unless he is overhearing words addressed to someone e l s e ) whatever other referents are in the vicinity, so it would be o t i o s e to modify a d e i c t i c w i t h a genitive NP.--f f -v --J I M M VSo far I have discussed only referents corresponding to ndun-phraaes in syntax, and representing individuals in the outside world. However, some referents will represent what would more normally be c a l l e d 'facts' or 'events' than 'individuals'.Ordinary predicate logic distinguishes sharply between individuals on the one hand, and facts or events on the other: the former are translated i n t o singular terms, facts, as well as things, may be denoted by s u i t a b l e linguistic expressions, then suppose t h a t topicon contains referents representing facts (propositional referents) as well a s referents representing things (individual referents).We w i l l suppose f u r t h e r t h a t the referents in a topicon are linked in a graph structure in which propositional referents dominate n-tuples of (propositional or individual) r e f e r e n t s , corresp-onding t o the arguments of t h e respective propositions. Consider e.g. one who knows that someone called John bought a car: his topicon will contain a structure of t h e following form:In ( 4 is a car, while r represents the fact t h a t the t h i n g repre--3 sented by 2, is a John ('is called "Johnu1. as we usually say in the case of proper names).To say that a referent, say I&, has the property § ( c a r ) , is to say t h a t there is some referent , in this case r which dominates t h e I-tuple rand which is -5' labelled car. The number of referents dominated by a given referent in a topicon will corrdlate with the label of the latter referent. In other words, preterite tense picks out the nearest §(time)referent as CHh he picks out the nearest §(male)referent. 
The graph structure i n t o which an individual referent enters can be used to pick out that referent by mebs of r e l a tive clauses. Thus if the c a r refers to g2, then the IE: The principle that each sentence received by a hearer creates a new referent in the hearer's topicon suggests a natural way of reconstructing within the theory the notion of a focus of attention9, which varies with the t o p i c s being d i scussed: we may define the focus of a t t e n t i o n as the most recently-created referent at any given time. The graph structure associated with propositional referents offers a way of formal-izing the notion of distance between referents in the topicon: we may define the distance between any two referents as the minimum number of edges ( e U n e s which link nodes) t h a t must be traversed t"o g e t from one r e f e r e n t to the other. Thus, consider t h e sequence of sentences: (ihJ Johna car.--- and (ii) but before (iii) the hearer's topicon will include the structure of (16), w i t h the focus at 5 (the referent created by (ii)):(16) contains two referents to which hecould refer, namely g,,apd I+; r+ is one edge from the focus and ; , is three edges away. Therefore the theory predicts t h a t he in (iii) will be taken to r e f e r to r + rather than y,, and this prediction seems correct: he in (iii) w i l l be taken t o denote the m a n who has w been h i t , rather than John. (Notice that this cannot be pred i c t e d from t h e situation described: when a driver h i t s a pedestrian, the driver is as likely as the pedestrian to c a l l t h e police. )We m a y no* define the automaton which I claim to represent the mind of a speaker of English. ! b e grammar of t h e subs e t of English we are analysing is as follows: I, you, now, . . .the -NP (of IE) - he it v-3 rwc Deic +Noun + M q , man, . . . e , k~~o w , ...P r ' red, real, ... h -- P P~ + l o vThe finite s e t of predicates of English, together with the phonetic shapes of particles such as the and .of, w i l l be spec--A A P ific to the English language. I would hypothesize that in other respects (17) generates the semantic representations of sentences in any natural language, though the rules which relate the phrase-markers generated by (17) to the correspondi n g surface forms will vary from l a n g u a g s t o language.Some of the latter rules which operate in English w i l l be obvious. The topicon s t a t e to which (19) is input is assumed to be as in ( 2 0 ) below (without the material drawn in d o t t e d l i n e s ) , with t h e current focus at r (indicated by concentric circles):Theowner of the topicon diagrammed i n (20), t o whom (19) i s addressed, i s represented in ( 2 0 ) by g5: he is a man called Dick who has caught and eaten a fish, and who l o v e s t h e denotatum of s7, who is a woman teacher who has bought a horse. The denotatum of g2 is a man called John who has also bought a horse and has eaten a fish which was caught by t h e denotatum 'of El,, a man teacher called Tom, who loves t h e same woman as Dick.We now use mles Rl-R10 to interpret the nodes of ( ? 9 ) , beginning with the leftmost interpretable leaf (since the materi a l on the l e f t of (19) is what i s heard first).Nounl is dominated by IE, so by R9 -Ref (Noun,,) = iz,, , g53; hence by R1 -~e f (NP2) is also jz1, z2, S i m i l a r l y -~e f (9) is Er2\, so, t r i p i a l l y , by R2 Ref(IE4) -is {r2'). . 
R~~( N P~)is under the control of the successor-stale relation is .to be t h e reconstruction within the topicon theory of the p r e t h e o r e t i c a l notion of thinkinq, this characteristic seems desirable: we do not feel that human thought flows along deterministic channels. 26Although the e f f e c t s of most changes of s t a t e in t h e cases of t h e machine-language discussed in 82 and of APL were confined to the automata themselves, in both cases c e r t a i n state-changes were associated with action by t h e automaton on its environment. Thus, whenever an APL-state acquired an object named -3 a a representation of the p r o p e r t y of t h a t obje c t was, printed by t h e system on an output sheet of paper. We m a y imagine that action is linked to thought in this way a l s o in the human case. Suppose some referent in a topicon represents the person who owns that topicon; then it might be that whenever, during a sequence of state-changes controlled by the successor-state relation, the topicon acquires a refer- 19. There are two obvious problems Connected with the notion that the referents in a topicon, which are supposed to correspond to t h e e n t i t i e s of which the topicon-owner is aware and the propositions he believes, are created by input sentences. The first problem is that no allowance is made for the possibility that speakers are not believed. Thus, i f the topiconowner hears John, the denotatum of s2, say I b a t a car yest- The second problem i s that it i s simply untrue that a person acquires b e l i e f s about t h e existence of entities and the truth of propositions only by being told about them, I may come to believe that there exists a red car either because John tells me that he has bought a red car and I believe h i m , or because I see the red car; similarly, I may come to believe that John bought t h e red car either because he t e l l s me so or because I watched the transaction take place. The car may subsequently be denoted by the phrase the red car, and the pro- I diagram the two cases in (21) and ( 2 2 ) , on the next page. The part of the diagram in solid lines is t h e same in each case, and represents part of the hearer's topicon before the change of s t a t e . In (21) the dotted l i n e s represent t h e effect on the t o p i c a of aeeing John buy a car; in t h i s case, since t h e topicon owner sees the car, r e may assume t h a t he adds some further facts about it (such as t h a t it is red) to his topicon.In (22) t h e dotted lines show the r e s u l t instead of hearing John say I b s t a car. In this case, the referent represent--N W ing the car will be dominated j u s t by the car node and t h e node, since the hearer has no independent information about it. Similarly, one can imagine thaf there might be rules of inference taking a topicon from the s t a t e created by the reception of S k t the door! to a s t a t e which causes the topicon-owner to --shut the door. However, here we come close to the point at which my theory in its present s t a t e breaks down; I defer aiscussing thisAccording to the theom I have sketched, English as gram@ language is not dissimilar to D L , SNOBOL, etc, resembles the l a t t e r in that i t s states consist of arrays ~b j e c t s &awn from a specified class (although the precis structure of the arrays is different as between English a the artificial programming languages, as it ie between the latter themselves), and io. 
that the structural descriptions of its sentences include a subclass of nodes which pick out objects from the current state and another subclass which add new objects to the current state. English differs from APL, SNOBOL, etc., in lacking identifiers, and in using the property of distance between objects in a state in order to identify objects. My theory is certainly inadequate to account for many quite elementary facts about English and other natural languages. It may be that its deficiencies are too great for the theory to merit consideration. However, I would argue that it ... Before discussing the objections to it, let me mention a number of points to which my theory offers satisfactory solutions. In the first place, the theory is attractive simply because it offers an answer (even if the answer eventually turns out to be wrong) to the question why humans should spend so much of their time exchanging the abstract structures called 'sentences': unlike cultivating the ground or building houses, the utility of this occupation is not immediately apparent to the observer (Sampson 1972a, 1975: 133-6). In my theory, the exchange of sentences, like direct observation of the environment, helps humans build up a complex but finite 'map' or 'model' of the world, a model which can be described in quite concrete terms and which controls the human's actions in ways which, again, in principle should be quite explicitly definable. The notion 'model' is of course ... The theory can offer no explanation of the syntactic distinction between nouns and adjectives, which serves no obvious semantic function; however, since the distinction appears to be universal in natural languages, my account of English semantic representations will incorporate it. (The solution to this puzzle may have to do with the fact that some adjectives are 'syncategorematic' in a way which nouns never are: a 'good actor' is not necessarily good though he is necessarily an actor.) ... [The theory makes claims] about the psychological machinery involved in the comprehension of natural language, based on comparing the structure of natural language with that of actual computer programming languages in practical use. ... By comparison with Winograd I am less interested in the practical problems of communicating with an automaton in idiomatic, 'surface-structure' English, and more interested in what characteristics of the human language-processing automaton are suggested by those features of English which appear to be universal. It is usual to distinguish the terms automaton and computer: an automaton is a mathematical abstraction of a certain kind, while a computer is a physical object designed to embody the properties of a particular automaton (cf. Putnam [1960] 1961: 147), as an ink line on a sheet of graph paper is designed to embody the properties of a continuous function; thus e.g. a computer, but not an automaton, may break down, as a graph, but not a function, may be smudged. Naturally, though, the only automata for which there exist corresponding computers are automata which it is both possible and useful to realize physically; so the class of computers represents a rather narrow subset of the class of automata as defined below. We shall sometimes speak of 'computers' meaning 'automata of the class to which actual computers correspond'; category mistakes need not bother us if we are alert to their dangers.
We may define an automaton as a quadruple (S, L, Int, Suc), in which S is a (finite or infinite) set of states, L is a (finite or infinite) language (i.e. a set of strings of symbols), Int is a partial function from S × L (the Cartesian product of S with L) into S (the input function), and Suc is a binary relation on S (the successor-state relation). We treat the flow of time as a succession of discrete instants (corresponding to cycles of actual computers). Between any adjacent pair of instants, the automaton is in some state S ∈ S. At any given instant, a program may be input. If the automaton is in state S and program P is input, it moves to the state Int(S, P); if (S, P) ∉ dom(Int), we say that P is undefined for S (and no change of state occurs). If no program is input, the automaton moves to some state S' such that S Suc S', provided there is such a state S'. (Otherwise, no change of state occurs, and S is called a stopping state.) If Suc is a (partial) function (i.e. if for each S there is at most one state S' such that S Suc S'), the automaton is deterministic. The states of an actual computer may be identified with the distributions of electrical charge (representing the digits 0 and 1) over the ferrite cores in its store together with a set of working registers and an address counter. The number of states of such an automaton is finite but very large: a simple computer with a store containing 4096 words of 16 bits together with a single working register would have on the order of 5 × 10^19736 states. The programs of the machine language of such an automaton will consist of sequences of machine words not exceeding the size of the store, and thus the machine language will again be finite. The input of such a program containing, say, n words will cause the automaton to load these words into the first n places in its store, replacing the current contents, and to set the address counter to 1. The successor-state function is determined by the number in the address counter together with the code translating machine words into instructions: whenever the counter contains the number i, the automaton changes its state by executing the instruction in the i-th place in store and incrementing the counter by one. A proper subset of the automaton's states are stopping states: whenever the storage word indicated by the address counter is not the code of any instruction, the machine stops. For any state S of a deterministic automaton, we may use the term succession of S for the sequence of states the automaton will pass through under the control of its successor-state function, beginning with S and ending (if the succession is finite) at a stopping state. A computer is arranged so that, on entering certain states, it performs certain output actions (e.g. it prints a symbolic representation of part of its internal state onto paper). The art of programming such a computer consists of finding an input program which moves the computer into a state the succession of which causes the computer to perform actions constituting a solution to the programmer's problem, while being finite and as short as possible.
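The quadruple definition admits a very small executable rendering. The sketch below is an illustration only, not anything defined in the paper: the dictionary encoding of Int, the function encoding of Suc, and the toy states and programs are all assumptions made for the example.

class Automaton:
    """An automaton as a quadruple (states, language, Int, Suc).

    Int maps (state, program) pairs to states; it is partial, so a
    missing pair means the program is undefined for that state.
    Suc is given here as a function (state -> state or None), which
    makes the automaton deterministic; None marks a stopping state.
    """
    def __init__(self, int_map, suc):
        self.int_map = int_map
        self.suc = suc

    def input(self, state, program):
        # If (state, program) is not in dom(Int), no change of state.
        return self.int_map.get((state, program), state)

    def succession(self, state):
        """States passed through under Suc, ending at a stopping state."""
        trace = [state]
        while (nxt := self.suc(trace[-1])) is not None:
            trace.append(nxt)
        return trace

# Toy example: states are integers, Suc halves even numbers and stops on
# odd ones; the single program "reset" moves any state to 8.
a = Automaton(
    int_map={(s, "reset"): 8 for s in range(16)},
    suc=lambda s: s // 2 if s % 2 == 0 else None,
)
print(a.succession(a.input(5, "reset")))  # -> [8, 4, 2, 1]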
Linguists have argued that the relation between sentences and their semantic representations is defined by the transformational rules revealed by independently-motivated syntactic analysis (e.g. Postal [1970] 1971: 252f.); although this hypothesis is certainly not altogether correct (see e.g. Partee 1971), it seems likely that the semantic representation of a sentence is some simple function of its syntactic deep structure and its surface structure. The rules of inference for natural languages will no doubt exhibit the 'structure-dependence' characteristic ... For discussion of the philosophical problems involved in this way of describing natural-language semantic analysis, including problems relating to the analytic/synthetic distinction, cf. Sampson (1970, 1973a, 1975a: ch. 7). It is tempting to view the mind of a speaker of e.g. English as an automaton in the defined sense, with the sentences of English as the programs of its machine language, and the rules of inference of English determining the successor-state relation. In other words, some component of the mind of an English-speaker would be a device capable of entering any one of a (perhaps infinitely) large number of discrete states [footnote: the relation between sentences and semantic representations is here treated as a function; in practice it will not be one (ambiguous sentences will have more than one semantic representation), so 'the' should read 'one of the ... (respectively)']; hearing (or reading) a sentence would move this device from one state to another in accordance with definite rules; and other rules related to the rules of inference of English would govern how it passes through different states when not immediately reacting to speech (i.e. when the owner of the mind is thinking). Although the analogy is tempting, extant computers and their machine languages are not promising as sources for a theory of the relation between human minds and natural languages. The machine language sketched above is not at all reminiscent of natural languages. The latter typically contain infinitely many sentences, only the simplest of which are used in practice; the machine language of §2 contains an enormous but finite number of programs, and the programs which are useful in practice (those which compute important functions) are not typically 'simple' in any obvious sense. Fortunately, the machine languages of the various extant computers are not the only artificial programming languages in use. Partly for the very reason that machine languages are so different from natural languages, most programs are written not in machine languages but in so-called 'high-level' programming languages, such as FORTRAN, SNOBOL, APL, PL/1 (to name a few among many). High-level languages, and the abstract automata whose 'machine languages' they are, differ from one another in more interesting ways than do real computers and their machine languages; and furthermore (not surprisingly, since high-level languages are designed to be easily usable by human programmers) they are much more comparable with human languages than are real machine languages. (Typically, a high-level programming language is a context-free phrase-structure language, for instance.) I shall suggest that the relationship between high-level languages and their corresponding automata gives us much better clues about human mental machinery than does that between real computers and machine languages. Let me first give an example of a high-level language: I shall choose the language APL (see e.g. Iverson 1962, Pakin 1968). APL is interesting for our purposes because it is particularly high-level, i.e.
it is related more distantly to the machine languages of real computers, and more closely to human languages, than many other high-level languages. It is a real-time rather than batch-processing language, which means that it is designed to be used in such a way that the result of inputting a program will normally be crucially dependent on the prior state of the system (in a batch-processing language, programs are designed to be unaffected by those remains of the prior state which survive their input): this is appropriate for an analogy with human language, since presumably the effect on a person of hearing a sentence depends in general on his prior system of knowledge and belief. The complete language APL includes many features which are irrelevant to our analogy. For instance, there is a large amount of apparatus for making and breaking contact with the system, and the like; we shall ignore this, just as we shall ignore the fact that in human speech the effect of an utterance on a person depends among other things on whether the person is awake. Also, APL provides what amounts to a method of using the language to alter itself by adding new vocabulary; to discuss this would again complicate the issues we are interested in. We shall assume that programmer and system are permanently in contact with one another, and shall restrict our attention to a subset of APL to be defined below: rather than resorting to a subscript to distinguish the restricted language from APL in its full complexity, we shall understand 'APL' to mean the subset of APL under consideration. [Footnote 5:] The practicing computer user may find my definition of the real-time/batch-processing distinction idiosyncratic; the difference I describe is the only one relevant for our present purposes, but it is far from the most salient difference in practice. [Footnote 6:] In APL terms, we ignore all system instructions, i.e. words beginning with ')'. Note that we use wavy underlining (corresponding to bold type in print) to quote sequences of symbols from an object-language, whether this is an artificial language such as APL or a natural language such as English. [Footnote 7:] In APL terms, we ignore the definition mode and the use of the character ∇. We begin by defining the set S_APL of states of APL. First, we recursively define a set of APL-properties: ... [a string over a set S is a function from an initial segment (1, ..., n) of the natural numbers into S; note that the null set ∅ is therefore the length-0 string over any set] ... any positive ... [an APL-identifier begins] with an alphabetic character: there are therefore infinitely many APL-identifiers. We define Ident as the set including all identifiers together with an entity, assumed to be distinct from all the APL-identifiers, denoted by the symbol ⎕. deic → [any of a small finite set of symbol-strings denoting ...] [Footnote 9:] In practice one cannot write a length-0 string, and one cannot distinguish a length-1 string from a rank-0 property ... The sentences of L_APL are the strings defined by the above grammar, disambiguated by the use of round brackets (with association to the right where not indicated by bracketing). The sequence of symbols ⎕← may optionally be deleted when initial in a sentence. Clearly there are infinitely many sentences in L_APL.
A sentence of L_APL is an APL-program. We now go on to specify the function Int_APL from S_APL × L_APL into S_APL, which specifies the change of APL state brought about by a given APL-program. To determine the new state arrived at from an arbitrary current state on input of an arbitrary program, we consider the phrase-marker of which that program is the terminal string. [Footnote 10:] I ignore these practical complications for the sake of simplicity. [Footnote 11:] I ignore complications relating to strings containing the inverted-comma character. [Footnote 12:] Some of these functions, and their names, are common to all 'dialects' of APL: e.g. !, which denotes the function taking integers into their factorials, strings of integers into the corresponding strings of factorials, etc., and which is undefined e.g. for literal APL-properties. The facility of 'user definition' (cf. note 7) permits a programmer to alter APL by adding new functions. APL contains no triadic functions other than user-defined ones. Beginning at the leaves and working towards the root, and evaluating the rightmost node whenever there is a choice, we associate each dscr node with an APL-property as its denotation and each sent node with a change to be made to the current APL state. A sent node dominating a member i of Ident, followed ... Suppose the program is input in the morning, say at 11.30 a.m. Then dscr4 will denote the string 11 30 0. The function > takes (12 0 0, 11 30 0) into 1 0 0, which becomes the denotation of dscr3; in fact dscr3 will denote 1 0 0 whenever the program is input in the morning and 0 0 0 whenever it is input in the afternoon (when the hour integer will be 14 or more). The monadic function +/ adds the numbers in a string, so if dscr3 denotes 1 0 0 then dscr2 denotes 1. Dscr6 denotes 10 (identified by i), so dscr5 also denotes 10. Accordingly, ... [Footnote 13:] In the full version of APL, ⎕ can occur as a rewrite of dscr, in which case dscr is assigned an APL-property input by the programmer at the time dscr is evaluated by the system. We ignore this, since it interferes with the analogy with natural language. In the full version it is also possible to output symbol-strings which do not represent individual APL-properties; again we ignore this. ... is a stopping state. A programmer working in APL has no wish for the system to take actions beyond those specified by his programs: by defining monadic, dyadic, or triadic functions of any complexity he wishes, he can get the answers to his questions simply by carrying out the state-changes specified in his program. (In the machine language of a genuine computer, on the other hand, the state-changes brought about by programs are of no intrinsic interest, and the input of a program is of value only in that it brings the computer to a state from which it proceeds spontaneously to perform actions useful to its programmer.) Appendix:
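The morning/afternoon program discussed above can be mimicked outside APL. The sketch below only illustrates the componentwise > and the +/ sum reduction; the dscr variable names echo the (partly garbled) node labels of the example and are assumptions, as is the use of the machine's local clock in place of the APL time-of-day facility.

import time

def gt(xs, ys):
    """Componentwise '>' on equal-length number strings, as in APL."""
    return [1 if x > y else 0 for x, y in zip(xs, ys)]

def sum_reduce(xs):
    """The monadic '+/': add up the numbers in a string."""
    return sum(xs)

now = time.localtime()
dscr4 = [now.tm_hour, now.tm_min, now.tm_sec]   # e.g. 11 30 0
dscr3 = gt([12, 0, 0], dscr4)                   # 1 0 0 in the morning
dscr2 = sum_reduce(dscr3)                       # 1 before noon, else 0
print("morning" if dscr2 == 1 else "afternoon")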
null
null
null
null
{ "paperhash": [ "fillmore|verbs_of_judging:_an_exercise_in_semantic_description" ], "title": [ "Verbs of judging: An exercise in semantic description" ], "abstract": [ "(1969). Verbs of judging: An exercise in semantic description. Paper in Linguistics: Vol. 1, No. 1, pp. 91-117." ], "authors": [ { "name": [ "C. Fillmore" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "143260598" ], "intents": [ [] ], "isInfluential": [ false ] }
null
593
0
null
null
null
null
null
null
null
null
02219d7b599b738e3659e22fe499e619934bea0b
64541562
null
Speech Generation from Semantic Nets
Natural language output can be generated from semantic nets by processing templates associated with concepts in the net. A set of verb templates is being derived from a study of the surface syntax of some 3000 English verbs. The active forms of the verbs have been classified according to subject, object(s), and complement(s); these syntactic patterns, augmented with case names, are used as a grammar to control the generation of text. This text in turn is passed through a speech synthesis program and output by a VOTRAX speech synthesizer. This analysis should ultimately benefit systems attempting to understand English input by providing surface-structure to deep-case-structure maps using the same templates as employed by the generator. This research was supported by the Defense Advanced Research Projects Agency of the Department of Defense and monitored by the U.S. Army Research Office under Contract No. DAHC04-75-C-0006.
{ "name": [ "Slocum, Jonathan" ], "affiliation": [ null ] }
null
null
null
1975-11-01
0
5
null
null
they must speak, or at least write, the user's natural language. ... major arguments. First, the subject must be ...
null
null
null
Main paper: If computers are to communicate effectively with people, they must speak, or at least write, the user's natural language. ... major arguments. First, the subject must be ... Appendix:
null
null
null
null
{ "paperhash": [], "title": [], "abstract": [], "authors": [], "arxiv_id": [], "s2_corpus_id": [], "intents": [], "isInfluential": [] }
null
591
0.00846
null
null
null
null
null
null
null
null
96661895ad291325ce8bd651e46bced265d78d89
219304841
null
Contextual Reference Resolution
With the exception of pronominal reference, little has been written (in the field of computational linguistics) about the phenomenon of reference in natural language. This paper investigates the power and use of reference in natural language, and the problems involved in its resolution. An algorithm is sketched for accomplishing reference resolution using a notion of cross-sentential focus, a mechanism for hypothesizing all possible contextual references, and a judgment mechanism for discriminating among the hypotheses.
{ "name": [ "Klappholz, David and", "Lockman, Abe" ], "affiliation": [ null, null ] }
null
null
null
1975-11-01
3
13
null
The reference resolution problem

The present work began as an attempt to develop a set of algorithms and/or heuristics to enable a primitive-based, inference-driven model of a natural language user (Schank 1972, Rieger 1974) to properly resolve pronominal references across sentence boundaries. The authors quickly realized, however, that the problem of pronominal reference resolution is only a small aspect of a problem which might be termed nominal reference resolution, itself but a small aspect of the problem of the coherence of a text (or conversation), i.e. the manner in which it 'means' more than the logical conjunction of the meanings of its individual constituent sentences. Examples of the first problem, i.e. pronominal reference resolution, are given in sentence sequences 1-4 below.

1. Yesterday some boys from our village chased a pack of wild dogs; the largest one fell into a ditch.
2. The wild dogs which forage just outside our village suffer from a strange bone-weakening disease. Yesterday some boys from our village chased a pack of wild dogs; the largest one broke a leg and fell into a ditch.
3. Yesterday John chased Bill half a block; he was soon out of breath.
4. My friend Bill has an extremely severe case of asthma. Yesterday John chased Bill half a block; he was soon out of breath.

The problem in utterance (text, conversation, etc.) excerpts of the above type is that of determining the referents of the various occurrences of the pronouns 'one' and 'he'. For the moment we simply note that the usually preferred referents of the two occurrences of 'one' are 'boy' and 'dog' (examples 1 and 2 respectively), and those of the two occurrences of 'he' are 'John' and 'Bill' (examples 3 and 4 respectively). The more general problem of nominal reference resolution is exhibited in the following annotated excerpt from a recent newspaper article (N.Y. Times 7/15/75, byline Arnold Lubasch); subscripted bracketing of the excerpt is intended only to enable later reference to specific parts of the text. ... (Clark (1975) discusses the problem from a viewpoint different from that of this paper.) The reader who remains unconvinced by the examples above that local context (and specific world knowledge relating to local context) must play a crucial role in reference resolution is asked to consider the two sentence sequences 5a, 6, and 5b, 6.

5. a. The founding fathers had a difficult time agreeing on how the basic laws governing our country should be framed.
   b. Those foolish people at the country club have spent an incredible amount of time arguing about club rules.
6. The second article of the constitution, for example, was argued about for months before agreement was reached.

In sentence sequence 5a, 6, 'the second article' clearly refers to the second article of the constitution of the United States, while in sentence sequence 5b, 6, the reference is to the second article of the constitution of the country club. In each case the only factor involved in resolving the reference is the semantic content of its local context, in this case the meaning of the sentence preceding the one in which the reference occurs. ... In order for textual occurrences of such proper-noun-like objects to be properly handled, their standard default referents must be listed in the lexicon.
This is not to say that occurrences of proper-noun-like objects cannot be references to objects occurring previously in the text; rather, it is the case that their default options must also be considered as possible referents. As final examples of the reference resolution problem let us consider sentence sequences 9 and 10 below.

9. The president was shot while riding in a motorcade down one of the major boulevards of Dallas yesterday; it caused a panic on Wall Street.
10. John was invited to tea at the Quimbys' last Saturday; he would have loved to go, but he knew he'd be busy then.

In example 9, while the first sentence of the sequence contains a number of noun objects (president, motorcade, boulevards, Dallas) which are potential referents for the occurrence of 'it' in the second sentence, none of these is, in fact, the proper referent; rather, the proper referent of 'it' is the event (or fact) that 'The president was shot while ...'. In example 10 we have an instance of an adverbial reference ('then') which must be recognized as referring to 'yesterday' rather than to some non-adverbial object occurring in the first sentence of that example. ... In addition it will often miss a quite obvious referent entirely, and, in fact, resolves non-pronominal references only accidentally if at all. Before presenting a sketch of a proposed solution to the nominal reference resolution problem, it would be well to detail more precisely the overall language processing environment within which it is meant to operate and of which it is a most necessary part. First, we assume that a relatively small set, S, of semantic primitives exists ... (ii) there is a one-to-one mapping from meanings of (natural language) sentences to formulas of L. While a set of primitives and a meaning representation language even demonstrably close to satisfying the above conditions have yet to be produced, we will, in examples to follow, make use of meaning representations; the only claim we will make for them is that the functions served by their constituent constructs must be served by the elements of any adequate system. In addition to a meaning representation scheme we will assume an encoding of world knowledge of the sort which a 'typical' adult might possess, again with the same obvious caveat. While the question of translation from natural language sentences to meaning representations will not be touched upon here, we will assume sentence-by-sentence translation of the sort exhibited in various examples to follow. The solution to the reference resolution problem rests in recognizing the fact that reference is an elliptical device, and that the human understander of natural language cannot recapture that which was elided once he is too far from it in the text; in fact, he cannot resolve a reference to a point in the text more than a few sentences back without going back and pondering it (if he can do so at all). We should note that this is true even in the case in which the referent doesn't actually appear in the text, but appears only in an inference from some statement made in the text. In this latter case (a case which we will discuss only at the very end of this paper) the reference is not resolvable (and would not therefore have been made by the creator of the text in the first place) unless the statement from which the inference is made appears shortly before in the text. Though we cannot say precisely how far back is meant by
"shortly before, " it is certainly no m o r e than a few sentences. Fbr a given sentence, S, appearing in a text we will r e f e r to the gequence of sentences preceding S by no more than the intended distance as the focus of S.In t e r m s of computer implementation, we will, in the processing of a text (which we conc-eive of as proceeding sentence-by-sentence), maintain the following focus sets. 11. Stan argued with his s i s t e r F r a n in a n attempt to convince her that she should bring Mary, whom he would like to get to knpw, on their e. The prospect really excites him.f. He arguecl thatf t wouldn't tie Mary up for m o r e than half a day.g. -It's €he best one in the country, you know.h. -She thruughtit was a t w r i b l e idea.i. She happened to be busy then, but expressed a n interest in coming along ahother t h e . h. Both ---@he a& it a r e ambiuous; ifshe is taken to be "Fran, l 1 then it refers to EXENT (Fran will bring Mary ,. .); ifshe is taken to be "Maryll), thenit r e f e r s to EVENT (Mary will come.. . )The point i s , of course, that any item in (the meaning representation of) a sentence, S, m a y be referenced by some item in (the meaning repr eeentation of) a latter sentence.On the other side of the coin the question of identifying potential r e -ferences is just a s important a s that of identifying the seb of all possible referents for an object which is known to reference something.If we were' concerned only with pronomial referenee reaolution, the problem would have a simple solution; every pronoun is a reference.F o r nominal items other than pronouns the problem is far less simple;if a noun occurs in a text just how do we know if there i s a previously occurring nominal item to which it refers? As much a s we would like there to be algorithmically testable criteria, i. e. recognizable syntactic and/or semantic cues, for making the decision, there s e e m to be none.Thus, the mechanism we propose considers every token appearing It i s clear that following step II any further processing of reference hypotheses requires that all members of H be considered relahive to one avther, since the correctness or incorrectness of one may depend crucially upon that of others. In the general case not all hypotheses will turn out to be correct, and in fact some may contradict o t h e r sfor instance in the case of two hypothesis-triples with identical f i r s t and second elements and different third elements.Once it has been created, the set H is submitted to a "judgment mechanismft whose task it i s to choose some of the hypotheses a s valid and others a s invalid. The judgement mechanism must clearly have access to the world knowledge stored in memory, and must be capable of performing inferencing of a sort which produces decisions a s to the relative Eklihoods of the various hypotheses.Before giving example8 of just how such a judgment mechanism might work, we should make it clear that our sense of I1inferencing1l is very different from Riegerls (1974) . In Riegerls sense inferencing is undirected, while ours is directed toward the goal of validati~g hypotheses.There is, in addition, another sense in which the s o r t of inferencing to be done by the judgment mechanism is directed. 
Before giving examples of just how such a judgment mechanism might work, we should make it clear that our sense of 'inferencing' is very different from Rieger's (1974). In Rieger's sense inferencing is undirected, while ours is directed toward the goal of validating hypotheses. There is, in addition, another sense in which the sort of inferencing to be done by the judgment mechanism is directed. The fact that the reasons for validating or throwing out a particular reference hypothesis (on the part of human natural language users) involve the information conveyed in local context as well as world knowledge relating to items contained in that information (and world knowledge relating to items contained in world knowledge relating to items contained in that information, etc.) constitutes a good guess as to the particular pieces of world knowledge and the rules of inference which must be involved in judging that hypothesis. 14 and 15 below contain components of possible meaning representations of the two sentences of sentence sequence 1 at the beginning of this paper. ... The meaning representations proposed for the two sentences are C1 ∧ C2 ∧ C3 ∧ C4 ∧ C5 ∧ C6 and C7 ∧ C8 ∧ C9 ∧ C10 ∧ C11 respectively. Note that we are not claiming that the predicates CHASED and FALL-INTO and the constants YESTERDAY, BOY, DOG, PAST and DITCH are at the level of semantic primitives; rather, the above analyses are at just the level which we need to illustrate the operation of the reference resolution mechanism. Furthermore, the symbols YESTERDAY, BOY, DOG, PAST and DITCH should be taken as pointers to the definitions of the appropriate items encoded in memory in whatever fashion. The bracketing in the notation [A], where A is a pointer to a definition, is meant to be a function which takes A into an object whose meaning is the class of items satisfying the meaning pointed to by A. Once the translation of the first sentence of sequence 1 into its meaning representation has been completed (on the assumption that that sentence is at the beginning of the text being processed) the various focus sets will contain the following: ... After the second sentence is translated, the set, H, of reference-triple hypotheses presented to the judgment mechanism will then be the ... Sentence sequence 2 at the beginning of this paper would be handled in precisely the same manner as sentence sequence 1 up to the point at which 'y3 is a member of x1' and 'y3 is a member of x2' were the remaining hypotheses. The knowledge that the dogs referred to suffer from a strange bone-weakening disease would then cause the judgment mechanism to strengthen the likelihood that 'one' refers to 'dogs', thus causing 'y3 is a member of x2' to be the preferred judgment. Sentence sequence 16 below contains an example of EVENT reference.

16. The president was shot yesterday. It caused a panic on Wall Street.

Omitting all other details of the translation into meaning representation, we simply note that the primitive-level predicate into which 'cause' is translated requires an object of the form EVENT(F) as its subject (i.e. if we say something like 'John caused a stir', what we mean is that John did something and the event (or fact) that he did that caused a stir). Thus, when the second sentence is handled, the only possible referents for 'it' will be the objects contained in the EVENT focus, namely just EVENT (the president was shot yesterday). The judgment mechanism thus must simply decide if the event (or fact) that the president was shot yesterday was likely to have caused a panic on Wall Street, a judgment which, with adequate world knowledge, should certainly be confirmed.
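A correspondingly minimal sketch of the judgment step: it scores each hypothesis triple with a stand-in world-knowledge test and keeps, per token, the most plausible referent, which also resolves contradictory triples (same token and relation, different referents). The scoring function here is a toy assumption; the paper's judgment mechanism is an open, inference-driven component.

def judge(H, plausible):
    """Keep, for each referring token, the hypothesis the world-knowledge
    test rates most plausible; contradictory triples (same first and
    second elements, different third elements) are thereby resolved."""
    best = {}
    for tok, rel, ref in H:
        score = plausible(tok, ref)
        if tok not in best or score > best[tok][1]:
            best[tok] = ((tok, rel, ref), score)
    return [hyp for hyp, _ in best.values()]

# Toy world knowledge: a presidential shooting plausibly causes a panic.
def plausible(tok, ref):
    return 0.9 if "president was shot" in ref else 0.1

H = [("it", "possible-referent", "EVENT(the president was shot)"),
     ("it", "possible-referent", "EVENT(Bill told me about the shooting)")]
print(judge(H, plausible))  # keeps the shooting event as referent of 'it'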
Sentence sequence 17 is a very similar case.

17. The president was shot yesterday. Bill told me all about it. It caused a panic on Wall Street.

In order to resolve the reference 'it' in the last sentence of 17, the judgment mechanism would have to decide on the relative likelihoods of (i) and (ii) below:

(i) The event (or fact) that the president was shot yesterday caused a panic on Wall Street.
(ii) The event (or fact) that Bill told me about the president being shot yesterday caused a panic on Wall Street.

Again, with the availability of reasonable world knowledge about such things as presidents, their being shot, and panics, the judgment mechanism should be able to choose the proper referent for 'it'. While a fully detailed specification of the judgment mechanism must await further investigation, the above examples should illustrate, at least in part, the manner in which we conceive of its operation. The phenomenon with which we have been dealing is one example of what we would like to call the 'creative' aspect of language use; more specifically, reference of the sort we have described (and attempted to handle) is an elliptical device necessary for effective communication; moreover, it is a device which exhibits the ability of language to 'change the ground rules' in a very flexible and fluid manner in response to context. At this point we must admit that there is an even more creative type of reference than the sort we have dealt with. 18 below is an example of this type of reference.

18. Last week I caught a cold while visiting my mother in Chicago; as usual, the chicken soup had too much pepper in it.

The interesting reference in the above example is 'chicken soup'. There is no item in the first sentence to which it is directly related; on the other hand, few people have any trouble resolving it by interpolating between the two sentences of example 18 the idea expressed in sentence 19 below:

19. When I get sick my mother makes me chicken soup.

If sentence 19 were available, our reference resolution mechanism would easily come up with an identity relation between the two occurrences of 'chicken soup'. Obviously, for our proposed mechanism to resolve this reference, some sort of inferencing must first work on the first sentence of 18 to produce the meaning of 19 as an inference. Thus it is clear that reference resolution and general inferencing must be interleaved. The mechanism proposed above does not handle the entire problem. It does, however, seem to be a minimal model of reference resolution (minimal in the sense that at least this much must be going on). In addition, it provides for that control over the use of general inferencing which is required to avoid a combinatorial explosion (BOOM).
null
null
null
null
Main paper: Appendix:
null
null
null
null
{ "paperhash": [ "rieger|conceptual_memory:_a_theory_and_computer_program_for_processing_the_meaning_content_of_natural_langu" ], "title": [ "Conceptual memory: a theory and computer program for processing the meaning content of natural langu" ], "abstract": [ "Abstract : Humans perform vast quantities of spontaneous, subconscious computation in order to understand even the simplest language utterances. The computation is principally meaning-based. With syntax and traditional semantics playing insignificant roles. This thesis supports this conjecture by synthesis of a theory and computer program which account for many aspects of language behavior in humans. It is a theory of language and memory." ], "authors": [ { "name": [ "C. Rieger" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "53880811" ], "intents": [ [ "background", "methodology" ] ], "isInfluential": [ true ] }
null
591
0.021997
null
null
null
null
null
null
null
null
e60284e4331d89ebf669d2d6a5cc53b77037612f
219304927
null
{PHLIQA} 1: Multilevel Semantics in Question Answering
This paper outlines a recently implemented question answering system, called PHLIQA 1, which answers English questions about a data base. Unlike other existing systems, which directly translate a syntactic deep structure into a program to be executed, PHLIQA 1 leads a question through several intermediate stages of semantic analysis. In every stage the question is represented as an expression of a formal language. The paper describes some features of the languages that are successively used during the analysis process: the English-oriented Formal Language, the World Model Language and the Data Base Language. Next, we show the separate conversion steps that can be distinguished in the process. We indicate the problems that are handled by these conversions, and that are often neglected in other systems.
{ "name": [ "Medema, P. and", "Bronnenberg, W. J. and", "Bunt, H. C. and", "Landsbergen, S. P. J. and", "Scha, R. J. H. and", "Schoenmakers, W. J. and", "van Utteren, E. P. C." ], "affiliation": [ null, null, null, null, null, null, null ] }
null
null
null
1975-11-01
3
8
null
PHLIQA 1 is an experimental system for answering isolated English questions about a data base. We have singled this out as the central problem of question answering, and therefore postponed the treatment of declaratives and imperatives, as well as the analysis of discourse, until a later version of the system. The data base is about computer installations in Europe and their users. At the moment it is small and resides in core, but its structure and content are those of a realistic Codasyl-format data base on disk (CODASYL Data Base Task Group [1971]). Only one module of the system, the 'evaluation component', would have to be changed in order to handle a 'real' data base. (fig. 1): Understanding the question: translating the question into a formal expression which represents its meaning with respect to the world model of the system. Computing the answer: elaborating this expression, thereby finding the answer; it is represented in the system's internal formalism. Formulating the answer: translating this answer into a form that can be more readily understood. ... [The World Model Language represents the meaning of a question] with respect to the world model of the system. Its constants correspond to the concepts that constitute the universe of discourse. The language is independent of the input language that is used (in this case English), and also independent of the storage structure of the data base. If we now look at a further subdivision of the components, the difference between PHLIQA 1 and other systems becomes apparent. [Diagram: semantic deep structure (EFL expression) is carried by the EFL-WML translation, using knowledge of world structure, into an expression of the World Model Language, and then by the WML-DBL translation into a Data Base Language expression.] What the semantic types of its immediate sub-expressions are allowed to be. (There is never a restriction on the syntactic form of the sub-expressions.) How the semantic type of the resulting expression is derived from the semantic types of the immediate sub-expressions. Given the types of the elementary expressions (the constants and variables), [the types of all expressions can be derived]. A constant representing a single object has a simple type: e.g., 6 has the type "integer". A constant representing a collection of objects of type α has a type of the form <α>: e.g., 'companies' has the type "(company)", 'integers' has the type "(integer)". ... SEMk. The rule is applicable if at least one of the conditions COND is true. Then SEM is constructed according to ACTION. ... A simpler example is the specification of the subject in a clause like 'to use a computer'. The semantic surface structure of this clause means: there is a use-situation, with some computer as its object, and an unspecified subject. Phase 2 can be said to 'disambiguate' this expression in a context like 'when did Shell start to use a computer?'. A transformation specifies the subject of the use-situation as 'Shell'. This transformation would not apply if we had the verb 'propose' instead of 'start'. The conditions of phase 2 and phase 3 contain a 'shortcut' to the world model: the semantic types of the world model interpretations of the EFL constants are inspected in order to avoid the construction of semantic deep structures that have no interpretation in the world model. This blocks many unfruitful parsing paths.
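The type discipline just described (simple types versus set types of the form <α>, and rules whose applicability is conditioned on the semantic types of sub-expressions) can be made concrete. The sketch below is an assumption-laden illustration in Python; PHLIQA 1 itself was written in SPL, and the names Simple, SetOf and type_of_count are invented for the example.

from dataclasses import dataclass

@dataclass(frozen=True)
class Simple:          # e.g. "integer", "company"
    name: str

@dataclass(frozen=True)
class SetOf:           # "<alpha>": a collection of objects of type alpha
    element: object

INTEGER = Simple("integer")
COMPANY = Simple("company")

def type_of_count(arg_type):
    """Toy rule: 'count' applies to any set and yields an integer.
    The rule is applicable only if the condition on the argument's
    semantic type holds, as in the COND/ACTION rules of the paper."""
    if isinstance(arg_type, SetOf):
        return INTEGER
    raise TypeError(f"count is undefined for {arg_type}")

companies = SetOf(COMPANY)          # the constant 'companies': type (company)
print(type_of_count(companies))     # -> Simple(name='integer')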
The present convertor detects such expressions and can generate a message which specifies what information is lacking. Examples of this case are: the set "integers" (if the attempt of the previous convertor to eliminate it has been unsuccessful), and the date-of-taking-out-of-use of a computer (which happens not to be in the data base). 3. Paraphrase of the DBL expression, in order to improve the efficiency of its evaluation. The DBL expression produced by the previous convertor can already be evaluated, but it may be possible to paraphrase it in such a way that the evaluation of the paraphrased expression is more efficient. This conversion is worthwhile because, even with our small data base, the evaluation is often the most time-consuming part of the whole process; compared to this, the time that the transformations take is negligible.
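A hedged sketch of this detect-or-translate behaviour, assuming Python and an invented WML-to-DBL constant table (the paper does not reproduce the real tables): constants with a data-base counterpart are rewritten, and for the rest a message is buffered that says what information is lacking.

```python
# Hypothetical WML-to-DBL constant table; the entries are illustrative only.
WML_TO_DBL = {
    "companies": "db_companies",
    "computers": "db_computers",
}

def wml_to_dbl(constants):
    """Translate WML constants to DBL, buffering 'information lacking' notes
    for constants the data base cannot represent."""
    translated, messages = [], []
    for c in constants:
        if c in WML_TO_DBL:
            translated.append(WML_TO_DBL[c])
        else:
            # e.g. the date of taking a computer out of use, which is
            # expressible in WML but absent from the data base
            messages.append(f"the data base contains no information for '{c}'")
    return translated, messages

exprs, notes = wml_to_dbl(["companies", "date_out_of_use"])
print(exprs)  # ['db_companies']
print(notes)  # ["the data base contains no information for 'date_out_of_use'"]
```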
The value of a Data Base Language expression is completely defined by the semantic rules of the Data Base Language (see section 3.2), and one could conceive of an algorithm that corresponds exactly to these rules. For reasons of efficiency, the actual algorithm differs from such an algorithm in some major respects: in evaluating quantifications over sets, it does not evaluate more elements of the set than is necessary for determining the value of the quantification; if (e.g. during the evaluation of a quantification) a variable assumes a new value, this does not cause the re-evaluation of any subexpressions that do not contain this variable. Currently, evaluation occurs with respect to a small data base in core. To handle a real data base on disk, only the evaluation of constants would have to change. Sections 4 through 7 sketched what the basic modules of the system (the "convertors") do. We shall now make some very general remarks about the way they were implemented. These remarks apply to all convertors except the parser, which is described in some detail by Medema [1975]. The convertors can be viewed as functions which map an input expression into a set of zero or more output expressions. Such a function is defined by a collection of transformations acting on subexpressions of the input expression. Each transformation consists of a condition and an action. The action is applied to a subexpression if the condition holds for it. The action can either be a procedure transforming a subexpression to its "lower level equivalent", or it can be the decision "this subexpression cannot be translated to the next lower level". All convertors are implemented as procedures which operate on the tree that … If the answer is found, it is displayed. If requested, the system can continue its search for more interpretations. If the answer level is not reached, it displays the buffered message from the "lowest" convertor that was reached. The PHLIQA 1 program was written in SPL (a PL/1 dialect), and runs under the MDS time-sharing system on the Philips P1400 computer of the Philips Research Laboratories at Eindhoven. The quantification-disambiguation phase of the EFL-WML translation, the efficiency conversion (step 3) in the WML-DBL translation, as well as some parts of the grammar, are not yet part of the running system, though the convertors are completely coded and the grammar is elaborately specified. During the design of PHLIQA 1, the PHLIQA project was coordinated by Piet Medema. He and Eric van Utteren designed the algorithmic structure of the system and made decisions about many general aspects of implementation. The formal languages and related transformation rules were designed by Harry Bunt, Jan Landsbergen and Remko Scha. Wijnand Schoenmakers designed the evaluation component. Jan Landsbergen wrote a grammar for an extensive subset of English. All authors were involved in the implementation of the system. During the design of PHLIQA 1, extensive discussions with members of the SRI Speech Understanding team have helped us in making our ideas more explicit.
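The condition/action organisation of the convertors can be sketched as follows. This is a minimal illustration in Python, not the SPL implementation; the tree representation and the toy rules are invented.

```python
# Hypothetical convertor: a collection of (condition, action) transformations
# applied to subexpressions of a tree. An action either rewrites a
# subexpression to its lower-level equivalent or returns None for the
# decision "this subexpression cannot be translated to the next lower level".
def make_convertor(transformations):
    def convert(expr):
        # expr is a tree: either a leaf (str) or (operator, [children])
        if isinstance(expr, str):
            for condition, action in transformations:
                if condition(expr):
                    return action(expr)
            return expr  # no rule applies: keep the leaf unchanged
        op, children = expr
        converted = [convert(c) for c in children]
        if any(c is None for c in converted):
            return None  # some subexpression has no lower-level equivalent
        return (op, converted)
    return convert

# Usage: one rule rewrites "companies", another refuses "integers".
to_dbl = make_convertor([
    (lambda e: e == "companies", lambda e: "db_companies"),
    (lambda e: e == "integers",  lambda e: None),
])
print(to_dbl(("count", ["companies"])))  # ('count', ['db_companies'])
print(to_dbl(("count", ["integers"])))   # None
```

The efficiency measure for quantifications mentioned above amounts to short-circuit evaluation, which can be shown in the same illustrative style: the evaluator stops as soon as the value of the quantification is determined, rather than evaluating every element of the set as a literal reading of the semantic rules would require.

```python
# Sketch of short-circuited quantification over a set (illustrative names).
def some(predicate, elements):
    for x in elements:
        if predicate(x):
            return True   # later elements are never evaluated
    return False

def every(predicate, elements):
    for x in elements:
        if not predicate(x):
            return False  # stop at the first counterexample
    return True
```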
null
null
null
null
null
null
null
{ "paperhash": [ "madama|a_control_structure_for_a_question-answering_system" ], "title": [ "A Control Structure For A Question-Answering System" ], "abstract": [ "The c o n t r o l s t r u c t u r e o f a q u e s t i o n a n s w e r i n g s y s t e m i s d e r i v e d f r o m a s e t o f b a s i c a s s u m p t i o n s . F o r i t s d e s i g n and i m p l e m e n t a t i o n , a n o r m a l a l g o r i t h m i c l a n g u a g e i s u s e d . The d e s i g n l e a d s t o \" s y m m e t r i c \" p r o c e d u r e s , t h e r e s u l t i n g p r o g r a m s u s e t h e s t a c k m e c h a n i s m f o r r e c u r s i o n t o g i v e t h e c o r r e c t s t o r a g e a l l o c a t i o n . Two s t r a t e g i e s a r e d i s c u s s e d i n more d e t a i l : a t r i v i a l w e l l k n o w n d e p t h f i r s t s t r a t e g y and a new o n e , c a l l e d f l e x i b l e d e c i s i o n t r e e s t r a t e g y . N o l i n g u i s t i c a s p e c t s ( e i t h e r s y n t a c t i c o r s e m a n t i c ) o f t h e s y s t e m a r e d i s c u s s e d I n t r o d u c t i o n The a u t h o r and h i s c o l l e a g u e s h a v e d e v e l o p e d a p r o t o t y p e o f a q u e s t i o n a n s w e r i n g s y s t e m . A d o c u m e n t d e s c r i b i n g t h e w h o l e s y s t e m and t h e c o n s i d e r a t i o n s t h a t h a v e l e a d t o t h e d e s i g n h a s n o t y e t b e e n p u b l i s h e d , b u t w i 1 1 a p p e a r b e f o r e l o n g . A s t h i s p a p e r d i s c u s s e s t h e a l g o r i t h m i c a s p e c t s o n a f a i r l y a b s t r a c t l e v e l , o n l y a r o u g h s k e t c h o f t h e s y s t e m ' s p r o p e r t i e s i s n e e d e d . T h i s s k e t c h i s g i v e n i n s e c t i o n 1 . I n o r d e r t o b e a b l e t o g i v e some e x a m p l e s , s e c t i o n 1 a l s o c o n t a i n s a s u p e r f i c i a l d e s c r i p t i o n o f some p r o p e r t i e s o f t h e i n v e r s e o f d i s c o u r s e o f o u r s y s t o m . The r e a d e r s h o u l d h o w e v e r r e a l i z e , t h a t t h i s p a p e r i s n o t and i s n o t i n t e n d e d t o be a d e s c r i p t i o n o f t h e s y s t e m . Some o f t h e c h a r u r t e r i s t i e s t h a t seem t o b e common f o r a l l A . I . s y s t e m s , l i k e t h e e x i s t e n c e o f l o c a l a m b i g u i t i e s , a r e a l s o p r e s e n t i n o u r s y s t e m . I n c o n t r a s t t o w h a t seems t o b e c u s t o m a r y f o r t h e c o n s t r u c t i o n o f A . I . p r o g r a m s , n o l i s t p r o c e s s i n g l a n g u a g e ( L I S p o r one o f i t s i m p r o v e m e n t s PLANNER o r CONNIVER) was u s e d . I n t h e o p i n i o n o f t h e a u t h o r , m a t t e r s s u c h a s a u t o m a t i c b a c k t r a c k i n g and g a r b a g e c o l l e c t i o n s h o u l d k e p t i n s i g h t . They may b e p r e s e n t o n l y i m p l i c i t l y a s o p p o s e d t o b e i n g p r o g r a m m e d e x p l i c i t l y b u t t h e p r o g r a m m e r s h o u l d c o n t r o l t h e m , a s h e c o n t r o l s some s t o r a g e a l l o c a t i o n m e c h a n i s m v i a i n v o c a t i o n and t e r m i n a t i o n o f p r o c e d u r e s , t h e r e b y i m p l i c i t l y c o n t r o l l i n g t h e s i z e o f t h e s t a c k . One o f t h e t a r g e t s o f t h e p r o j e c t was t o r e a c h a c l e a n , w e l l s t r u e t u r e d p r o g r a m b y means o f a \" t o p d o w n \" d e s i g n m e t h o d . 
T h i s p a p e r c l a i m s t o show how t h e c o n t r o l s t r u c t u r o was d e r i v e d ( i n a n i n f o r m a l w a y ) f r o m a s m a l l number o f b a s i c a s s u m p t i o n s . F o r t h e n o t a t i o n o f t h e a l g o r i t h m s i n s e c t i o n 6 , n o f o r m a l l y d e f i n e d p r o g r a m m i n g l a n g u a g e i s u s e d . M o s t o f t h e s y m b o l s a r e b o r r o w e d f r o m ALGOL68, t h e r e m a i n i n g o n e s a r e s u p p o s e d t o b e s e l f e x p l a n a t o r y . F o r d e s i g n and p u b l i c a t i o n p u r p o s e s , a l g o r i t h m i c d e s c r i p t i o n s t h a t a r e e a s y t o r e a d f o r human r e a d e r s a r e p r e f e r r e d t o a l g o r i t h m i c d e s c r i p t i o n s t h a t c a n b e p r o c e s s e d b y a c o m p i l e r . 1 • A r o u g h s k e t c h o f t h e p r o p e r t i e s o f t h e s y s t e m The s y s t e m ( c a l l e d PHLIQA f o r P H i L I p s Q u e s t i o n A n s w e r i n g ) i s a b l e t o a n s w e r q u e s t i o n s f o r m u l a t e d i n a n a t u r a l l a n g u a g e ( E n g l i s h ) , w h e r e t h e a n s w e r s t o t h e s e q u e s t i o n s a r e f a c t s , e i t h e r t o b e f o u n d d i r e c t l y i n a d a t a b a s e i n s i d e t h e s y s t e m o r t o b e d e r i v e d f r o m i n f o r m a t i o n s t o r e d i n t h e d a t a b a s e . As in mos t p r e s e n t , day A • 1 • s y s terns ? t h e c i a i m i s n o t t h a t PHLIQA c a n u n d e r s t a n d t h e f u l l n a t u r a l l a n g u a g e ; i t i s o n l y c a p a b l e o f p r o c e s s i n g t h e q u e s t i o n s r e l a t i n g t o t h e r e s t r i c t e d u n i v e r s e o f d i s c o u r s e , a s d i c t a t e d b y t h e c o n t e n t s o f t h e d a t a b a s e . H o w e v e r , t h e s y s t e m i s d e s i g n e d i n s u c h a w a y , t h a t a g r e a t p a r t o f t h e p r o g r a m i s i n d e p e n d e n t o f t h e a c t u a l u n i v e r s e o f d i s c o u r s e ; t h i s may become c l e a r a t t h e end o f t h i s s e c t i o n . The p r o c e s s t h a t a n s w e r s a q u e s t i o n , i s d i v i d e d i n t o t h r e e s u b p r o c e s s e s : t h e i n t e r p r e t a t i o n o f t h e q u e s t i o n ( r e s u l t i n g i n a n e v a l u a b l e e x p r e s s i o n ) , t h e e v a l u a t i o n o f t h a t e x p r e s s i o n ( r e s u l t i n g i n t h e v a l u e o f t h e a n s w e r ) and t h e f o r m u l a t i o n o f t h e a n s w e r . I n t h e c u r r e n t p a p e r , o n l y t h e i n t e r p r e t a t i o n p r o c e s s i s d i s c u s s e d . I n o r d e r t o g e t a t r a n s p a r e n t and w e l l s t r u c t u r e d p r o g r a m , t h e i n t e r p r e t a t i o n p r o c e s s i s t o o c o m p l i c a t e d t o b e p e r f o r m e d i n one g i a n t s t e p . B e t w e e n t h e n a t u r a l l a n g u a g e and t h e l e v e l o f t h e e v a l u a b l e e x p r e s s i o n s , a number o f i n t e r m e d i a t e l a n g u a g e l e v e l s h a s b e e n d e s i g n e d The c o n v e r s i o n f r o m a n e x p r e s s i o n a t a c e r t a i n l a n g u a g e l e v e l t o i t s e q u i v a l e n t a t t h e n e x t l o w e r l e v e l i s p e r f o r m e d b y p r o g r a m m o d u l e s c a l l e d c o n v e r t e r s . The g e n e r a l and common a s p e c t s o f t h e c o n t r o l s t r u c t u r e o f t h o s e c o n v e r t e r s f o r m t h e t o p i c o f t h i s p a p e r . 
The c o n c r e t e d e f i n i t i o n s o f t h e i n t e r m e d i a t e l a n g u a g e s a r e n o t r e l e v a n t t o t h i s d i s c u s s i o n , and a r e n o t g i v e n . A l l t h e s e l a n g u a g e s h a v e one a s p e c t i n common: e x c e p t f o r t h e u p p e r m o s t l e v e l ( t h e n a t u r a l l a n g u a g e ) a l l e x p r e s s i o n s i n t h e s e l a n g u a g e s a r e r e p r e s e n t e d a s t r e e s . A l l c o n v e r t e r s ( w i t h t h e t r i v i a l e x c e p t i o n o f t h e u p p e r m o s t o n e ) w i l l h a v e a t r e e a s i n p u t and w i l l p r o d u c e a" ], "authors": [ { "name": [ "P. Madama" ], "affiliation": [ { "laboratory": null, "institution": null, "location": null } ] } ], "arxiv_id": [ null ], "s2_corpus_id": [ "2670352" ], "intents": [ [] ], "isInfluential": [ false ] }
null
591
0.013536
null
null
null
null
null
null
null
null