Features and parameters for different purposes

Peter Ladefoged
Linguistics Department, UCLA, Los Angeles, CA 90095-1543

Abstract

The phonetic description of a language must be related to the phonology. A computerized description of a language can have a very faithful phonetic component, but its phonetic structures are not appropriate for a phonological description. In current systems of linguistic analysis there are three aspects of phonology: (1) the representation of the lexical contrasts in a language; (2) the specification of the constraints on the sounds in lexical items; and (3) the description of phonological patterns of sounds as evident in the relations between the underlying lexical items and the observable phonetic output. There is a conflict between the phonetic component required for the first of these goals and that required for the other two. Characterizing the sounds of languages can be done most efficiently by using a large number of features, all defined in articulatory terms. This will result in having more features than are necessary to characterize phonological patterns efficiently. In addition, some phonological patterns depend on auditory characteristics, which will require auditorily defined features. Yet other patterns are observable in a language considered as a social institution rather than a mental concept.

There are many ways in which one can make a description of the sounds of a language, and linguists often forget about the most obvious one. Figure 1 is an example of part of a description of the sounds of English, namely their waveforms.

Figure 1. The waveforms of the English vowels /i/ and /u/.

This is a very complete and accurate description, albeit a rather lengthy one. It would take several pages to show the waveforms necessary to describe all the sounds of English. But providing a set of sound waves is certainly one way of formalizing the phonetic component of a language.

Most people also overlook the next most obvious form of description, an x-ray or MRI movie composed of frames as in Figure 2. This is not quite as good a description as a waveform, since one cannot reconstruct the complete sounds just from these MRI data. But it is not hard to envisage ways in which this description could be elaborated by the addition of physiological data so that it would form a reasonably complete description of the sounds of English.

Figure 2. Three frames from an MRI movie of the vowels of French.

These rather unconventional forms of description might well form the phonetic component of a certain kind of grammar, one describing how children acquire language. A set of waveforms is one of the principal sources that a child has when learning to talk. The other principal component available to a child is the second set of data mentioned above, knowledge of the physiological mechanisms involved. Furthermore, if we want to write a grammar that provides the basis for teaching people the language and how to pronounce it, these forms of description might well be part of it. Opinions differ on how best to teach a language, and obviously it depends on who is doing the teaching, who is doing the learning, and why they want to learn it. But many authorities would say that often the best way is to listen to lots of the language, hear carefully constructed phrases, and learn when and where to use them. The phonetic component of a grammar of this kind could well be a set of waveforms as illustrated above.
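To make the waveform-as-description idea concrete, here is a minimal Python sketch that reads a short recording of a vowel and plots its waveform, much as in Figure 1. The filename vowel_i.wav and the assumption of a 16-bit mono WAV file are placeholders for illustration, not materials from the paper.

```python
# Minimal sketch: treat a recorded waveform as (part of) a phonetic description.
# Assumes a 16-bit mono WAV file; "vowel_i.wav" is a hypothetical recording of /i/.
import wave

import matplotlib.pyplot as plt
import numpy as np

def plot_waveform(path):
    """Plot amplitude against time for a 16-bit mono WAV file."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    t = np.arange(len(samples)) / rate  # time axis in seconds
    plt.plot(t, samples)
    plt.xlabel("Time (s)")
    plt.ylabel("Amplitude")
    plt.title(path)
    plt.show()

plot_waveform("vowel_i.wav")  # hypothetical recording of the vowel /i/
```

A description of English along these lines would simply be a library of such recordings, one for every sound, which is why it is complete and accurate but also lengthy.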
A conventional grammar of a language must account for everything between the thoughts behind an utterance (the semantics) and the corresponding sounds (the phonetics). In this paper, when considering such a grammar, we will assume that enough is known about the semantic component for it to start generating a sentence from possible thoughts, and that the syntactic and phonological components are well formalized and can turn the output of the semantic component into a phonological output. Our interest is in the appropriate phonetic component.

At the moment the ways in which we can model a grammar on a computer involve only limited thoughts and sentences. Those working in semantics and syntax still have a lot of work to do before all the sentences of a language could be generated by a computer. But given a prompt, such as the question ‘How do I measure the lengths of the vowels in the word aware?’, a computer can be programmed to take this question, work out what it means, and generate an appropriate response such as ‘Please ask your TA to help you’. The phonetic component of a language description involves turning this answer into sounds, which it can do quite easily, as anyone can test by typing these or other phrases into the Rhetorical Systems demonstration at: http://www.rhetorical.com/cgi-bin/demo.cgi. When people hear computer-generated speech, they generally think that the phonetic component is fairly good. The intonation could often be improved, but the general speech quality is fine. Of course, the semantic component of a computer model of a grammar could be better programmed so that it was a little more helpful, but that is not our concern here.

Speech synthesis shows that we know how to make phonetic descriptions so that we can go from the output generated by the semantic and syntactic components of a language generator to reasonably high quality speech. This is a grammar in accordance with the definition given at the beginning of this paper: an account of a language that goes from the thoughts behind an utterance to its realization as sounds that we can hear.

What is the phonetic component in this grammar doing in most of the current computer speech synthesis systems? The answer is that it does not use phonemes or features or any of the usual linguistic units to generate sound waves. The phonetic component of a computer system may use an orthographic text, but it does not really require one. Chinese characters would do just as well, as long as there was a character for each possible English syllable. The way most speech
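As a concrete illustration of the synthesis step described above, the short Python sketch below hands a ready-made answer to an off-the-shelf text-to-speech engine. The pyttsx3 package is used only as a convenient stand-in; it is not meant to reflect how the Rhetorical Systems demonstration, or any particular synthesis system, is implemented.

```python
# Minimal sketch: the last step of the grammar, turning a generated answer into sound.
# pyttsx3 is an illustrative stand-in for whatever synthesis engine a real system uses.
import pyttsx3

answer = "Please ask your TA to help you"  # assumed output of the semantic component

engine = pyttsx3.init()  # use the platform's default text-to-speech voice
engine.say(answer)       # queue the utterance
engine.runAndWait()      # synthesize and play it aloud
```

At this level the sketch simply passes text to the engine; how the engine itself turns that text into sound waves, and whether it uses phonemes, features, or neither, is exactly the question raised in the paragraph above.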

