Thursday, 26 April 2012
LEARNING PHONETIC SYMBOLS IN A FUN WAY
Have you ever wondered, when looking in a dictionary, what the funny little symbols that appear next to the words mean? They are phonetic symbols, a guide to pronunciation; however, you need to know the sound associated with each symbol in order to build up the sound of the word. To help you, I have developed this simple guide to the most common symbols.
PHONETICS AND TECHNOLOGY NOWADAYS
Recent advances in computer technology now make it feasible to commercialize products
that perform man-machine communication by voice in real time. This has spurred many
companies to invest in speech technology, creating many jobs during the last few years.
Academic research has also benefited from this growth because companies are
conducting joint projects with universities. Many of these projects are funded by the
European Commission.
Nowadays, there are software packages for personal computers that can perform limited
Automatic Speech Recognition (from here on abbreviated to ASR). After the system has
adapted to the user’s voice, it is able to recognize words separated by pauses with error
rates below 5%. Likewise, there are software-only Text-To-Speech (from here on
abbreviated to TTS) systems that can generate intelligible speech. Modern
microprocessors are powerful enough to perform both TTS and limited ASR in real
time, without the need for additional hardware. While acknowledging the many
accomplishments, we also have to accept the many limitations of current systems. While
intelligibility of the best TTS systems is high enough to be useful in certain applications,
speech quality is still low enough that the technology will not be ubiquitous until a major
breakthrough appears. The limitations of ASR systems are even greater: the word error
rate for continuous speech is still too high to be useful except for some special
applications. Even the best systems are too fragile in the presence of new words and
moderately noisy environments. The technology is still in its infancy and the challenges
are large indeed, but momentum is clearly growing and commercially viable spoken
language interfaces will emerge before the year 2000.
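The word error rate mentioned above is conventionally measured as the edit distance between the recognizer's output and a reference transcript, divided by the length of the reference. As a rough illustration (the function name and example sentences are my own, not from any study cited here), this can be sketched in Python:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: Levenshtein distance over words,
    divided by the number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] is the edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word reference gives a 25% error rate.
print(word_error_rate("the cat sat down", "the cat sat up"))  # 0.25
```

So a system with "error rates below 5%" gets fewer than one word in twenty wrong, on average, relative to what was actually said.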
A solution to the ultimate problem in speech technology, the development of a
conversational computer, is an extremely difficult task that has eluded researchers for the
last 30 years. While a great deal of progress has been achieved, it could easily be another
30 years until we have a machine that can pass the so-called Turing test (under this test, a
blindfolded human cannot distinguish whether he or she is talking to another human or
to a computer). This means that while both industry and academia are creating many job
opportunities today, they will likely create many more in the years to come. A market
research study conducted in 1992 (Meisel, 1992) forecast that world-wide revenues from
speech technology products in 1995 would approach $2.5 billion, reaching $26 billion in
the year 2000. A total of 137 organizations were listed in this study as suppliers of speech
technology products in 1992, 22 of those being European.
The existence of many different languages in Europe makes it difficult for a speech
product to achieve broad coverage easily. Unlike other computer products such as word
processors, spreadsheets and databases, which are relatively easy to translate from one
language to another, localization of speech technology products is a very labor-intensive
process. This barrier will inevitably slow down the introduction of speech products in
some countries with smaller markets. Nevertheless, it also implies that a number of
specific jobs will be created to generate a version of the product for each language.
At the same time, it is important to note that advances in speech technology are reducing the
dissimilarities of speech systems in different languages by defining more general
frameworks under which to share more components. The possibility of contributing to
change the way we communicate with machines is a very exciting proposition. Building a
system like HAL (the human-like robot in “2001: A Space Odyssey”) promises to be a
very challenging task, and the road to these systems will be filled with excitement.
As you probably know, phonetic symbols are a great help when it comes to learning to pronounce English words correctly. Any time you open a dictionary, you can find the correct pronunciation of words you don't know by looking at the phonetic pronunciation that follows the word. Unfortunately, learning the phonetic alphabet is not always the easiest thing to do.
The use of phonetic symbols in foreign language teaching and learning is potentially very advantageous. Provided that the values of phonetic symbols are known and that the foreign language learner can produce and discriminate the sounds symbols stand for, these advantages include, among other things, increased awareness of L2 sound features, “visualisation” of such intangible entities as sounds, increased learner autonomy when checking pronunciation in dictionaries, etc. (see Mompean 2005 for a full account of the potential advantages of phonetic notation). Despite the convenience of phonetic notation in foreign language teaching and learning, any potential benefit depends crucially on how the notation is taught and learned. Good teaching practices may increase learners’ motivation to use phonetic symbols. In contrast, a negative learning experience may cause phonetic notation to be perceived as something unattractive and even irrelevant to learning the foreign language. It is therefore essential to analyse the issue of how best to take advantage of phonetic symbols in the foreign language classroom.
International Phonetic Alphabet (IPA)
Origin
The IPA was first published in 1888 by the Association Phonétique Internationale (International Phonetic Association), a group of French language teachers founded by Paul Passy. The aim of the organisation was to devise a system for transcribing the sounds of speech which was independent of any particular language and applicable to all languages. A phonetic script for English created in 1847 by Isaac Pitman and Henry Ellis was used as a model for the IPA.
Uses
- The IPA is used in dictionaries to indicate the pronunciation of words.
- The IPA has often been used as a basis for creating new writing systems for previously unwritten languages.
- The IPA is used in some foreign language text books and phrase books to transcribe the sounds of languages which are written with non-latin alphabets. It is also used by non-native speakers of English when learning to speak English.
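Because IPA symbols are ordinary Unicode characters, a dictionary-style pronunciation lookup is easy to model in code. Here is a minimal sketch in Python; the transcriptions are illustrative British English examples of my own choosing, not taken from any particular dictionary:

```python
# A toy pronunciation dictionary mapping words to IPA transcriptions.
# IPA characters such as ʃ, θ and ʒ are regular Unicode code points.
ipa_dict = {
    "cat": "/kæt/",
    "ship": "/ʃɪp/",
    "think": "/θɪŋk/",
    "measure": "/ˈmeʒə/",
}

def pronounce(word):
    """Return the IPA transcription for a word, or 'unknown'."""
    return ipa_dict.get(word.lower(), "unknown")

print(pronounce("ship"))  # /ʃɪp/
```

This is exactly what a learner does by hand when looking up the symbols that follow a headword in a dictionary, only automated.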
Where symbols appear in pairs, the one on the right represents a voiced consonant, while the one on the left is unvoiced. Shaded areas denote articulations judged to be impossible.
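The voiceless/voiced pairing described above can be captured as a simple table. The pairs below are standard IPA (e.g. /p/–/b/, /θ/–/ð/); the function and its name are just an illustrative sketch:

```python
# IPA consonant pairs differing only in voicing, following the chart
# convention: the key is the voiceless member, the value the voiced one.
VOICING_PAIRS = {
    "p": "b", "t": "d", "k": "g",
    "f": "v", "θ": "ð", "s": "z",
    "ʃ": "ʒ", "tʃ": "dʒ",
}

def voiced_counterpart(symbol):
    """Return the voiced counterpart of a voiceless consonant, if any."""
    return VOICING_PAIRS.get(symbol)

print(voiced_counterpart("θ"))  # ð
```

For example, "thin" begins with voiceless /θ/ and "this" with its voiced partner /ð/; the articulation is identical except for the vibration of the vocal folds.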
Phonetics (from the Greek: φωνή, phōnē, "sound, voice") is a branch of linguistics that comprises the study of the sounds of human speech, or in the case of sign languages the equivalent aspects of sign. It is concerned with the physical properties of speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs. The field of phonetics is a multi-layered subfield of linguistics that focuses on speech. In the case of oral languages there are three basic areas of study:
- Articulatory phonetics: the study of the production of speech sounds by the articulatory organs and vocal tract of the speaker
- Acoustic phonetics: the study of the physical transmission of speech sounds from the speaker to the listener
- Auditory phonetics: the study of the reception and perception of speech sounds by the listener