
Friday, 28 August 2020

Critical Appreciation of the Phonology of the Meena (Mina) Language: Professor Ram Lakhan Meena, D.Litt. (Media), Ph.D. (Linguistics)

 


Meena (Mina) as a speech community

The Mina (Meena) language is spoken in the eastern part of the Indian state of Rajasthan by the indigenous Meena community. The Meena are considered adivasi (aboriginal people) and are listed among the Scheduled Tribes of India. They inhabit various parts of Rajasthan as well as other parts of the country, but they are mainly found in the districts of Sawai Madhopur, Karauli, Kota, Bundi, Jhalawar, Udaipur, Alwar, Dausa, Jaipur, and Tonk, and in parts of Bharatpur and Dholpur; there are also small populations in Sikar, Jalore, and Ajmer districts. See Appendix A for a map of the Mina (Meena) language survey region.

The Mina (Meena) language area is situated between the latitude–longitude points 24°12′–76°88′, 25°14′–76°36′, 26°03′–76°82′, and 27°01′–76°20′. In the past the region was arid and barren, but today it is well watered and fertile owing to modern techniques of irrigation and agricultural instrumentation. It is likely that a significant number of Mina (Meena) speakers have been subsumed under broader language categories in census counts; the actual population of Mina (Meena) speakers in Rajasthan is therefore probably much higher. The Mina (Meena) language (ISO 639-3: myi) was classified as Indo-European, Indo-Iranian, Indo-Aryan, Central zone, Mina, Unclassified (Ethnologue 2013), but it has unfortunately since been deleted from that list of world languages. The Devanagari script is used for writing the language.

The term 'Mina (Meena)' refers to both the people and their language. The Mina (Meena) are generally members of Scheduled Tribes. The researchers observed the people to be friendly, hospitable, and open to outsiders. The 1991 census listed 13,098,078 people as 'Meena' speakers. As per the 'Dialect Geography of Rajasthan' survey, Mina (Meena) speakers were found to be living mostly in rural communities.

Phonology of Meena (Mina) Language

Phonology deals with sound structure in individual languages: the way distinctions in sound are used to differentiate linguistic items, and the ways in which the sound structure of the ‘same’ element varies as a function of the other sounds in its context. Phonology and phonetics both involve sound in natural language, but differ in that phonetics deals with sounds from a language-independent point of view, while phonology studies the ways in which they are distributed and deployed within particular languages. Phonology originated with the insight that much observable phonetic detail is irrelevant or predictable within the system of a given language.

This led to the positing of phonemes as minimal contrastive sound units in language, each composed (according to many writers) of a collection of distinctive features of contrast. Later work showed that a focus on surface contrast ultimately was misguided, and generative phonology replaced this with a conception of phonology as an aspect of speakers' knowledge of linguistic structure. Important research problems have involved the relation between phonological and phonetic form; the mutual interaction of phonological regularities; the relation of phonological structure to other components of grammar; and the appropriateness of rules vs. constraints as formulations of phonological regularities.

1.1 Changing Concepts in the Neurolinguistic Analysis of Phonetics and Phonology

Phonology is the rule system of language that constrains how sounds may be organized into syllables and words. Phonetics is the science investigating how sounds are produced across the world's many languages. Phonological processing is at work when abstract sound units (phonemes) are selected and sequenced prior to planning the articulatory gestures for speech. An impairment of phoneme processing is considered a linguistic problem and is often a symptom of aphasia, an acquired language disorder that frequently follows stroke.

Difficulty planning the articulatory movements for speech is considered a non-linguistic (motoric, phonetic) problem and is the central symptom of adult apraxia of speech, an acquired motor speech disorder that also frequently follows stroke. There is evidence that there are no clear-cut linguistic and neurological divisions between these phonetic/phonemic systems, and this in turn forces aphasiologists to consider their interactions when describing phonetic and phonological breakdowns subsequent to brain damage in the dominant (usually the left) hemisphere. We begin this section with a brief outline of new psychological models that focus on connections between three levels of language: semantic/conceptual, words, and sounds.

1.2 The Effects of Connectionist Modeling of Phonetics and Phonology

Connectionist models of phonological processing usually operate with three levels of representation and computation: conceptual/semantic, lexical, and phonological (e.g., Dell et al., 1997). At times, a word-shape level is added to this architecture (Dell & Kim, 2005). Bi-directional connections among all three levels of processing allow these models to simulate semantic and phonological slips-of-the-tongue in normal adults and semantic and phonemic paraphasias in adults with aphasia (e.g., Dell et al., 1997; Martin, 2005). Unfortunately, connectionist models have been generally unconcerned with accounting for finer phonetic detail, and consequently, they have been of little use in simulating speech disruptions that are apraxic (clearly motoric) in nature.

Connectionist systems learn by associative principles. They are dynamic (stable but flexible) and they appear to learn simply by virtue of continued active processing. They are responsive to inputs and are constrained by the patterns of the inputs that are fed into their systems. In language, word structures and their phonemic connections comprise most of the formal architecture of the lexicon. The language-specific restrictions on permitted sound sequences (phonotactics and the sonority sequencing principle) limit how phonemic units combine during normal and impaired phonological processing for any one human language. For example, if a language's preferred structure places sounds with least sonority at syllable boundaries, then even nonsensical or neologistic words produced in aphasia (or by a lesioned connectionist machine) will tend to reflect that preference. Metaphorically, the language system that the model learns is said to know these constraints, and this is not very different from the claim that, in humans, the nervous system knows these constraints.
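To make the sonority-sequencing idea concrete, here is a minimal sketch of such a check in Python; the sonority scale values and the toy syllables are illustrative assumptions, not data from the Meena survey.

```python
# A minimal sketch of a sonority-sequencing check. The scale below is a
# simplified, assumed sonority hierarchy (higher = more sonorous).
SONORITY = {
    "p": 1, "t": 1, "k": 1, "b": 2, "d": 2, "g": 2,
    "s": 3, "z": 3, "m": 4, "n": 4, "l": 5, "r": 6,
    "i": 7, "u": 7, "e": 8, "o": 8, "a": 9,
}

def obeys_sonority(syllable: str) -> bool:
    """True if sonority rises to a single peak and then falls, i.e.,
    the least sonorous sounds sit at the syllable edges."""
    values = [SONORITY[seg] for seg in syllable]
    peak = values.index(max(values))
    rising = all(values[i] < values[i + 1] for i in range(peak))
    falling = all(values[i] > values[i + 1] for i in range(peak, len(values) - 1))
    return rising and falling

print(obeys_sonority("plan"))  # True: sonority rises p < l < a, then falls to n
print(obeys_sonority("lpan"))  # False: the l-p onset reverses the rise
```

Even a neologism scored by such a checker would be accepted only if it respects the rise-and-fall profile, which is the sense in which errorful output still reflects the language's constraints.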

A significant question in neurolinguistics concerns how language processing systems in humans and in connectionistic machines come to know their constraints. Researchers who take a nativist stance argue that high-level system constraints are fairly universal and are inherent in the original, internal construction of the system (e.g., they are present at birth). Conversely, those who support an associationist position argue that system constraints are derived from exposure to external environments, therefore varying from community to community. Associative, connectionistic phonological learning systems use mechanisms (i.e., mathematical algorithms) that discover the sound patterns of the word inputs with which they are presented.

The statistical probabilities of the input patterns and forms as a whole are learned, or abstracted, by the model and, in turn, those probabilities govern or constrain the normal and disordered errors that the system produces. In this sense, through its simulations of human behavior, the model's knowledge is claimed to be analogous to that of human language users. A new approach to phonology, optimality theory (Prince & Smolensky, 2004), integrates key concepts from both the nativist and associationist positions on language processing. Optimality theory is a complex linguistic model of constraint-based patterns and markedness hierarchies, and it is billed clearly as nativist.
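As a toy illustration of how a learner can abstract statistical probabilities from word inputs, the sketch below estimates segment-bigram probabilities from a small invented word list and uses them to score new forms. The mini-lexicon and the bigram model are simplifying assumptions, not a claim about any particular connectionist architecture.

```python
# Toy statistical phonotactic learning: segment-bigram probabilities are
# estimated from a word list, then used to score new forms.
from collections import Counter

lexicon = ["pata", "tapa", "mata", "pati", "tama", "mati"]  # invented data

bigrams = Counter()
contexts = Counter()
for word in lexicon:
    padded = "#" + word + "#"  # '#' marks word boundaries
    for a, b in zip(padded, padded[1:]):
        bigrams[(a, b)] += 1
        contexts[a] += 1

def score(form: str) -> float:
    """Product of conditional bigram probabilities P(b | a);
    returns 0.0 if the form contains an unattested transition."""
    padded = "#" + form + "#"
    p = 1.0
    for a, b in zip(padded, padded[1:]):
        if bigrams[(a, b)] == 0:
            return 0.0
        p *= bigrams[(a, b)] / contexts[a]
    return p

print(score("pata") > score("tpam"))  # True: attested patterns outscore unattested ones
```

The scores, not any explicit rule, are what "constrain" the forms the system will favor, which is the associationist point at issue.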

It should therefore have nothing to do with connectionism, although at the same time it is claimed to be "… deeply connected to connectionism" (Prince & Smolensky, 2004, pp. 234–235). Any evaluation of neurolinguistic theory must resolve this conflict. We must ask whether and to what extent the tightly constrained paraphasias, which ultimately play into the phonetics, need any explanation other than that they emerge from the language architecture itself. This is troublesome to many neurolinguists who believe that language acquisition requires more than input frequency patterns. This controversy is likely to reappear as more neurolinguistic research considers constraint-based linguistic modeling and explores how human and machine systems learn by linking input with output. We return to this issue later.

1.3 Can Phonetics and Phonology Be Precisely Dissociated?

It has been claimed that (physical) phonetics and (mental) phonology can be dissociated such that there must be some kind of interface straddling them. Historical linguistic change, for example, has often shown that physical motor-phonetic and sensory-perceptual processes underlie most abstract (mental) sound changes through time (Ohala, 2005), and, in fact, acoustic-phonetic processing has explained almost everything known about synchronic sound patterns in languages (Ohala, 2005). Generative linguists such as Noam Chomsky and Morris Halle claim that articulatory gesture shaping at the phonetic level of description may ultimately be phonologized into language knowledge, but for Ohala there is such a marked degree of interaction between the phonetic and phonological aspects of the sound system that the presumed interface between them collapses. This mixture can be observed in the brain activation patterns of normal individuals listening to language and in the sound error patterns produced by adults with aphasia and with acquired apraxia of speech.

Classic neurolinguistic theory posits that phonetic (motor speech) errors arise from left inferior frontal lobe disruption whereas phonological (linguistic) errors arise from phoneme selection and ordering disruptions secondary to temporal lobe damage. Twenty-first century research findings are refuting this dichotomy. Phonemic substitution errors have long been observed from frontal lobe damage and subtle phonetic asynchronies have been reported from temporal lobe damage as well (Buckingham & Christman, 2006). Gandour et al. (2000) have shown that when speakers of tone languages are presented with rapidly changing pitch patterns that match tonemes in their language, there is a significantly greater degree of metabolic activity in left opercular (motor speech) cortex than in temporal (phonological) cortex.

This opercular region does not show metabolic activity, however, when rapidly changing pitch patterns that are not contrastive in these same languages are introduced, nor is there any unilateral left opercular focal metabolism when, say, Thai tones are presented to Chinese speakers (who have tonemes, but not those of Thai). Speakers of non-tone languages predictably exhibit no opercular-area metabolic activity for any set of tonemes whatsoever. Thus, paradoxical patterns of brain activation in healthy adults seem to support the integration, rather than the segregation, of phonetic and phonemic processing across language cortex.

Evidence from speakers with acquired apraxia of speech supports the fuzziness of boundaries between levels of language processing previously assumed to be distinct. Acquired apraxia of speech is a disorder whose phonetic symptoms (especially sound substitutions and omissions) are frequently misinterpreted as phonological (McNeil et al., 2000), even though the errors arise from impairment of an intermediate stage of production, subsequent to the phonological selection and sequencing of phonemes but prior to actual articulatory execution. According to Code (2005), there are four principal features of the syndrome.

First, vocalic and consonantal lengthening is observed in syllables, words, and phrases (regardless of whether those vowels and consonants have been correctly selected relative to their phonemic targets). Second, the junctures between segments may have lengthened durations. Third, movement transitions and trajectories from one articulatory posture to another during speech may be flawed, thus creating spectral distortions of perceived phonemes. Finally, distorted sound substitutions may appear. These substitutions are described as sound selection errors and they are not caused by difficulties in the production of properly timed anticipatory co-articulation (a frequent problem in acquired apraxia of speech).

Most of the features of acquired apraxia of speech described by Code are clearly motoric in nature and represent phonetic impairments in speech timing and movement planning. The sound selection errors are problematic, however, in that their source is phonological. Given that patients with pure acquired apraxia of speech typically have brain lesions in and around pre-motor Broca's area (McNeil et al., 2000) or, more controversially, in the anterior gyrus of the left hemisphere Island of Reil (the insula) (Dronkers, 1996; Ogar et al., 2005, but also see Hillis et al., 2004 for discussion of the insula), the evidence for phonological disturbance is even more intriguing.

In patients with frontal cortical damage and pure acquired apraxia of speech, it appears that phonetics and phonology are indeed inextricably bound and interwoven. Relevant brain areas appear to have seriously fuzzy boundaries when it comes to language comprehension and production. In sum, it appears that the neat theoretical dissection of brain and language into non-overlapping zones and levels is a paradigm that will need replacement as neurolinguistic science advances. Recent theories of phonology recognize the syllable as a phonological constituent (see Blevins, 1995, for review). Phonemic substitution errors appear to be sensitive to syllable-internal hierarchical branching structure. Few substitutions occur when the consonant is part of a consonant cluster (see Blumstein, 1990, for review).

The majority of consonant substitution errors occur when the consonant is preceded or followed by a vowel. These findings are inexplicable in a theoretical framework that treats the syllable as an unanalyzable whole. Analyses of phonemic paraphasias across word boundaries show that the affected phonemes occur in like syllable positions (e.g., onset, nucleus, coda). Not all constituents of the syllable are equally prone to disruption. An analysis of neologistic jargon produced by two Wernicke’s aphasics revealed that the coda is more susceptible to impairment than the onset; the nucleus is the most stable of the syllable-internal constituents (Stark & Stark, 1990).
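The syllable-position analyses described above can be made concrete with a small sketch that parses simple syllables into onset, nucleus, and coda and reports where a target and an erroneous production differ. The CV-style parser and the example forms are illustrative assumptions, not the analysis procedure of Stark & Stark (1990).

```python
# A minimal sketch of syllable-position error analysis: a simple parser
# splits a CVC-style syllable into onset / nucleus / coda so that
# substitution errors can be classified by constituent.
VOWELS = set("aeiou")

def parse_syllable(syl: str) -> dict:
    """Split a simple (single-vowel-cluster) syllable into constituents."""
    first_v = next(i for i, ch in enumerate(syl) if ch in VOWELS)
    last_v = max(i for i, ch in enumerate(syl) if ch in VOWELS)
    return {"onset": syl[:first_v],
            "nucleus": syl[first_v:last_v + 1],
            "coda": syl[last_v + 1:]}

def error_position(target: str, produced: str) -> list:
    """Name the constituents in which target and production differ."""
    t, p = parse_syllable(target), parse_syllable(produced)
    return [part for part in ("onset", "nucleus", "coda") if t[part] != p[part]]

print(parse_syllable("pat"))         # {'onset': 'p', 'nucleus': 'a', 'coda': 't'}
print(error_position("pat", "pak"))  # ['coda'] -- the constituent reported most vulnerable
```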

Kaye, Lowenstamm, and Vergnaud’s (1985) theory of phonological government distinguishes two types of consonant clusters that have different syllabic representations and are characterized by different governing domains. Valdois (1990) found that aphasic errors support the claim that obstruent-liquid clusters and other cluster types have different syllabic representations and, moreover, suggest that segments in the governed position are more likely to be involved in the destruction or creation of clusters than segments in the governing position.

2.0 Introduction to Phonology

Phonology is typically defined as the study of the speech sounds of a language or languages and the laws governing them, particularly the laws governing the composition and combination of speech sounds. This definition reflects a segmental bias in the historical development of the field, so we can offer a more general definition: the study of the knowledge and representations of the sound system of human languages. From a neurobiological or cognitive neuroscience perspective, one can consider phonology the study of the mental model for human speech.

In this brief review, we restrict ourselves to spoken language, although analogous concerns hold for signed language (Brentari, 2011). Moreover, we limit the discussion to what we consider the most important aspects of phonology. These include: (i) the mappings between three systems of representation: action, perception, and long-term memory; (ii) the fundamental components of speech sounds (i.e., distinctive features); (iii) the laws of combinations of speech sounds, both adjacent and long-distance; and (iv) the chunking of speech sounds into larger units, especially syllables.

To begin, consider the word-form glark. Given this string of letters, native speakers of English will have an idea of how to pronounce it and what it would sound like if another person said it. They would have little idea, if any, of what it means. The meaning of a word is arbitrary given its form, and it could mean something else entirely. Consequently, we can have very specific knowledge about a word's form from a single presentation, and can recognize and repeat such word-forms without much effort, all without knowing their meaning. Phonology studies the regularities of form (i.e., rules without meaning) (Staal, 1990) and the laws of combination for speech sounds and their sub-parts.

Any account needs to address the fact that speech is produced by one anatomical system (the mouth) and perceived with another (the auditory system). Our ability to repeat new word-forms, such as glark, is evidence that people effortlessly map between these two systems. Moreover, new word-forms can be stored in both short-term and long-term memory. As a result, phonology must confront the conversion of representations (i.e., data structures) between three broad neural systems: memory, action, and perception (the MAP loop; Poeppel & Idsardi, 2011). Each system has further sub-systems that we ignore here. The basic proposal is that this is done through the use of phonological primitives (features), which are temporally organized (chunked, grouped, coordinated) on at least two fundamental time scales: the feature or segment and the syllable (Poeppel, 2003).
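A schematic sketch of this proposal follows: a single stored feature bundle is mapped onto both an articulatory and a perceptual description. The feature names, the example segment /p/, and the mapping functions are simplified placeholders of my own, not the actual representations posited by Poeppel and Idsardi.

```python
# A schematic sketch of the MAP idea: one feature bundle in long-term
# memory is converted into both an action plan and a perceptual expectation.
MEMORY = {  # assumed long-term representation of /p/ as distinctive features
    "/p/": {"voice": False, "place": "labial", "manner": "stop"},
}

def to_action(features: dict) -> str:
    """Map features onto a crudely named articulatory gesture."""
    gesture = "lip closure" if features["place"] == "labial" else "tongue gesture"
    larynx = "open glottis" if not features["voice"] else "vibrating glottis"
    return f"{gesture} + {larynx}"

def to_perception(features: dict) -> str:
    """Map the same features onto expected acoustic cues."""
    return "silence + release burst, long VOT" if not features["voice"] else "short VOT"

feats = MEMORY["/p/"]
print(to_action(feats))      # lip closure + open glottis
print(to_perception(feats))  # silence + release burst, long VOT
```

The design point is that the conversion between the three systems goes through one shared vocabulary of features, rather than through direct acoustic-to-motor translation.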

2.1 Phonology

Phonology includes the significant sounds of the language and the rules for their combination. The words of a language are divisible into sound sequences, and part of language knowledge is an understanding of the particular sounds used in a language and the rules for how they can be combined and ordered. There are many speech sounds available to the world's languages. Any single language employs a subset of these sounds, typically about 23 consonants and 9 vowels. There is, however, substantial diversity across the world's languages, with the number of consonants in a given language ranging between 6 and 95, and the number of vowels ranging between 3 and 46. The Meena language has 50 sounds, comprising 35 consonants, 13 vowels, and 2 semivowels. The language has also developed three lexical tones: low, mid, and high.
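Expressed as a quick sanity check, the reported inventory can be tallied as follows; the counts come directly from the figures above.

```python
# The reported Meena sound inventory, expressed as simple counts,
# with a check that the subtotals match the stated total of 50 sounds.
inventory = {"consonants": 35, "vowels": 13, "semivowels": 2}
tones = ["low", "mid", "high"]

assert sum(inventory.values()) == 50  # 35 + 13 + 2
print(f"{sum(inventory.values())} sounds, {len(tones)} lexical tones")
```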

Phonology: Vowels of the Mina/Meena Language

The distinctive sounds used by a language are its phonemes. Phonemes are contrastive; changing from one to another within a word produces either a change in meaning or a nonword. For example, the /p/ in the English word pit serves to contrast pit with other words, such as bit, sit, and kit, which are similar in all respects except that they begin with other phonemes. Psycholinguistic research has shown why speakers often have difficulty learning a second language that has a different phonemic inventory. For example, the initial sound of the English word this is rare in the world's languages and poses particular difficulties for those learning English as a second language, whereas English speakers have difficulty learning sounds not found in English.
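A minimal-pair finder makes the notion of contrast concrete: two words of equal length that differ in exactly one segment establish those segments as distinct phonemes. The word list below uses the English examples from the text; the code is an illustrative sketch that treats letters as stand-ins for segments.

```python
# Find minimal pairs in a toy word list: words differing in exactly one
# segment (e.g., pit/bit) show that /p/ and /b/ are contrastive phonemes.
words = ["pit", "bit", "sit", "kit"]

def is_minimal_pair(w1: str, w2: str) -> bool:
    """Same length, differing in exactly one position."""
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

pairs = [(a, b) for i, a in enumerate(words)
         for b in words[i + 1:] if is_minimal_pair(a, b)]
print(pairs)  # every pair here differs only in its initial consonant
```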

Phonology: Consonants of the Mina/Meena Language

Sound level

1) The pronunciation of vowels in the Mina dialect is unstable: for example, (aː) is pronounced as (a) or (e), and (a) is pronounced as (ə).

2) For most of the vowels, this process of change is observed at most of the places surveyed.

3) Among the consonants, 'स' (s) is pronounced as 'ह' (h) in many places.

4) In some places, the consonant 'छ' (chh) is generally pronounced as 'स' (s).

5) At most of the survey sites, the dental 's' (स) is found in place of the palatal 'sh' (श).

6) This tendency is widespread in Rajasthan. The survey found that the sound in question is realized closer to (b) before long vowels such as ā, ū, and o, and closer to (v) elsewhere; since such consonants are contextually variable sounds, their pronunciation is affected by the following sound.

7) One sound is produced by pressing the upper teeth against the lower lip; it is a labiodental sound. The other, (w), is not produced by pressing the lips together but by letting the breath pass out between the two lips (a bilabial sound).

8) In addition, there is an abundance of retroflex sounds. Both 'l' (ल) and the retroflex 'ḷ' (ळ) are quite common here. During the survey, most of the compiled material was found to follow the Marwari pattern of 'ḷ' (ळ) and 'ṇ' (ण).

9) In words containing both 'ḍ' (ड) and 'ṛ' (ड़), the sound 'ḍ' is found with greater frequency and prominence.

Research studies using innovative techniques, such as nonnutritive sucking and conditioned head turn responses to stimuli, have shown that human infants can initially discriminate among the sounds of all languages, but around their first birthdays, as they acquire their native language, they begin to lose the ability to make fine distinctions among the phonemes of other languages. The importance of knowing the rules for the combination of sounds within one’s language (phonotactics) becomes apparent in ordinary conversation, because the identity of individual speech sounds is often unclear. Experimental research shows that listeners are often confronted with an auditory puzzle:

They hear some sounds, but not all, and must 'fill in the blanks.' The effortless and accurate solution of such puzzles is mediated by adult listeners' knowledge of their phonemic inventory, phonotactic constraints, and lexicon, and also by inferences that can be drawn from the context. In addition, spoken language is characterized by a great deal of acoustic variability in the representation of phonemes, both within and across speakers. Research suggests that speech sound processing is characterized by categorical perception, in which acoustically variant tokens are forced into binary percepts. How infants use such skills in segmenting and decoding the ambient speech signal to induce the lexicon and grammar of their language is less well understood, prompting a large number of current studies and models that seek to explain how they 'bootstrap' and expand early linguistic discoveries.
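Categorical perception can be illustrated with a toy model in which a continuum of voice onset time (VOT) values is forced into a binary /b/-versus-/p/ percept at a sharp boundary. The 25 ms boundary below is an assumed, illustrative value, not a measured figure.

```python
# A toy illustration of categorical perception: continuous VOT values are
# mapped onto discrete phoneme categories at a sharp boundary.
BOUNDARY_MS = 25  # assumed category boundary, for illustration only

def categorize(vot_ms: float) -> str:
    """Map a continuous VOT value onto a binary /b/ vs /p/ percept."""
    return "/b/" if vot_ms < BOUNDARY_MS else "/p/"

for vot in (0, 10, 20, 30, 40, 60):
    print(f"VOT {vot:>2} ms -> {categorize(vot)}")
# Tokens at 0-20 ms all sound like /b/ and those at 30-60 ms all like /p/,
# despite equal acoustic steps -- the hallmark of categorical perception.
```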

The phonological system of a language also includes rules for the interpretation of prosody, or intonation and stress patterns. In Meena, prosodic cues can distinguish between grammatical contrasts, such as the difference between statements and questions. Prosody can also convey emphasis and emotion in language. The status of prosody as a separate system is upheld by the finding that some patients with right hemisphere brain damage lose the ability to distinguish between happy and sad productions of the same sentence.

Spoken word recognition is thus enabled by a number of perceptual biases that operate from the bottom up in analyzing the input signal, combined with the use of top-down knowledge of language-specific phonological and prosodic rules, vocabulary, and syntax, along with generalized use of context to derive an appropriate interpretation. Such rapid and complex integration of cues to meaning poses a continuing challenge to the development of computerized spoken word recognition systems that can successfully mimic human speech perception.

 
