Comments on Talking Brains: "Phonemic segmentation in speech perception -- what's the evidence?" (blog author: Greg Hickok; comments appear newest first)

CEJ (2010-09-06 22:50):
My complaints about wiki are general. Too often when I try to track down the cited sources they lead to no pages. I would be darned if I would rely on one wiki article for knowledge about one language. OK, so the language has many words that are 'all obstruent'. Care to give us a count? The majority of words? A large minority? What?

The problem with the usual phonological syllable is that it is as static and segmented as other units, such as phonemes and features. Go to something dynamic and articulatory and you get something that is very hard to reference in discourse but comes closer to modelling controlled speech.

Bill Idsardi (2010-09-06 10:07):
Re: CEJ:

I'm not sure what the complaint about Wikipedia is here. The Wikipedia examples are drawn directly from Bagemihl and Nater. Bagemihl is available online (but behind a paywall); Nater is not available online. So the most expedient way to cite this seemed to be through the Wikipedia entry. I did give the original sources (which are given in full on the Wikipedia page).

Nuxalk has many words without vowels. For morphological reasons it's more difficult to find all-obstruent words (those not containing any vowels, liquids, or nasals). Very similar issues arise in Tashlhiyt Berber, an unrelated language (http://www.springer.com/education+%26+language/linguistics/book/978-1-4020-1076-7). The problem, however, is much more general, as pointed out in earlier comments. There are many sub-syllabic morphemes in languages throughout the world. It remains completely unclear how to handle such cases if only syllables are allowed, without any subsyllabic structure.

CEJ (2010-09-03 22:19):
>> Nuxalk allows words without any resonants at all, such as [sxs] 'seal fat' and [xɬpʼχʷɬtʰɬpʰɬːskʷʰt͡sʼ] 'he had had in his possession a bunchberry plant.' (examples from the Wikipedia page; see Nater 1984 and Bagemihl 1991 for details and discussion; the full cites are also on the Wikipedia page). <<

Well, first, I know it's clichéd to complain about Wiki for linguistics, but it does stink. Two, this seems not to be typical of human languages. But more importantly, how prevalent is this in the language itself? Just because we have syllables and words without vowels, that doesn't mean the majority of the lexicon is like this.

CEJ (2010-09-03 21:50):
The question is: does speech perception parse heard speech into sub-lexical units? Then the next question is: is that sub-lexical unit the phoneme, or some other unit? I opt for some sort of hierarchy of syllable types, integrated at the production end with controlled articulatory gestural routines.

Unknown (2010-06-25 01:03):
I proposed the syllable, V, VC, and CV (where V = vowel and C = consonant or consonant cluster) as the unit of speech perception in 1972, and have supported this hypothesis in a series of experiments since that time.

Massaro, D.W. (1972). Preperceptual images, processing time, and perceptual units in auditory perception. Psychological Review, 79(2), 124-145.
http://mambo.ucsc.edu/papers/1972.html

Unknown (2010-04-01 02:24):
My colleague Sven Mattys at U of Bristol has some evidence for the existence of phonemes (and syllables). Below are the reference and abstract of a paper that speaks to this issue, and a passage that refers to even more direct evidence for the role of phonemes in perception.

Jeff Bowers

Mattys, S.L., & Melhorn, J.F. (2005). How do syllables contribute to the perception of spoken English?
Evidence from the migration paradigm. Language and Speech, 48, 223-253.

The involvement of syllables in the perception of spoken English has traditionally been regarded as minimal because of ambiguous syllable boundaries and overriding rhythmic segmentation cues. The present experiments test the perceptual separability of syllables and vowels in spoken English using the migration paradigm. Experiments 1 and 2 show that syllables migrate considerably more than full and reduced vowels, and this effect is not influenced by the lexicality of the stimuli, their stress pattern, or the syllables' position relative to the edge of the stimuli. Experiment 3 confirms the predominance of syllable migration against a pseudosyllable baseline, and provides some evidence that syllable migration depends on whether syllable boundaries are clear or ambiguous. Consistent with this hypothesis, Experiment 4 demonstrates that CVC syllables migrate more in stimuli with a clear CVC-initial structure than in ambisyllabic stimuli. Together, the data suggest that syllables have a greater contribution to the perception of spoken English than previously assumed.

And a relevant passage that highlights some evidence for phonemes:

The migration paradigm is relevant to the quest for languages' units of perception because its response patterns originate from auditory illusions rather than from conscious decision processes, a recognized advance in the study of perception (Fodor & Pylyshyn, 1981; Marcel, 1983; Morais & Kolinsky, 1994; Treisman, 1979). Specifically, Morais (1985) contends that a task that bypasses access to conscious representations in the production of a response provides greater insight into perceptual mechanisms than one that does not. The migration paradigm suits this category quite well. For instance, illiterate Portuguese speakers have no conscious awareness of phonemes, as measured by phoneme detection, deletion, and addition tasks (e.g., Morais, Cary, Alegria, & Bertelson, 1979), and yet they experience phoneme migration to the same extent as literate speakers do (Morais & Kolinsky, 1994). Thus, migrations involve speech properties that do not need to be accessible to conscious experience to recombine into a new percept, which consigns the method to low processing levels compatible with the perception stage.

Greg Hickok (2010-03-28 09:57):
Remember, we are trying to explain an empirical result: that babies preferred lists of words starting with the same phoneme to unrelated lists of words. We don't need to solve all problems to explain the result. We only need to show that the same-onset lists were more acoustically similar than the different-onset lists.

Another issue to think about with respect to this finding is that we don't know what level of processing is driving the babies' preference. Suppose 9-month-olds are in the middle of learning a mapping between undecomposed syllables and the motor gestures that can reproduce them. Maybe it is the motor similarity that is driving the preference. Put differently, just because we present a stimulus perceptually doesn't mean that the response is directly output from perceptual computations.

Bill Idsardi (2010-03-27 08:29):
No, I'm not denying that there are common aspects to bilabial voiced stop bursts.
But the effect that Nina describes is phonemically specific (at least in that case). There are many ways to carve up the notion of "acoustic similarity," and very few of them will settle on an equivalent of phonemic similarity. Why, for example, does this effect not extend to all voiced stops [b, d, g], which are certainly acoustically similar (albeit under a different metric for similarity)? How do we pick out the right metric for similarity here from the many to choose from: the one that effectively defines phoneme identity?

Greg Hickok (2010-03-26 22:45):
Bill: I'm not an expert here, so just curious... are you denying that sounds generated by bilabial stops have any acoustic similarity?

Bill Idsardi (2010-03-26 17:42):
Re: "Maybe they just like to hear sounds that start similarly." How is similarity defined for this purpose, if not in specifically phonemic terms? The bursts and formant transitions are different for /b/ before /æ/ and /u/. What mechanism equates the sound pattern at the start of [bæ...] with that of [bu...] (and doesn't equate it with other patterns belonging to other phoneme sequences)?

Greg Hickok (2010-03-25 16:18):
Hi Nina,
Well, bow, boot, and bat do have onsets in common, but I don't see why that necessarily means the infants are analyzing the perceptual events into discrete phonemic units. Maybe they just like to hear sounds that start similarly.

Nina Kazanina (2010-03-25 16:12):
Hi Greg,
Jusczyk, Goodman, and Baumann (1999) found that 9-month-olds showed a preference for lists of words that shared an initial consonant (bow, boot, bat, ...) over unrelated lists of words, which was taken as sensitivity to phonemes (more precisely: to the internal structure of syllables). Could this be a useful piece of evidence for the role of phonemes in perception?

Nina

Greg Hickok (2010-03-18 07:50):
Hi Marc,
I'll have a look at those. Thanks. Care to provide a brief summary of what's in them?

Marc (2010-03-17 12:59):
Three references that show that phonology or phonological mappings must also reside in the ventral stream:

Marslen-Wilson, W.D., Nix, A., & Gaskell, G. (1995). Phonological variation in lexical access: Abstractness, inference, and English place assimilation. Language and Cognitive Processes, 10, 285-308.

Gow, D.W. (2001). Assimilation and anticipation in continuous spoken word recognition. Journal of Memory and Language, 45, 133-159.

Sumner, M., & Samuel, A.G. (2005). Perception and representation of regular variation: The case of final /t/. Journal of Memory and Language, 52(3), 322-338.

Greg Hickok (2010-03-17 11:02):
Cues (acoustic, motoric, contextual) -> gestural scores -> words.

I buy the multiple-cue part, but why the gestural score? And what is an abstract gestural score? Does a patient with pre-lingually acquired bilateral anterior operculum lesions and a resulting anarthria (can't speak) have abstract gestural scores? Does a chinchilla have them?

My view is that as soon as you abstract away from the actual motor system, whether you call them gestural scores or "intended gestures," you are in the auditory system. That is, the commonality (parity) between auditory and motor speech is not in the gesture but in the auditory system.
Put differently, the motor speech system is not aiming for an abstract gesture; it is aiming for a sound.

Regarding rules, how about this: phonological rules are a description of the sensory-to-motor mappings that allow us to transform an acoustic representation of speech into a motor representation of speech (H&P's "dorsal stream"). They do not describe the mapping between acoustic representations of speech and conceptual structures (H&P's "ventral stream").

I'm not familiar with the green beans experiments. I'll have to look those up.

Marc Ettlinger (2010-03-17 10:31):
One can uncontroversially use an articulatory-phonology theory of representation with only a facilitatory impact of the motor system, rather than an essential role (i.e., do without motor theory, which I agree is flawed). This also fits within the H&P model, but in lieu of cues -> features -> (phonemes) -> words, you have cues (acoustic, motoric, contextual) -> gestural scores (in the abstract sense) -> words.

Regarding phonological rules: rules must be acquired, and therefore recognized in our input. So, at some level, either perceptual or at a level used for generalization extraction, there must be a phonemic representation. I think the null hypothesis is the former, and even if the latter is true, this phonemic information is available to and must ultimately be used for perception, as in the [grim binz] -> "green beans" experiments or people's ability to go from "noncicity" to [nonsik]. (This is only true if you believe phonological generalizations are stated over phonemes.)

I do realize that these two points work at cross purposes.

Greg Hickok (2010-03-17 10:30):
So I haven't seen any knock-down evidence in this discussion that phoneme-size units are necessarily used in speech perception.

I think slips of the tongue constitute good evidence that phonemes are separately represented in the speech planning process (darn bore for barn door...). Maybe there is evidence of this sort in so-called slips of the ear. Does anybody know whether phoneme exchanges occur, for example, in slips of the ear?

Greg Hickok (2010-03-16 09:42):
The problem with any articulatory-based account of perception is that it fails empirically: it can't explain speech perception in people who have lost their ability to articulate speech, in people who have failed to acquire the ability to speak, or in animals that don't even have the potential to speak. Sooner or later, the field is going to have to come to terms with these facts.

The fact that phonological rules apply to phonemes (I wouldn't dispute this) is not an argument for the involvement of phonemes in speech perception. After all, the phenomena that these rules capture hold of speech production; it's therefore no surprise that theories stated over articulatory gestures seem to capture the relevant facts. It is a perfectly legit hypothesis that the same representations/processes apply in perception, but this hypothesis has been falsified by the sorts of data I mentioned above.

Marc Ettlinger (2010-03-15 22:21):
I think articulatory phonology (as contrasted with motor theory) does present an alternative to the syllable and the phoneme: the gesture. This allows for temporally overlapping representations (phonemes don't) with the flexibility you're speaking of.

Following up on Bill again, the resyllabification point also gets at what I was trying to communicate earlier: phonological rules tend to target (single) phonemes and not some other unit of representation. This is true of resyllabification (only single phonemes resyllabify), allomorphy (k -> s due to English's -ity suffix), allophony (aspiration of initial voiceless plosives in English), and so on. Across languages, processes like these generally target single phonemes. While the argument has been made that vowel harmony targets syllables, for example, or that allophonic processes target larger units because of coarticulation, I think it's hard to get around the need for the phoneme unit in describing these processes.

Unknown (2010-03-15 12:18):
I've been doing a lot of thinking on this issue as part of my dissertation, and as far as I can gather, the best evidence for prelexical segmental representations of some grain size (probably bundles of features) being used in the normal course of speech recognition comes from the perceptual learning literature (e.g., McQueen et al., 2006).

The basic argument is that the types of generalizations formed by listeners require some prelexical segmental units for those generalizations to operate on or retune. I don't think, however, that this work really differentiates between bundles of features, phonemes, syllables, or some combination as the locus of perceptual learning.

Reference:
McQueen, J.M., Cutler, A., & Norris, D. (2006). Phonological abstraction in the mental lexicon. Cognitive Science, 30(6), 1113-1126.

Greg Hickok (2010-03-15 08:51):
Hi Diogo,
Regarding words as bundles of features: are you assuming articulatory features? If so, I would ask whether this might only be true of the representation of words on the production side. Or do you have in mind articulator-free features? In which case we might ask exactly what this buys us.

Regarding the relation between syllable # in perception and /k+ae+t/ in "storage": what do you mean by "storage"? Why not store syllable # on the perception side and a bundle of articulatory features on the articulatory side? Lexical access happens by retrieving representations of the form "syllable #," which are linked to conceptual-semantic representations. The link between perception and production is then a mapping between "#" and /k+ae+t/.

I don't see why the different acoustic shapes that a word can take are such a problem in principle. Yes, we have to figure out how different acoustic patterns can be mapped onto the same higher-level representation -- same as the problem in vision -- but the same problem exists, perhaps more dramatically, at the phoneme level.

Greg Hickok (2010-03-15 08:15):
I think you guys have made a good point that the notion of the syllable is not going to do what we need it to do in all situations. So let me back up.

When the Haskins folks first started looking at the acoustic features that drive speech perception, they found that there wasn't an acoustic pattern that uniquely mapped onto phonemes. Hence the motor theory was born. But what if phonemes aren't the relevant unit? Maybe there is a better mapping between acoustic patterns and something larger. Massaro has suggested it is the syllable (Oden & Massaro, Psychological Review, 85(3), May 1978, 172-191), hence my suggestion. Maybe this is too restrictive, though. Maybe it is a more flexible mapping between acoustic features and *something*, which may be morphemes, syllables, or words -- whatever works in a given situation.

So [kaet] is a spectrotemporal pattern that is mapped onto one morpheme, whereas [kaets] is a pattern that is mapped onto two.

Diogo (2010-03-15 02:45):
Thanks for the clarification, Greg! I have another question, though. You say that the only difference between "perception by phones" and "perception by syllables" would be that the units of perception are bigger. So, is it safe to assume that while /p+a/ and unit "#" would not be hierarchically related (i.e., "#" has no internal structure), "#" is still abstract enough that different /pa/ tokens would be perceived as the same "#" unit?

The reason I am asking is that we do have an idea of what speech perception by segments would look like (even though it might turn out to be wrong), and this has to do with the end point of the process.

We have good reason to believe that words are internally represented as bundles of features (which here I will pretend are the same thing as segments, for the sake of the argument), and so if I want to retrieve the word /k+ae+t/ and its related meaning, it makes sense that I would have to reconstruct the string [k+ae+t] from the speech stream. To what degree [k+ae+t] would have to look like /k+ae+t/ is an open question, of course, and it might be the case that an "incomplete" sub-segmental representation might do the trick. The point, however, is that in this kind of model we are using the same code for perception and storage.

However, if I am doing everything by "syllables" (in quotes because these would not really be syllables, but units with no internal structure), then what would I be retrieving? On the perceptual side we have "syllable #," while on the storage side we have the entry /k+ae+t/. How do they relate to each other?

It seems that we would have to posit that "syllable #" would somehow be able to make contact with the meaning related to the entry /k+ae+t/ in the mental lexicon. The only mechanism I can imagine would be some sort of associative memory of acoustic events and lexical entries (i.e., some sort of episodic model). The problem here, of course, is the stuff that Bill was alluding to: words can change their shape according to the surrounding context (sometimes quite dramatically), so it is hard to see how acoustic similarity alone would do it.

Of course, none of this is actually "evidence for" the role of segments in perception; I am just trying to figure out what the alternative model would look like.

Bill Idsardi (2010-03-14 09:55):
Following up on my last comment, such examples can be extended to virtually all sub-syllabic morphology. For example, in the Russian phrase "к Ивану" (= English "to Ivan"), the preposition (in this case) is the single consonant [k], and the case marker is the vowel [u], so the three syllables are [kɯ] [va] [nu], but the syllables of "Иван" (= English "Ivan") are [i] [van]. So in recognition we have to have some way to relate [kɯ] [va] [nu] with the morpheme sequence of syllables (that's the hypothesis) [k]-[i][van]-[u].
Note that (1) [k] is not a pronounceable syllable in Russian (and so is an abstract syllable in this analysis; it's pronounced [ko] in pre-jer contexts, so we could instead recover [ko]-[i][van]-[u] somehow), and (2) we need to equate the phonemes [i] and [ɯ] in this example (due to a rule of Russian); in syllable terms we have to equate all words beginning with [i...] syllables (whatever that means without phonemes) with [Cɯ...] syllables, where C is a hard consonant and the [...] is held constant (whatever that means in syllable terms).

Perhaps simpler examples are all English words with sub-syllabic suffixes (-t, -d, -z, -θ, ...), so that we have to equate the syllable [dip] "deep" with that of [dɛpθ] "depth," or [kæt] "cat" with [kæts] "cats." Again, parsing these into morphemic "syllables" would require abstract C-only syllables, [dip][θ] and [kæt][s], and we're well on our way to reconstructing all of the individual phonemes as "abstract" syllables.

Bill Idsardi (2010-03-14 09:16):
If there's no internal structure within syllables at all, and no phoneme-sized units, this will cause some major problems in recognizing words and morphemes in resyllabification contexts; e.g., "act" (one syllable) doesn't have the same syllable as the first syllable of "ac.tor" (two syllables). Now, one can try to have a metric of "syllable similarity" (i.e., one for which "act" is similar to "ac"), but I think you will find that the multi-dimensional notion of similarity this necessitates will re-introduce a phoneme level of representation.
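The "act"/"ac.tor" point can be made concrete with a toy sketch (the transcriptions, data structures, and helper names below are illustrative assumptions, not anything proposed in the thread): matching by whole syllables fails to find "act" inside "actor" once resyllabification has moved the [t], while matching over flattened phoneme strings succeeds.

```python
# Toy syllabified forms: each word is a list of syllables,
# each syllable a tuple of phoneme symbols (illustrative transcriptions).
ACT = [("ae", "k", "t")]            # "act": one syllable
ACTOR = [("ae", "k"), ("t", "er")]  # "ac.tor": two syllables after resyllabification

def contains_word_by_syllable(word_sylls, carrier_sylls):
    """Match only if the word's syllables appear verbatim among the carrier's."""
    n = len(word_sylls)
    return any(carrier_sylls[i:i + n] == word_sylls
               for i in range(len(carrier_sylls) - n + 1))

def contains_word_by_phoneme(word_sylls, carrier_sylls):
    """Flatten both forms to phoneme strings, ignoring syllable boundaries."""
    flatten = lambda sylls: [p for s in sylls for p in s]
    word, carrier = flatten(word_sylls), flatten(carrier_sylls)
    n = len(word)
    return any(carrier[i:i + n] == word for i in range(len(carrier) - n + 1))

print(contains_word_by_syllable(ACT, ACTOR))  # False: (ae,k,t) matches neither (ae,k) nor (t,er)
print(contains_word_by_phoneme(ACT, ACTOR))   # True: ae-k-t is a substring of ae-k-t-er
```

Any "syllable similarity" metric that rescues the first function would have to compare the shared prefix [æk] and the stranded [t], which is just phoneme-level comparison by another name.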
Examples of resyllabification across word boundaries abound in French (and Korean); for example (from the Wikipedia article on French liaison), "premier étage" (= English "first floor") = /pʁə.mjɛ.ʁ‿e.taʒ/, where the liaison messes up two adjacent syllables: for recognition purposes, [mjɛ] has to be equated with [mjɛʁ] for "premier," and [ʁe] has to be equated with [e] for "étage." The second case is the killer, as the [e] in "étage" can receive many different consonants as a result of liaison, and so we would have to equate a large number of syllables with the [e] syllable ([le], [ze], ...).
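A rough sketch of why this case explodes under a syllable-only model (the consonant set and helper below are illustrative assumptions, not a full account of French liaison): every liaison consonant yields another surface syllable that must be separately listed as equivalent to underlying [e], whereas a phoneme-level representation handles all of them with a single onset-stripping step.

```python
# Illustrative toy set of consonants a preceding word might contribute;
# not a complete or accurate liaison inventory.
LIAISON_CONSONANTS = {"z", "t", "ʁ", "n", "p", "g"}

# Syllable-only model: each liaison variant of vowel-initial "étage" begins
# with a different surface syllable, and each must be listed as "equivalent
# to [e]" -- the table grows with (vowel-initial words x liaison consonants).
surface_syllables_for_e = {"e"} | {c + "e" for c in LIAISON_CONSONANTS}
print(sorted(surface_syllables_for_e))  # ['e', 'ge', 'ne', 'pe', 'te', 'ze', 'ʁe']

# Phoneme-level model: one rule strips the onset consonant supplied by the
# preceding word, after which /e.taʒ/ matches directly.
def strip_liaison_onset(syllable):
    return syllable[1:] if syllable[0] in LIAISON_CONSONANTS else syllable

assert all(strip_liaison_onset(s) == "e" for s in surface_syllables_for_e)
```

The equivalence-class table the syllable-only model needs is exactly the multiplication of units that the phoneme analysis avoids.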