This is the first in a series of posts about learning theoretical linguistics, specifically phonology, and about learning how to use my new hearing implants through post-implantation rehabilitation therapy.
I consider the nexus of these two events unique, in that I am experiencing linguistics from both sides: learning about theoretical sound systems in the classroom while, at the same time, learning a concrete application of those theories in therapy. And as much as I’m enjoying it, I also want to document it. Mostly for myself, but I’m sure other people will find it interesting as well.
Not many people have the opportunity (or burden) of learning how to hear again from the ground up, making sense of stimuli that were passively received and processed for the first 30 years of their life.
To clarify: sound is now an incredibly imprecise stimulus for me. I can tell that a sound is being produced, but I cannot discriminate between it and a similar sound. A good analogy is the PA announcements in the NYC subways in the 1980s and 1990s. If you don’t know what I’m talking about, this video might help.
Essentially, every sound seems as if it’s coming through the most lo-fi speaker system available, which is more or less what happens when you have to rely on something other than our incredibly sophisticated ears to do the hearing.
I’ve already seen a few parallels between the two sides. A couple that stand out:
- The International Phonetic Alphabet (IPA) is used to represent and categorize the sounds of an oral language. In each of my therapy sessions, we tackle a new category of sounds, such as fricatives or plosives, and learn to discriminate between the primary individual sounds. I love that an abstract rubric from my textbook plays such an important role in re-learning how to hear.
- My therapist mentioned “minimal pairs” the day after we learned about them in class. A minimal pair is two words, like “bat” and “pat”, that differ by only a single sound, which shows that the contrast between those two sounds is meaningful in the language. I found minimal pairs so interesting because they are such an intuitive, but non-obvious, part of categorizing sounds, and it’s cool that they’re so essential for hearing therapy as well (there’s a small sketch of the idea after this list).
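For anyone curious what that looks like concretely, here is a rough sketch in Python of the idea, not anything from my class or my therapy sessions: a tiny made-up word list with IPA transcriptions, and a check for pairs of words whose transcriptions differ in exactly one sound. The word list and the one-character-per-sound simplification are my own illustrative assumptions.

```python
# A toy sketch of "minimal pairs": two words whose transcriptions differ in
# exactly one sound. The mini-lexicon below is made up for illustration.

from itertools import combinations

# Hypothetical mini-lexicon: orthography -> IPA transcription
lexicon = {
    "bat": "bæt",
    "pat": "pæt",
    "bad": "bæd",
    "ship": "ʃɪp",
    "chip": "tʃɪp",  # the affricate /tʃ/ is two characters here, so this naive
                     # sketch misses the real ship/chip minimal pair
}

def differs_by_one_sound(a: str, b: str) -> bool:
    """True if the two transcriptions are the same length and differ in exactly one position."""
    if len(a) != len(b):
        return False
    return sum(1 for x, y in zip(a, b) if x != y) == 1

# Print every pair of words whose transcriptions differ in exactly one segment
for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2):
    if differs_by_one_sound(t1, t2):
        print(f"minimal pair: {w1} /{t1}/ ~ {w2} /{t2}/")
```

Running this prints bat/pat and bat/bad, which is exactly the kind of contrast my therapist drills: two words that are identical except for one sound, so the only thing I can use to tell them apart is that single sound.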
This post is just the kickoff. I don’t yet know exactly what direction to take things from here. What questions do you have? What kinds of things should I document as I go through both processes? Feel free to leave questions in the comments section below.