Sound Processing to Sentence Processing
November 22, 2015
This is Part 2 of my series of articles chronicling the process of auditory rehabilitation therapy from the perspective of a linguist. Part 1, Learning Linguistics; Relearning to Hear, can be found here.
The past few weeks of therapy have provided a number of interesting experiences. I’m not sure I can tie them all into a cohesive narrative, so I’ll just focus on one area: sentence processing as distinct from sound processing.
Filling in the Blanks
I am fascinated by how the mind processes language: in particular, how we build up a sentence, going from a series of sound waves to meaningful words.
A common exercise in rehabilitation therapy is for the therapist to say a sentence with their mouth covered (so I can’t get anything from lipreading) and have me (try to) repeat the sentence. But an interesting thing happened the other week. Let’s say the sentence was The teacher talked to the students. What I heard was:
___ ___ ___ ___ ___ students.
But once I heard/understood students, I could immediately piece together the rest of what I had heard. If my therapist had stopped before students, I would have said that I hadn’t understood anything. But since I could piece the sentence together after I understood the final word, I clearly got something from the words I thought I had completely missed. My mind was storing those sound patterns as “something.” Were they candidate words, each with an assigned probability weight? And once I understood students, did the probability weights cross a critical threshold and form a meaningful sentence?
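The speculation above can be sketched as a toy model. To be clear, this is purely illustrative, not a claim about actual cognition; the candidate words, weights, boost factor, threshold, and relatedness table below are all invented for the sake of the example:

```python
# A toy sketch of the "backfilling" intuition: each heard sound pattern maps
# to candidate words with probability weights, and a late, clearly-understood
# word ("students") re-weights the earlier candidates. All numbers invented.

def backfill(candidates, context_word, boost, threshold=0.5):
    """For each slot, boost candidates related to context_word, renormalize,
    and return the best candidate only if its weight clears the threshold."""
    recovered = []
    for slot in candidates:
        weighted = {w: p * (boost if context_word in related.get(w, ()) else 1.0)
                    for w, p in slot.items()}
        total = sum(weighted.values())
        weighted = {w: p / total for w, p in weighted.items()}
        best = max(weighted, key=weighted.get)
        recovered.append(best if weighted[best] >= threshold else "___")
    return recovered

# Hypothetical relatedness table: which candidates fit with "students".
related = {"the": ("students",), "teacher": ("students",), "talked": ("students",)}

# Before hearing "students", every slot is ambiguous: no candidate dominates.
slots = [
    {"the": 0.4, "a": 0.35, "my": 0.25},
    {"teacher": 0.4, "preacher": 0.35, "feature": 0.25},
    {"talked": 0.4, "walked": 0.35, "chalked": 0.25},
]
```

With no boost (boost=1.0), no candidate crosses the threshold and every slot stays blank; with a boost from the understood final word, the related candidates cross it and the sentence falls into place.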
This experience is distinct from a very similar experience I’ve had, which reflects the well-known concept of “priming.” In that case, a certain stimulus restricts the domain of possibilities, and influences subsequent responses. For example, on the first repetition of a sentence, I got:
___ ___ ___ a pie
So I knew the domain of the sentence was food, baking, etc. This made it much easier to get the other words when the sentence was repeated. Similarly, I’ve experienced a form of syntactic priming, where I understood the logical or functional structure of the sentence initially, mostly from prosodic cues. In other words, I got:
[SOMEBODY] [DID AN ACTION] [TO ANOTHER SOMEBODY]
Upon repetition of the sentence, I could restrict the domain of each word to a noun, a verb and another noun, respectively. Not that it isn’t super cool that our minds can do this, but it’s not quite as mystifying as the first example.
In the first example, all of the processing occurred “on the fly.” I didn’t need a second repetition to understand the sentence; rather, I backfilled it using some sort of semantic representations of word-forms that I initially thought I had completely missed. Clearly, some form of information was transmitted through those sounds.
All of this seems to be bundled under the rubric of sentence processing. I am fascinated by this process and the amazing things our minds naturally do, which my mind is currently relearning.
Stay tuned for Part 3, on relearning allophones.
New Article on Keystrokes, Linguistics and Cognition Published in IJHCS
October 21, 2015
Our new article on the linguistic and cognitive underpinnings of keystrokes is up. Here’s a link to a medium post explaining the article.
The Language and Cognition of Keystrokes | medium.com
And here’s the actual article:
Utilizing linguistically enhanced keystroke dynamics to predict typist cognition and demographics
Learning Linguistics; Relearning to Hear
October 15, 2015
This is the first in a series of posts centered around learning theoretical linguistics, specifically phonology, and learning how to use my new hearing implants through post-implantation rehabilitation therapy.
I would consider the nexus of these two events to be unique, in that I am experiencing linguistics from both sides: on the one hand learning about theoretical sound systems in the classroom, while at the same time learning a concrete application of those theories in therapy. And as much as I’m enjoying the experience, I also wanted to document it. Mostly for me, but I’m sure that other people will find it interesting, as well.
Not many people have the opportunity (or burden) to have to learn how to hear again, from the ground up, making sense of stimuli that were passively received and processed for the first 30 years of their life.
To provide some clarification, sound is now an incredibly imprecise stimulus for me: I can tell that a sound is being produced, but cannot discriminate between that sound and a similar one. A good analogy is to imagine the PA announcements in the NYC subways in the 1980s/1990s. If you don’t know what I’m talking about, this video might help to clarify.
Essentially, all sounds seem as if they’re coming through the most lo-fi speaker system available, which is what happens when you have to rely on organs other than the incredibly sophisticated human ear to do the hearing.
I’ve already seen a few parallels between the two sides. A couple that stand out:
- The International Phonetic Alphabet (IPA) is used to represent and categorize the sounds of an oral language. In each of my therapy sessions, we tackle a new category of sounds, such as fricatives or plosives, and learn to discriminate between the primary individual sounds. I love that an abstract rubric from my textbook plays such an important role in re-learning how to hear.
- My therapist mentioned “minimal pairs” the day after we learned about them in class. I found minimal pairs so interesting because they are such an intuitive, but non-obvious part of categorizing sounds. It’s cool that these are so essential for hearing therapy, as well.
This post is just the kick off. I have very little idea what direction to take things from here. What questions do you have? What kinds of things should I document as I go through both processes? Feel free to leave questions in the comments section below.
NAACL ’15 Roundup
June 7, 2015
I just returned from NAACL 2015 in beautiful Denver, CO. This was my first “big” conference, so I didn’t know quite what to expect. Needless to say, I was blown away (for better or for worse).
First, a side note: I’d like to thank the NAACL and specifically the conference chair Rada Mihalcea for providing captions during the entirety of the conference. Although there were some technical hiccups, we all got through them. Moreover, Hal Daume and the rest of the NAACL board were extremely receptive to expanding accessibility going forward. I look forward to working with all of them.
Since this was my first “big” conference, this is also my first “big” conference writeup. Let’s see how it goes.
Keynote #1: Lillian Lee Big Data Pragmatics etc….
- This was a really fun and insightful talk to open the conference. There were a few themes within Lillian’s talk, but my two favorites were why movie quotes become popular and why we use hedging. Regarding the first topic, my favorite quote was: “When Arnold says, ‘I’ll be back’, everyone talked about it. When I say ‘I’ll be back’, you guys are like ‘Well, don’t rush!'”
- The other theme I really enjoyed was “hedging” and why we do it. I find this topic fascinating, since it’s all around us. For instance, in saying “I’d claim it’s 200 yards away” we add no new information with “I’d claim.” So why do we say it? I think this is also a hallmark of hipster-speak, e.g. “This is maybe the best bacon I’ve ever had.”
Ehsan Mohammady Ardehaly & Aron Culotta Inferring latent attributes of Twitter users with label regularization
- This paper uses a lightly-supervised method to infer attributes like age and political orientation. It therefore avoids the need for costly annotation. One way that they infer attributes is by determining which Twitter accounts are central to a certain class. Really interesting, and I need to read the paper in-depth to fully understand it.
One Minute Madness
- This was fun. Everyone who presented a poster had one minute to preview/explain their poster. Some “presentations” were funny and some really pushed the 60-second mark. Joel Tetreault did a nice job enforcing the time limit. Here’s a picture of the “lineup” of speakers.
Nathan Schneider & Noah Smith A Corpus and Model Integrating Multiword Expressions and Supersenses
- Nathan Schneider has been doing some really interesting semantic work, whether on FrameNet or MWEs. Here, the CMU folks did a ton of manual annotation of the “supersense” of words and MWEs. Not only do they manage to achieve some really impressive results on tagging of MWEs, but they have also provided a really valuable resource to the MWE community in the form of their STREUSLE 2.0 corpus of annotated MWEs/supersenses.
Keynote #2: Fei-Fei Li A Quest for Visual Intelligence in Computers
- This was a fascinating talk. The idea here is to combine image recognition with semantics/NLP. For a computer to really “identify” something, it has to understand its meaning; pixel values are not “meaning.” I wish I had taken better notes, but Fei-Fei’s lab was able to achieve some incredibly impressive results. Of course, even the best image recognition makes some (adorable) mistakes.
Manaal Faruqui et al. Retrofitting Word Vectors to Semantic Lexicons
- This was one of the papers that won a Best Paper Award, and for good reason. It addresses a fundamental conflict in computational linguistics, specifically within computational semantics: distributional meaning representation vs. lexical semantics. The authors combine distributional vector representations with information from lexicons such as WordNet and FrameNet, and achieve significantly higher accuracy in semantic evaluation tasks across multiple languages. Moreover, their methods are highly modular, and they have made their tools available online. This is something I look forward to tinkering around with.
Some posters that I really enjoyed
- Oracle and Human Baselines for Native Language Identification – Shervin Malmasi, Joel Tetreault and Mark Dras
- Lexicon-Free Conversational Speech Recognition with Neural Networks – Andrew L. Maas, Ziang Xie, Dan Jurafsky, Andrew Y. Ng
- Using Zero-Resource Spoken Term Discovery for Ranked Retrieval – Jerome White et al.
- Recognizing Textual Entailment using Dependency Analysis and Machine Learning – Nidhi Sharma, Richa Sharma and Kanad K. Biswas
- Deep learning and neural nets are still breaking new ground in NLP. If you’re in the NLP domain, it would behoove you to gain a solid understanding of them, because they can achieve some incredibly impressive results.
- Word embeddings: The running joke throughout the conference was that if you wanted your paper to be accepted, it had to include “word embeddings” in the title. Embeddings were everywhere (I think I saw somewhere that ~30% of the posters included this in their title). Even Chris Manning felt the need to comment on this in his talk/on Twitter:
Chris actually showing a tweet on his slides! #deeplearning #naacl2015 pic.twitter.com/GWI7rDiQVC
— StanfordCSLI (@StanfordCSLI) June 5, 2015
Takeaways for Future Conferences
- I should’ve read more of the papers beforehand. Then I would have been better prepared to ask good questions and get more out of the presentations.
- As Andrew warned me beforehand, “You will burn out.” And he was right. There’s no way to fully absorb every paper at every talk you attend. At some point, it becomes beneficial to just take a breather and do nothing. I did this Wednesday morning, and I’m really glad I did.
- Get to breakfast early. If you come downstairs 10 minutes before the first session, you’ll be scraping the (literal) bottom of the barrel on the buffet line.
Shameless self-citation: Here is the paper Andrew and I wrote for the conference.
On The Wikipedia #ArtAndFeminism Edit-a-Thon
March 7, 2015
For many of you reading this, you’re probably asking, “What was Adam Goodkind doing at an ‘arts and feminism’ event? Isn’t this the same Adam Goodkind who, in 11th grade, took on a formal debate on why women are inherently inferior to men?” To my perplexed readers: Yes, it is one and the same.
Today was the 2nd annual art+feminism edit-a-thon. The idea is to fix the inherent male bias in Wikipedia. Participants selected a female artist, and either bulked up her Wikipedia page or created one. (My contribution: https://en.wikipedia.org/wiki/Jae_Rhim_Lee)
My inspiration to participate was partially to understand more about the wikipedia editing and creation process, but also partially to advance the feminist cause. Here’s why:
As someone with a disability, I have been the victim of discrimination on more than one occasion. It sucks, a lot. And my personal belief about how to change minds is, you need to change the ground truths. I do not believe that events like corporate-sponsored “Diversity & You” seminars are effective in making real change.
Rather, you need to change the fundamental truths, from the ground up. This is a painstakingly slow process, like drops of water slowly eroding a rock into a canyon, but I firmly believe it is the only way to make a real impact.
Had a great time at the Wikipedia #artandfeminism Edit-a-Thon w/ @robincamille. Learned a ton, good times were had pic.twitter.com/8J7EoDuykT
— Adam Goodkind (@adamgreatkind) March 7, 2015
Wikipedia edit-a-thons are exactly what we need to make these changes real, and I applaud the Wikimedia Foundation for supporting them.
[As a footnote, I really wanted to work on the page for Evelyn Glennie, a deaf percussionist, but her page is already pretty substantial.]
Would An Idiot Do That?
October 28, 2014
The Office bestowed many bits of wisdom upon us. My favorite gem is from Dwight Schrute, when he recounts the best advice he was ever given: Don’t be an idiot. He then expands upon this nugget:
Before I do anything, I ask myself, “Would an idiot do that?” And if the answer is yes, I do not do that thing.
Yes, this is Dwight being Dwight. But there is wisdom to be gained from this notion, which I am [proud/embarrassed] to admit I think about frequently.
When working — be it programming, linguistics, writing, etc. — it’s easy enough to burn out. If a program isn’t working as expected, I might try changing variables, at random, in a desperate attempt to get it to work. Or, worse, I’ll sit there staring at my screen.
This is how an idiot works
Good ideas rarely happen when you’re doing the same thing over and over again, e.g. flipping variables or staring blankly. If I catch myself in this sort of loop, I will, a la Dwight, stop doing That Thing. The stopping of That Thing can mean doing anything that is not That Thing, from stretching/walking around, to getting a cup of coffee, to going grocery shopping. The main idea is, The Thing I am doing, or my current approach to The Thing, is not working. Only an idiot would keep trying the same approach to The Thing, and expect a different, or more successful, outcome.
And the funny thing is, this works! If I’m stuck on a programming problem that I’m sure is unsolvable, it’s uncanny how often the solution presents itself 2 minutes after I return from my break. I guess it pays to not be an idiot.
Programmers and Pens
August 17, 2014
To be an outstanding novelist, a writer need not be intimately familiar with ink-making or the printing press. Computer programming today, however, still demands exactly this kind of familiarity with the underlying machinery.
Although I only really use high-level languages like Java and Python on a day-to-day basis, I’m constantly employing the more fundamental knowledge I picked up when learning lower-level, more primitive tools, like C++ or the Unix environment. Although knowledge of the latter is not “necessary” for programming in the former languages, familiarity with memory allocation and pointers allows me to write more efficient, more robust programs. And more robust programs allow me to test more hypotheses, and do better research.
In the abbeys of Medieval monasteries, monks were intimately familiar with how to grind up berries for ink, and stretch out animal hides for parchment. This was necessary knowledge to be a “writer.” Even into the mid-20th century, writers had to know how a pen holds its ink, constantly refilling it, and preventing blotting.
In Daniel Lemire’s most recent blog post, he espouses the view that learning programming is not for everyone, since it’s hard. To be a good programmer requires a lot of work, and being comfortable with delving into technical minutiae.
I love Daniel’s blog, and have previously commented on his thoughts. I think his most recent post, though, shows exactly where we are in the progression of programming, and the dangers that lurk in the immediate future: Programmers are still Benedictine monks, cloistered in hilltop abbeys, showering gifts and knowledge unto the unlearned masses.
Perhaps an argument could be made that the Dark Ages were the result of just this type of schism in knowledge, with one class being scholarly and imbued in the transmission of ideas, while the peasant class was forced to do manual labor and till the fields. There is no reason to assume that a similar division of classes is not imminent, between programmers and non-programmers.
One might worry that if we train more computer programmers, we will hasten a future in which most jobs have been replaced by robots, and the majority of the population is on welfare, or doing mindless tasks. Concerns like these take a myopic view of the future, in which the occupations that currently exist will be the jobs that always exist. 100 years ago, we could not imagine half the jobs that exist today, like “Radiology Technician,” or “Software QA Engineer.” There is no reason to suppose that we have even fathomed 1% of the jobs that will exist 100 years in the future.
To avoid this, however, we need to radically shift our point of view, and concentrate on the accessibility of programming, rather than its exclusivity. There is no shortage of elitism in the programming world. But if we allow this elitism to run rampant, we could be looking at a future like this:
At least the singing and dancing will be good, though…
July 3, 2014
There are two big problems in American politics: gerrymandering and lobbying influence. Gerrymandering has caused us to elect more and more polarizing politicians, as seen in this great visualization from the Pew Center. And lobbying has created politicians who support bat-shit crazy policies that aren’t in the short- or long-term interests of anyone except the lobbying corporation.
So how do we cure these ills? Politicians have no incentive: Getting rid of gerrymandering and lobbying is like saying to a 6-year-old “I need you to voluntarily stop taking your weekly allowance from your parents, and also invite these new kids onto your jungle gym, who probably don’t like you.”
For a solution, we can look to incentive programs of start-ups and some large corporations. How do start-ups motivate employees and prevent them from jumping ship? Stock options that don’t vest for a certain period of time. And in large corporations (at least, in my experience), year-end bonuses, which often constitute the bulk of an employee’s earnings, and are tied to how the company performed that year.
So, what if a politician’s salary, similarly, was incentive-based? This might sound good, but we still have the issue of how you get a spoiled 6-year-old to change his/her mind, right? How do we get a congressperson to reject a comfy, guaranteed salary, and take on a risky idea? Well, we can make the risk really, really, really appealing.
Currently, a member of Congress, depending on the positions held, earns a salary of between $174,000 and about $225,000. Obviously, they also take home a lot more in the form of gifts, kickbacks, etc. But what if we made the top possible salary for a Congressperson something like $10,000,000? I don’t know the details of how much a politician earns from “outside income,” but I’m guessing it’s usually less than this. As a result, we’re not asking politicians to take on some sort of self-sacrificing penalty. We’re providing a real, and realistic, upside.
Of course, the tricky part is, how do we determine the bonus for a given year? We could tie it to GDP, but then this might only make corporations even more influential. We could also tie it to average household income, but again, this doesn’t necessarily reflect the success of the entire country, as a CEO earning 300x more than an employee can drag up the average.
Rather, I propose tying it to median household income. By taking the median rather than the mean, the outlying salaries, e.g. the CEOs’ salaries, get discarded from the calculation. The important number becomes what the household in the exact middle of the distribution earns. We then set some multiple, maybe something enormous, like 100x, and say a Congressperson will get paid 100x the median household income.
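The effect is easy to see with a few numbers. Here's a quick illustration (every income figure below is made up) of how one outlier drags the mean but leaves the median untouched:

```python
# Why tie the bonus to the median, not the mean: a single CEO-level outlier
# distorts the mean but barely moves the median. All incomes are invented.
from statistics import mean, median

# Nine ordinary household incomes plus one CEO-level outlier.
incomes = [42_000, 48_000, 51_000, 55_000, 58_000,
           61_000, 65_000, 70_000, 75_000, 15_000_000]

print(f"mean:   ${mean(incomes):,.0f}")    # $1,552,500 -- dragged up by the outlier
print(f"median: ${median(incomes):,.0f}")  # $59,500 -- the middle of the pack

# The proposed congressional salary: 100x the median household income.
salary = 100 * median(incomes)
print(f"salary: ${salary:,.0f}")           # $5,950,000
```

Under this (hypothetical) distribution, a Congressperson's pay rises only if the middle of the country rises, no matter how well the top does.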
In this scenario, politicians have a very real incentive to lift up the entire country, and further, this can be even more lucrative for the politician than current lobbying incentives.
The idea of start-up and corporate pay structure is, “If the company does well, the employee does well.” Let’s bring that idea into politics, as a version of the Gekko-esque philosophy, “Greed can be good.”
Living The Teddy Roosevelt Life
June 9, 2014
In one of the final episodes of How I Met Your Mother, we learn that Ted Mosby became obsessed with Teddy Roosevelt after reading a biography. The scene is obviously meant to demonstrate how ridiculous and obsessive Ted can be about history, art, etc.
However, I began reading Edmund Morris’ epic biography of Theodore Roosevelt last week, and have been completely absorbed since. TR’s life is incredible, and seems to be the perfect storm of a very self-aware, motivated individual, mixed with an upwardly-mobile, opportunity-rich life: a perfect confluence of luck and skill.
It’s easy to see how one could become obsessed. I think we’re constantly in search of role models, and TR provides so many shining examples. Further, Morris’ biography goes to great lengths to humanize Roosevelt, so that he [TR] doesn’t seem superhuman or unrealistic. In addition, Morris allows readers to see little bits of Roosevelt in themselves. It makes the reader say, “If Roosevelt could do this, so can I.”
This, so far, is my big takeaway: Roosevelt is so inspiring because of, not in spite of, the fact that he is so much like you and me. This is a great lesson in leadership, or at least biography-writing.
On The Importance of Books
May 29, 2014
My inspiration to write this post came from LeVar Burton’s incredible Kickstarter campaign to bring back “Reading Rainbow”. The fact that the campaign raised over $1 million in less than 24 hours is a powerful testament to how important the show was, and, more importantly, the power of books. Although reading is on the decline, and there are very real advantages to digital books, I believe that dead-tree books occupy an irreplaceable role in our lives.
Last week, in a fit of procrastination, I cataloged and alphabetized all of the books on my bookshelf. After 4 moves in 5 years, I’m down to a measly 39; in a previous life, I had a whole wall of books. If you’d like to peruse my virtual bookshelf, have fun.
As I was cataloging the books, though, the medium’s importance began to hit me. Each book brought back memories. For instance, Tom Robbins’ Fierce Invalids Return From Hot Climates was recommended to me at the perfect moment, as I was going through my college/adolescence-crisis-of-faith phase. Mr. Robbins’ canon was instrumental in supporting me through that journey. Bruce Chatwin’s The Songlines was given to me by a great college professor, and was the first book that made me think about linguistics, essentially shaping the trajectory of my life.
Physical books are coated in memories, in a way I doubt digital books will ever be. It isn’t just about which pages we dog-eared, and what marginalia we recorded, but rather, a much more visceral connection. My most poignant book memory came from John Steinbeck’s East of Eden. I read it when I was losing my hearing, when my world was being torn apart. The ending struck me deeply, and it was the first book that ever made me cry. A few months later, my grandfather died, and my grandmother let me have any books I wanted from his bookshelf. Among his long wall of books, I discovered a 1st Edition copy of East of Eden. It was a beautiful moment, when I felt the book connecting us through time.
None of this is to say that digitization is anything short of miraculous. My Kindle goes everywhere with me, and I would sooner give up my smartphone than give up my Kindle. Rather, books are a medium we can never wholly convert to bits and bytes, 1s and 0s. If we only focus on a book’s text, we’ll lose a critical link in our society: A book can sometimes be more than a book, and I never want to digitize that.