Featured Linguist: Michael Gamon

[Image: A job ad on LINGUIST List.]

Language? Or Science?

I grew up in Bad Soden, a small town on the outskirts of Frankfurt, Germany. My parents always encouraged any interest of mine, whether it was science (the chemistry lab in the basement, even the rockets and explosive experiments in the yard) or language and literature. My dad had a fairly extensive collection of world literature. He was in his 20s when WWII ended and could not get enough of the books and the modern art that became available after the barbarism of the Third Reich. His love of reading rubbed off on me: allegedly I could read fluently by the time I entered first grade, having taught myself by asking adults (sometimes total strangers) to spell out letters and labels aloud, starting with the signs in the elevator of our apartment building. Once I had outgrown children’s books, I was allowed to pick any book I wanted from my dad’s shelves, as long as I put it back after reading it – and I took full advantage of that. There was no notion of “age-appropriate” books in our house: if I could read it and enjoy it, it was considered appropriate.

From those beginnings, language, literature, and science never lost their appeal for me. In high school I focused on physics, math, and English, and when the time came to decide what to study, I narrowed the choice down to geophysics or German studies – and the choice was mine to make. My rationale at the time was: go for the big and risky dream first (study literature to become a writer), and if that does not work out, science and engineering are still another interesting option.

Language and Science!

I did not know about linguistics until I signed up for German studies at the Johann Wolfgang Goethe University in Frankfurt. Linguistics was one of the academic minors (“Nebenfächer”) offered in German studies – an interesting application of formal methods to the subject of language. All it took was an introductory generative syntax course (taught by the unforgettable Wolfgang Sternefeld) to get hooked; I went on to study under Helen Leuninger and Günther Grewendorf. Language and the mind/brain, the mathematics of language, and the distant prospect of computers analyzing language – this was incredibly exciting! A few years into the program, I applied for a Fulbright scholarship to study generative linguistics in the US. To my surprise, I made it through round after round of the selection process until I was placed in the University of Washington’s linguistics program. When I received the happy news, I tried to find the university on a map – poring unsuccessfully over a map of the DC area, home of the only “Washington” I recognized.

I arrived in Seattle in the autumn of 1990 and fell in love with the beauty of the city, the lakes, the sea, the mountains, and the campus. Resources at the school were a world apart from what I had known in Frankfurt. There, the university library still had card catalogues: to get your materials you had to fill out a request form, return after two days to stand in line, find out whether the book was available, and hope the librarian had processed the request form properly. At the UW, you would go to a library computer terminal, find the library code, and pick up what you needed from the open shelves within minutes. UW faculty were accessible for questions or discussions at all times; the student body was very international; the place was vibrant.

A Degree and a Job.

I finished my MA at the UW by adding one more academic quarter to the three-quarter scholarship. By then I knew I wanted to continue as a linguist, inspired by wonderful teachers (Karen Zagona, Heles Contreras, Fritz Newmeyer, Ellen Kaisse, Sharon Hargus) and fellow grad students. I returned to Germany, only to find that the dusty educational bureaucracy there made it near impossible to have my brand new MA recognized. Fortunately, I got two nearly simultaneous offers to join a PhD program — one from the UW, the other from Nijmegen. I decided to return to the UW, for the Pacific Northwest’s natural beauty and for the UW’s academic program.

I was about to finish my PhD in 1996 when a job ad on the LINGUIST List caught my eye: Microsoft Research (MSR) was looking for a German grammarian (the archives still have the posting: https://linguistlist.org/issues/7/7-33.html). At the time, the UW did not have a computational linguistics program, and while I had done a little bit of Prolog programming back in Germany, I could not possibly consider myself a computational linguist. But I figured that applying would help me practice resumé writing and cost me only a few hours and a stamp, so I sent off the application, with little hope of success. That application led to an internship in the Natural Language Processing group at MSR, and then to a job offer. In September 1996 I had both a PhD and a great job. And I could stay in the place I loved.

My early years at Microsoft Research were focused on writing a computational grammar for German in a grammar-authoring environment that was far ahead of its time. The grammar was written in a declarative language (called “G”, loosely based on LISP) and processed by a very efficient parsing engine. Authoring tools made it possible to test a grammar change over thousands of sentences within minutes and to highlight and aggregate each change in the analyses. At the time, other parsers would brood over moderately complex sentences for seconds, sometimes minutes, at a time. For someone passionate about understanding the structure of language and tinkering with grammatical details, this was the best playground one could imagine!

By the time the German computational grammar became part of Microsoft’s German grammar checker (every sentence that is grammar-checked in a German Word document is parsed into a full syntactic tree!), the field had moved in a new direction, away from grammar engineering and into the world of probabilities. It was time to discover the potential of machine learning. Some colleagues and I found interesting problems in natural language generation where we could combine knowledge engineering (no need to learn from data what we can code in a few hours) with machine-learned models for data-driven decisions. Soon, however, even the idea of a partially knowledge-engineered system fell out of favor, and the search was on for new research areas. For me, the “fringe” areas (some of which have become mainstream now) held the most fascination: sentiment detection, the notion of “style”, using machine learning to detect and correct non-native writing, and language in social media. More recently, I made another little leap into a new branch of Microsoft Research where we work more closely with product teams to bring language technology to market.

And Now?

So here I am, a few months shy of 20 years at MSR after having applied for that job on Linguist List in 1996. Along the way there have been some 60 papers, 30 patent applications, and many collaborations with wonderful colleagues, friends, and incredibly fun and talented research summer interns.

After high school, I had wanted to become a writer or a geophysicist. Instead, I became a linguist. I studied generative linguistics and landed a job as a computational linguist. I have never taken a computer science or programming class, but now work in a computer science research lab.

Along the way I have also become something of a contrarian, to the bemusement of enthusiastic up-and-coming researchers. So, by way of example, I feel I should conclude with at least a few potentially career-limiting remarks.

I believe that, over its long history, the term “Artificial Intelligence” has become intellectually useless – a term that has utility only as a grant-magnet or as a topic for the media circus and its insatiable appetite for the shiny and meaningless. There is “Apparent Intelligence,” which is a real and remarkable achievement: software so cleverly designed that a machine can appear intelligent within a well-defined and limited domain. But the notion that machines “understand” language in any meaningful sense of the word, for example, is preposterous at the current stage of our knowledge. Although the mantra “in five years, computers will be able to do xyz” has been repeated for at least 60 years now, it has not come any closer to the truth. And while deep learning is truly a qualitative breakthrough, all of those “brain” metaphors we see bandied about, well, they’re just metaphors, and pretty bad ones at that.

So, what’s next? Your guess is as good as mine!




Please support the LINGUIST List student editors and operations with a donation during the 2016 Fund Drive! The LINGUIST List needs your support!
