This year we will be continuing our Rising Stars Series, where we feature up-and-coming linguists ranging from impactful undergraduates to prolific PhD candidates. These rising stars have been nominated by their mentors for their exceptional interest in linguistics and eager participation in the global community of language researchers.
Selected nominees were asked to share their view of the field of linguistics: what topics they see emerging as important or especially interesting, what role they see the field filling in the coming decades, and how they plan to contribute. We hope you will enjoy the perspectives of these students, who represent the bright future of our field.
Today we happily present to you the perspective of Elizabeth Pankratz. She is currently an MA student at Humboldt University, Berlin. She has published a paper on digital lexicography for endangered languages in Canada, has work published in the journal Morphology, and is currently writing a thesis on the diachronic development of morphological productivity. That’s a lot of achievement! Her excellent track record even allowed her to work at Freie Universität Berlin and the Leibniz-Zentrum Allgemeine Sprachwissenschaft (ZAS) simultaneously as a student assistant. Furthermore, her work with the good people at ZAS led to another high-profile publication, in the Journal of Memory and Language. She has received the highest of praise from her mentors and probably has a long list of accolades about which we could continue writing, but that might take all day! Without further delay, here is her Rising Star piece…
I see the field of linguistics becoming increasingly relevant, largely because of its applicability in modern technology. Our society is constantly encountering more and more opportunities to converse with machines, and these machines have to be able to recognise what we’re saying and respond in kind. My current interests lie in how our research into language is applicable in tech, both in deep learning systems and in language revitalisation work, and I’ll talk about these two points here.
First, many linguists (myself among them) believe that cognitive language processing happens probabilistically, and most machine learning techniques are also based on probabilistic assumptions. But how comparable are the two sorts of processing? I think we will be asking ourselves this more and more as work on deep learning with language progresses. Can we create machines that actually have the same intuitions about language that we do? Should we? If we make machines that can generate language that, to us, sounds just like language produced by another human, can the way these machines conceptualise and use language tell us anything about the way that we do?
Making machines that use language in a way that reflects human intuition means that we need to understand human intuition in the first place, and this is where our work as linguists enters the bigger picture. Discovering and understanding systematic behaviour in phenomena that look arbitrary or unpredictable at first glance is naturally valuable for the science of linguistics as a whole, but I find it so exciting that there are also applications outside of our immediate field. Some of my research aims to discover this kind of underlying systematicity. For example, together with Roland Schäfer at the Freie Universität Berlin, I showed that conceptual plurality in a German compound word makes the appearance of a linking element with the same form as the plural suffix of the first noun more likely. For instance, Bild ‘picture’ in Bildersammlung ‘picture collection’ is conceptually plural – you can’t have a collection with only one picture – while Bild in Bildrahmen ‘picture frame’ is not, and the pluralic linking element -er- is more probable in the first type of compound than in the second. This finding indicates that German linking elements do contribute something to the semantics of compounds, which has been a point of disagreement among morphologists of German. This work has been published in Morphology as Schäfer & Pankratz (2018), a paper I’m incredibly proud of. We combined the automatic processing of large amounts of data with linguistic theory-building supporting a probabilistic approach, moving linguistic methodology forward. Another current project of mine investigates the conditions under which anaphoric reference to non-head constituents of compound words in English and German can succeed (as in the sentence “It’s deodorant season, wear it!”).
These are tricky and very specific phenomena, like much of what linguists deal with. However, machine models will only be able to generate, say, fully natural-sounding compounds in German or correctly resolve non-standard anaphoric reference if they can deal with these borderline cases. This is why our research into the fine details of language is incredibly important, not just for our field but for all fields that build on the study of language. The modern tech world doesn’t just need software developers and engineers, it also needs linguists.
I’ll just briefly touch on the second point, about tech in language revitalisation, since it was also recently discussed on this blog by Nils Hjortnaes. Developing an understanding of these tricky phenomena in large, well-researched languages opens methodological doors to pursuing them in smaller, lower-resource languages, where the importance of high-quality language resources for teaching and learning is even greater, especially if the language in question is endangered. Again, we can extend our gaze beyond the doors of our field and use our knowledge about language to fulfil social responsibilities, too.
I look forward to being part of this really exciting field for hopefully many years to come, and I’m grateful for this opportunity to share my thoughts here with you!
If you have not yet, please visit our Fund Drive page to learn more about us and why we need your help! The LINGUIST List relies on your generous donations to continue its support of linguists around the world.