Academia

Below you can find a selection of preprints, academic handouts, papers, slides and essays on Natural Language Processing, Machine Learning and Theoretical Linguistics.

On the Potential for Maximising Minimal Means in Transformer Language Models

Published in the Cambridge Occasional Papers in Linguistics

There is widespread interest in state-of-the-art Transformer-based Language Models (LMs), which underpin systems like Google Translate and ChatGPT. I argue that computational linguists can draw on the insights of neo-emergentist linguistic models to address extant issues associated with the syntactic and typological capabilities of these models. In the first part of the talk, I offer a synthesis of the inductive biases of Transformer-based LMs that are reminiscent of the properties emphasised in Biberauer’s (2011 et seq) ‘maximise minimal means’ (MMM) model. Subsequently, I provide a detailed case study which indicates that Transformer-based LMs are unable to perform the crucial NO > ALL > SOME learning dynamics associated with this model. In light of these empirical findings, I offer a theoretical argument in the second part of the talk about how MMM and Dynamical Systems Theory (Bosch 2022, 2023) can be viewed as a linguistically-motivated goal for language models, in the sense of Emerson (2020). I outline how the predictions of this neo-emergentist approach can be operationalised to improve the syntactic capabilities of Transformer-based LMs. While these are only preliminary results, I hope that this can stimulate an interdisciplinary discussion on how linguistic theory can help improve the syntactic and typological capabilities of Transformer-based LMs.

This is a talk presented at Syntax Lab on 14th February 2023, a weekly departmental seminar on syntactic theory organised by Dr Theresa Biberauer in the Section of Theoretical and Applied Linguistics (University of Cambridge).

Salhan, Liu & Collier (f.c./preprint) Multimodal Language Modelling across Languages and Cultures: Grounding Strategies for Concepts and Events

In Preparation. This research was supported, in part, by an award from Gonville & Caius College, University of Cambridge.

Recent advances in multimodal language modelling have seen state-of-the-art performance in downstream vision-language tasks achieved by models that employ contrastive semantic pre-training. While grounding linguistic embeddings is typically assumed to improve the quality of natural language representations, we undertake an intrinsic semantic evaluation of multimodal representations obtained in contrastive visual pretraining in CLIP (Radford et al., 2021) and its video-text equivalent Video-CLIP (Xu et al., 2021). The effects of image and video grounding on concrete and abstract nominal concepts and verbal events are compared to unimodal BERT (Devlin et al., 2019) and Mirror-BERT (Liu et al., 2021) baselines. The typological generalisability of our monolingual results is subsequently explored by evaluating the performance of Italian CLIP (Bianchi et al., 2021) and multilingual CLIP (Carlsson et al., 2022). Our findings are interpreted in the context of psycholinguistic and semantic research on verbal embodiment and suggest current grounding techniques provide a uniform advantage for processing nouns over verbs in image-text and video-text pre-training.

UROP Project Report 2021: Providing Automatic Feedback on Argumentation Quality to Learners of English

This research was supported by Cambridge University Press & Assessment.

In collaboration with the ALTA Institute (Computer Laboratory, University of Cambridge), I helped develop a learning tool that aims to help students learn the skill of argumentation by providing high-level, automatic adaptive feedback on the quality of their argumentation in essays. I was the only first-year undergraduate at the University of Cambridge accepted to participate in the 2021 UROP in the Natural Language and Information Processing Group. The tool consists of a pre-trained Large Language Model that automatically analyses argumentation structure in written English and a front-end user interface. I helped establish a preliminary pilot study, which was later conducted by Wambsganss, Caines & Buttery (2022). You can find their 2022 ACL Conference Paper that expands on the project here: https://aclanthology.org/2022.bea-1.18/.