---
layout: page
title: Program
permalink: /program
---
CMCL 2021 will feature both oral and poster presentations. The complete program is listed below.
Mexico City, Mexico

<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d481728.804584038!2d-99.42380635078402!3d19.390519038362424!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x85ce0026db097507%3A0x54061076265ee841!2sMexico%20City%2C%20CDMX%2C%20Mexico!5e0!3m2!1sen!2sit!4v1608593928004!5m2!1sen!2sit" width="600" height="450" frameborder="0" style="border:0;" allowfullscreen="" aria-hidden="false" tabindex="0"></iframe>

June 10, 2021, Mexico City (GMT-5)

---

9:00–9:15 Introduction

---

9:15–10:15 Keynote Talk 1: Grounded Language Learning, from Sounds and Images to Meaning - Afra Alishahi

---

10:15–10:30 Break

---

10:30–12:00 Oral Presentations 1

- Non-Complementarity of Information in Word-Embedding and Brain Representations in Distinguishing between Concrete and Abstract Words  
  Kalyan Ramakrishnan and Fatma Deniz
- Human Sentence Processing: Recurrence or Attention?  
  Danny Merkx and Stefan L. Frank
- Modeling Incremental Language Comprehension in the Brain with Combinatory Categorial Grammar  
  Miloš Stanojević, Shohini Bhattasali, Donald Dunagan, Luca Campanelli, Mark Steedman, Jonathan Brennan and John Hale

---

12:00–13:00 Lunch break

---

13:00–14:30 Oral Presentations 2

- A Multinomial Processing Tree Model of RC Attachment  
  Pavel Logacev and Noyan Dokudan
- That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models  
  Gabriele Sarti, Dominique Brunato and Felice Dell’Orletta
- Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention  
  Soo Hyun Ryu and Richard Lewis

---

14:30–14:45 Break

---

14:45–15:00 Shared Task Presentation

- CMCL 2021 Shared Task on Eye-Tracking Prediction  
  Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot and Enrico Santus

---

15:00–16:30 Poster Session

- LangResearchLab_NC at CMCL2021 Shared Task: Predicting Gaze Behaviour Using Linguistic Features and Tree Regressors  
  Raksha Agarwal and Niladri Chatterjee
- TorontoCL at CMCL 2021 Shared Task: RoBERTa with Multi-Stage Fine-Tuning for Eye-Tracking Prediction  
  Bai Li and Frank Rudzicz
- LAST at CMCL 2021 Shared Task: Predicting Gaze Data During Reading with a Gradient Boosting Decision Tree Approach  
  Yves Bestgen
- Team Ohio State at CMCL 2021 Shared Task: Fine-Tuned RoBERTa for Eye-Tracking Data Prediction  
  Byung-Doh Oh
- PIHKers at CMCL 2021 Shared Task: Cosine Similarity and Surprisal to Predict Human Reading Patterns  
  Lavinia Salicchi and Alessandro Lenci
- TALEP at CMCL 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data  
  Franck Dary, Alexis Nasr and Abdellah Fourtassi
- MTL782_IITD at CMCL 2021 Shared Task: Prediction of Eye-Tracking Features Using BERT Embeddings and Linguistic Features  
  Shivani Choudhary, Kushagri Tandon, Raksha Agarwal and Niladri Chatterjee
- KonTra at CMCL 2021 Shared Task: Predicting Eye Movements by Combining BERT with Surface, Linguistic and Behavioral Information  
  Qi Yu, Aikaterini-Lida Kalouli and Diego Frassinelli
- CogNLP-Sheffield at CMCL 2021 Shared Task: Blending Cognitively Inspired Features with Transformer-based Language Models for Predicting Eye Tracking Patterns  
  Peter Vickers, Rosa Wainwright, Harish Tayyar Madabushi and Aline Villavicencio
- Team ReadMe at CMCL 2021 Shared Task: Predicting Human Reading Patterns by Traditional Oculomotor Control Models and Machine Learning  
  Alisan Balkoca, Abdullah Algan, Cengiz Acarturk and Çağrı Çöltekin
- Enhancing Cognitive Models of Emotions with Representation Learning  
  Yuting Guo and Jinho D. Choi
- Production vs Perception: The Role of Individuality in Usage-Based Grammar Induction  
  Jonathan Dunn and Andrea Nini
- Clause Final Verb Prediction in Hindi: Evidence for Noisy Channel Model of Communication  
  Kartik Sharma, Niyati Bafna and Samar Husain
- Dependency Locality and Neural Surprisal as Predictors of Processing Difficulty: Evidence from Reading Times  
  Neil Rathi
- Modeling Sentence Comprehension Deficits in Aphasia: A Computational Evaluation of the Direct-access Model of Retrieval  
  Paula Lissón, Dorothea Pregla, Dario Paape, Frank Burchert, Nicole Stadie and Shravan Vasishth
- Sentence Complexity in Context  
  Benedetta Iavarone, Dominique Brunato and Felice Dell’Orletta
- Evaluating the Acquisition of Semantic Knowledge from Cross-situational Learning in Artificial Neural Networks  
  Mitja Nikolaus and Abdellah Fourtassi
- Representation and Pre-Activation of Lexical-Semantic Knowledge in Neural Language Models  
  Steven Derby, Barry Devereux and Paul Miller
- Relation Classification with Cognitive Attention Supervision  
  Erik McGuire and Noriko Tomuro
- Graph-theoretic Properties of the Class of Phonological Neighbourhood Networks  
  Rory Turnbull
- Contributions of Propositional Content and Syntactic Category Information in Sentence Processing  
  Byung-Doh Oh and William Schuler
- The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning  
  Yuchen Lian, Arianna Bisazza and Tessa Verhoef
- Capturing Phonotactic Learning Biases with a Simple RNN  
  Max Nelson, Brandon Prickett and Joe Pater

---

16:30–17:30 Keynote Talk 2: The Importance of Individualized Text Formats for Readability - Zoya Bylinskii

---

17:30–17:45 Closing Remarks