Charles Lovering

Computer Science PhD student, working on natural language understanding (NLU).

[Google Scholar]
[Github]
[Resume]
[Lab page]

Representative Work

Charles Lovering*, Jessica Forde*, Ellie Pavlick, Michael Littman. Where, When & Which Concepts Does AlphaZero Learn? AAAI Workshop on Reinforcement Learning in Games (RLG), 2022. (*Equal contribution.)
Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick. Predicting Inductive Biases of Pre-Trained Models. ICLR, 2021. [github] [video]
Rohan Jha, Charles Lovering, Ellie Pavlick. Does Data Augmentation Improve Generalization in NLP? 2020. PREPRINT.
Charles Lovering, Ellie Pavlick. Self-play for Data Efficient Language Acquisition. 2020. PREPRINT.

Talks

Where, When & Which Concepts Does AlphaZero Learn? @ Jane Street, Research Symposium. Winter, 2022. [Upcoming]
Predicting Inductive Biases of Pre-Trained Models. @ NLP & Fairness, Interpretability, and Robustness; Google. Fall, 2020.
Minimum Description Length Probing @ Language Understanding and Representations; Brown University. Summer, 2020.
Transformers @ Language Understanding and Representations; Brown University. Summer, 2019.

Artifacts

Lindenmayer systems.
Interactive visualizations.
Introduction to byte-encoding representations.
Introduction to beam search.
Introduction to the transformer architecture.
Introduction to Neural Turing Machines.

Acknowledgments

This site replicates the Distill design.

Diagrams were made with Adobe XD CC, visualizations with D3, and deep learning experiments with PyTorch.