243 Mp4 Apr 2026

Recommended Paper: "Gender Biases in LLM-Generated Reference Letters"

This 2023 paper by Wan et al. investigates how large language models (LLMs) may perpetuate social biases when writing recommendation letters. It is highly regarded for its systematic approach to examining both language style and lexical content.

In academic circles, "243" often refers to a paper's identifier in a specific conference track. Depending on your interest, you might also be looking for:

: "A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation" – A 2023 paper addressing gender bias specifically in machine translation.

: "Crossroads, Buildings and Neighborhoods: A Dataset for Fine-grained Location Recognition" – A 2022 paper introducing a new dataset for improved location identification in text.

: "Gender Biases in LLM-Generated Reference Letters" – The recommended paper above, which critically examines gender biases in reference letters generated by LLMs like GPT.