Hi, I'm DeBoris Leonard.
I am a PhD researcher in computer science focused on equitable and linguistically informed natural language processing, with particular emphasis on speech and writing produced by multilingual speakers and learners of English. My work sits at the intersection of NLP, sociolinguistics, and AI ethics, examining how contemporary language technologies encode prescriptive bias and how those biases manifest in automated correction, evaluation, and assessment systems.
My research investigates semantic preservation, over-correction, and dialectal mismatch in AI-mediated language feedback, especially in educational and clinical contexts. I am particularly interested in how large language models and speech recognition systems handle non-standard English, code-switching, and learner varieties, and in how evaluation metrics can be redesigned to better reflect human judgment, pedagogical intent, and linguistic validity.
Broadly, my goal is to develop methods, metrics, and frameworks that move language technologies away from one-size-fits-all norms and toward systems that are fairer, more transparent, and better aligned with real human language use. I am motivated by the belief that ethical AI is not only about preventing harm, but about actively designing systems that respect linguistic diversity and support learning rather than overwrite it.