Biases in Language Models
Sonam
Biases in popular models like Flair, BERT, and byte-pair embeddings, and debiasing techniques:
--Why debiasing is important: The use of AI in sensitive areas such as hiring, criminal justice, and healthcare has stirred a debate about bias and fairness. AI is shaped by flawed data that carries societal biases, where Muslims are portrayed as more violent, women are considered less suited for a job, and Black-sounding names are associated with more negative sentiment.
--Word embeddings: contextual and non-contextual
--Types of non-contextual word embeddings
--Debiasing techniques
--Contextual embeddings
--BERT examples (a sketch of this kind of probe follows the outline)
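A minimal sketch of the kind of BERT bias probe the talk discusses, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (both assumptions, not named in the abstract): a masked language model fills in a profession for templates that differ only in the gendered subject, and systematic gaps in the predictions hint at learned stereotypes.

```python
# Sketch: probe a masked language model for gender-occupation associations.
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Templates differ only in the gendered subject; compare the top predictions.
for template in ["The man worked as a [MASK].",
                 "The woman worked as a [MASK]."]:
    print(template)
    for pred in fill_mask(template, top_k=5):
        print(f"  {pred['token_str']:>12}  score={pred['score']:.3f}")
```

Similar template-based probes can be run for religion or race terms, and the same scoring idea underlies several of the debiasing evaluations covered in the talk.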
Sonam
Affiliation: Saama Technologies
Sonam is currently working as an AI Researcher at Saama Technologies, mostly in the area of natural language processing and deep learning.
She has also been a tech speaker at PyData Global 2020 and PyCon India 2019 and 2020. Previously, she worked as a visiting faculty for robotics and computer vision, and she has experience in entrepreneurship and startups: she worked as a lead software engineer in an IIT Madras-incubated company building pipeline-inspection and robotics solutions, where she implemented corrosion detection using ML. She has also built and released an app for movie and series recommendation using ML.
These days she is mostly interested in AI fairness and responsible AI in language models like BERT and GPT.