Implicit bias in English and Swedish texts
In this service, we have implemented English and Swedish large language models based on BERT. The models are trained on large corpora of representative texts, and here we use them to measure bias in written text.
Click on the examples below to test them:
- () is the boss.
- () is a lawyer.
- () is an assistant.
- () is a prominent researcher.
- () is a good lecturer.
- () has been awarded an honorary title.
- Make no mistake that () is the best when the stakes are high.
- () stayed at home with the children when they were small.
- () is always working.
- () spends many hours before the mirror each day.
- () wants to work with people.
This service, developed in 2021, uses machine learning to measure gender bias based on representative texts from society. The large machine learning models used in this service are trained on such texts and thus approximate the language many people encounter in everyday life.
The models use a method similar to the one ChatGPT uses to generate responses: predicting text word by word. OpenAI has received billions of dollars to build astronomically large models, but smaller models are also effective for specific tasks, such as guessing a single missing word in a sentence, which is how the models in this service are used.
With the help of the models, we estimate which word most likely belongs in the position where the gender-identifying pronoun or subject would appear. The models evaluate the probability of each word from gendered word pairs (he/she, his/hers, woman/man, etc.) occurring in that slot. The ratio between the feminine and masculine word in the highest-probability pair is then displayed in a pie chart, together with the underlying probabilities. For more information about this service, please watch the video below.
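The masked-word approach described above can be sketched with the Hugging Face `transformers` library. This is a minimal illustration, not the service's actual implementation: the model name `bert-base-uncased` and the example sentence are assumptions, and the service's own English and Swedish models are not specified in this text.

```python
# Sketch of masked-word gender-pair scoring, assuming a public BERT model.
# The service's real models and word-pair lists may differ.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The "()" placeholder in the examples corresponds to BERT's [MASK] token.
sentence = "[MASK] is a lawyer."

# Restrict predictions to one gendered word pair and read off each
# word's probability of filling the masked position.
predictions = fill_mask(sentence, targets=["he", "she"])
probs = {p["token_str"]: p["score"] for p in predictions}

# The feminine share of the pair is what a pie chart would display.
feminine_share = probs["she"] / (probs["she"] + probs["he"])
print(f"P(he) = {probs['he']:.4f}, P(she) = {probs['she']:.4f}")
print(f"Feminine share of the he/she pair: {feminine_share:.1%}")
```

In practice one would score every pair (he/she, his/hers, woman/man, ...) the same way and report the pair whose words receive the highest total probability, as the description above suggests.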
© 2021-2024 HimToHer
Contact us: info@himtoher.com