HimToHer Initiative

Implicit bias in English and Swedish texts

In this service, we have implemented English and Swedish large language models based on BERT. The models are trained on large amounts of representative text, and here we use them to measure gender bias in written text.

This service, developed in 2021, uses machine learning to measure gender bias based on representative texts from society. The large machine-learning models used in this service are trained on such texts and thus serve as an approximation of what many people encounter in everyday life.

The models use a method similar to the one ChatGPT uses to generate responses: predicting text word by word. OpenAI has received billions of dollars to build astronomically large models, but much smaller models are also effective for specific tasks, such as guessing a single missing word in a sentence, which is how the models are used in this service.

With the help of the models, we estimate which word most likely belongs where the gender-identifying pronoun or subject should appear: the models evaluate the probability that each word from a set of gendered word pairs (he/she, his/hers, woman/man, etc.) fits in the sentence. The pair containing the most probable word is then shown in a pie chart, together with each word's probability and the ratio between the feminine and masculine word. For more information about this service, please watch the video below.
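The pair-comparison step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the service's actual code: the function name, the pair list, and the stub probabilities are hypothetical, and in the real service the scoring function would return a BERT masked-language model's probability for each candidate word at the masked position.

```python
# Minimal sketch of the gendered-pair comparison (hypothetical names;
# the real service scores words with a BERT masked-language model).

GENDERED_PAIRS = [("she", "he"), ("her", "his"), ("woman", "man")]

def gender_ratio(score, pairs=GENDERED_PAIRS):
    """Select the word pair containing the single most probable word,
    then report both members' probabilities and the feminine-to-masculine
    ratio. `score(word)` is assumed to return the model's probability
    for `word` at the masked (gendered) position in the sentence."""
    fem, masc = max(pairs, key=lambda p: max(score(p[0]), score(p[1])))
    return {fem: score(fem), masc: score(masc),
            "ratio": score(fem) / score(masc)}

# Stub probabilities standing in for a BERT prediction (made-up numbers):
stub = {"she": 0.12, "he": 0.36, "her": 0.01, "his": 0.02,
        "woman": 0.005, "man": 0.004}
result = gender_ratio(lambda w: stub.get(w, 0.0))
print(result)  # probabilities for the winning pair plus the she:he ratio
```

With these stub numbers, the "she"/"he" pair wins because "he" is the single most probable word, and the reported ratio is 0.12/0.36, i.e. one feminine prediction for every three masculine ones.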


© 2021-2024 HimToHer
Contact us: info@himtoher.com