Social Impact Tech: The importance of integrating diverse perspectives

Data Friendly Space
3 min read · Jan 11, 2024


It is tempting to think of AI as neutral, but we need to remember that it is made by humans.

AI language models draw their material from already published information, meaning that they also inherit human flaws such as bias based on age, gender or race. For example, as noted by the Harvard Business Review, AI will often translate the term ‘nurse’ using a female-gendered word and render ‘doctor’ as a male noun. This bias in AI not only replicates gender stereotypes but also has the potential to widen the gender gap.
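This kind of bias is straightforward to observe first-hand. Below is a minimal sketch, assuming the Hugging Face transformers library and its default English-to-German translation model (both our choices for illustration, not tools mentioned by our interviewee): because German marks grammatical gender, translating a gender-neutral English sentence forces the model to pick one, and biased models tend to pick along stereotypical lines.

```python
from transformers import pipeline

# Off-the-shelf English-to-German translation pipeline (defaults to
# a T5 checkpoint). German marks grammatical gender, so a neutral
# English sentence forces the model to commit to one.
translator = pipeline("translation_en_to_de")

for sentence in ["The nurse is reading.", "The doctor is reading."]:
    output = translator(sentence)[0]["translation_text"]
    print(f"{sentence} -> {output}")

# A biased model typically produces the feminine 'Krankenschwester'
# for 'nurse' and the masculine 'Arzt' for 'doctor', even though
# 'Krankenpfleger' and 'Ärztin' are equally valid translations.
```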

In the second contribution of our Social Impact Tech series, we interviewed Linda Ratz, NLP & Machine Learning Engineer at Data Friendly Space (DFS). With a Master’s degree in Artificial Intelligence from Johannes Kepler Universität Linz (AT), Linda’s academic prowess is matched by her practical innovation. Her passion for NLP and commitment to fair AI led her to conclude her master’s with a thesis titled “Measuring Gender Bias in Information Retrieval Models”.

We discussed with her how to overcome AI bias and the role workplaces play in building more equitable and inclusive systems.

You are an NLP & Machine Learning Engineer. Is there any advice you would give to women trying to enter or transition into this field?

My advice to anyone entering or transitioning into the field is to accept the fact that you can’t know everything, and that’s perfectly normal given the rapid evolution of ML/NLP. Self-doubt is a common challenge, but it’s important to recognize that it’s not all your fault; it’s often a systemic problem (especially in male-dominated workplaces). Remember that diversity of perspectives and experiences is very valuable. Focus on areas that really interest you, and don’t feel pressured to stay on top of everything, as that is an impossible task in such a dynamic field. Even the most confident colleagues don’t know everything.

Throughout your career, you have studied gender bias in AI. How can we ensure more equitable and inclusive systems?

In my academic work and research contributions, I’ve focused on addressing gender bias in Information Retrieval (IR). Search engines, which are built on IR technology, act as “gatekeepers” to online information and significantly shape our knowledge and beliefs about the world. I consider this area, among many others, vital to investigate for bias and fairness. My research has primarily revolved around identifying and quantifying gender bias in IR, aiming to contribute to more equitable and inclusive systems.
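To make “quantifying gender bias in IR” concrete, here is a toy sketch of the general idea: compare how often gendered terms appear in the top-ranked documents a system retrieves for a gender-neutral query. Everything in it (the word lists, the query, the documents) is invented for illustration, and real metrics in the IR fairness literature, including those in Linda’s thesis, are considerably more sophisticated than this.

```python
# Toy term-frequency bias score over the top-k results a retrieval
# system returns for a gender-neutral query. A simplified sketch in
# the spirit of term-based IR bias metrics, not the thesis method;
# word lists and documents are invented for illustration.

MALE_TERMS = {"he", "him", "his", "man", "men", "male"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "female"}

def gender_bias_score(ranked_docs: list[str], k: int = 10) -> float:
    """Return a score in [-1, 1]: positive leans male, negative female."""
    male = female = 0
    for doc in ranked_docs[:k]:
        tokens = [t.strip(".,;:!?") for t in doc.lower().split()]
        male += sum(t in MALE_TERMS for t in tokens)
        female += sum(t in FEMALE_TERMS for t in tokens)
    total = male + female
    return 0.0 if total == 0 else (male - female) / total

# Hypothetical top results for the neutral query "engineer biography":
docs = [
    "His career as an engineer began when he was a young man.",
    "She led the team and her designs won several awards.",
]
print(gender_bias_score(docs))  # 0.2 -> slightly male-leaning
```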

As you previously mentioned, NLP is a field characterized by rapid evolution. What tools and methods do you use to stay up to date with the latest technologies?

I try to stay updated in the evolving field of NLP by reading research papers and blogs relevant to my interests and ongoing projects. I also talk with colleagues to gain insights into their projects and perspectives, and I think it’s important not to hesitate to ask questions when necessary. Additionally, I find that hands-on experimentation with NLP frameworks and libraries helps a lot in gaining a deeper understanding of the models and tools in practice; luckily, there are many open-source projects and tutorials to help with that.

Ethics is crucial when developing AI systems, particularly in NLP. How can we ensure responsible development?

Ethical considerations in AI, and particularly in NLP, encompass privacy protection, mitigation of societal biases, transparency, and more. I strongly believe in the importance of diverse development teams and of involving a wide range of perspectives when designing such systems, to help identify and minimize such risks.

We talked about the fact that technology is a male-dominated field, where women often face gender bias and discrimination. How can we avoid such behaviors in workplaces?

While I haven’t personally encountered open discrimination, I’ve observed that the tech industry (like many others) tends to favor confidence and assertiveness — character traits that are often not as pronounced in women. I’ve found support in workplaces where colleagues and superiors have made a deliberate effort to ensure diverse working styles, personalities and perspectives are valued.

Stay tuned for the next Social Impact Tech story.
