AI and Ethical Design with Sonja Rattay

Mar 24, 2024 | Vodcast

Interviewer: Could you briefly introduce yourself to our readers?

Sonja: I am an interdisciplinary designer and researcher. Currently, I am pursuing a PhD in Human-Centered Computing at the University of Copenhagen. I am also a Co-founder of Block Zero AB, a design studio in Malmö, where I was Head of Design prior to my PhD and where I currently serve as a Board Member. I have led projects ranging from complex data visualizations to designing ecosystems such as smart homes and connected cities, as well as establishing communication strategies and design languages in the fields of e-health, AI, and biometrics.

Interviewer: What led you to a career in ethics in AI system design?

Sonja: To be honest, it was quite a convoluted path. I started with interaction design and then moved into service design. Through my work in service design, I gradually became more interested and invested in automated systems, which means designing not only individual gadgets but also networks and ecosystems of things. I therefore began to study AI and cognition on the side. After a couple of years of working in system design, I realized that I needed to learn more about ethical design and how to design automated systems in a responsible way. That is the underlying reason why I decided to go into academia and dig into research on the ethical implications of automated system design.

Interviewer: As a researcher pursuing a PhD in Human-Centered Computing, what do you think is necessary to advance the existing research on AI? 

Sonja: Today, AI mostly describes machine learning algorithms, which automatically recognize patterns in data. However, the amount of data that goes into these systems creates an unprecedented impact. I believe there needs to be more research and understanding of the broad and far-reaching consequences of AI systems. It is essential to advance research on how these data-driven systems entangle with our social, political, and economic lives.

Interviewer: Poor data quality can lead to biased algorithms and AI systems. What other factors could lead to biased AI systems?

Sonja: Absolutely! Poor data leads to harmfully biased AI systems. However, I would like to point out that all data-driven systems are going to be biased. Data is created by humans, and humans are inherently biased. Our whole infrastructure is built on valuing certain norms over others. Hence, the notion that we can build a truly objective, neutral, and unbiased system is merely a dream that we might never be able to achieve.

There is no doubt, however, that good data hygiene can improve our algorithms. It does not necessarily mean having remarkably diverse data, but data that is appropriate for the task. Good data hygiene is mainly achieved at the data collection, processing, and cleaning stages, because data is already processed information. It is therefore important to ensure that each of these phases of data analytics is of high quality; this contributes greatly to the quality of the resulting systems and the way they perform.
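As a rough illustration of what such data-hygiene checks can look like in practice, here is a minimal Python sketch (not from the interview); the dataset file and the column names such as gender and approved are purely hypothetical.

```python
# Minimal illustrative sketch of basic data-hygiene checks before training a model.
# The file name and column names below are hypothetical examples.
import pandas as pd

df = pd.read_csv("loan_applications.csv")  # hypothetical dataset

# Collection/processing stage: look for missing values and duplicate rows.
print(df.isna().mean().sort_values(ascending=False))  # share of missing values per column
print(f"duplicate rows: {df.duplicated().sum()}")

# Appropriateness for the task: inspect how outcomes are distributed
# across a sensitive attribute, e.g. a hypothetical 'gender' column.
if {"gender", "approved"}.issubset(df.columns):
    print(df.groupby("gender")["approved"].mean())  # approval rate per group

# Cleaning stage: drop exact duplicates and rows missing the label.
df = df.drop_duplicates().dropna(subset=["approved"])
```

Checks like these do not remove bias, but they make it easier to see whether the data is appropriate for the task before any model is trained.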

Interviewer: In general, the majority of the population around the world is not aware of the implications of biased AI systems. Could you give us specific examples of biased AI systems?

Sonja: For an outsider, it is difficult to tell whether a system is biased, or even whether it uses machine learning at all. The best-known case of such a system is probably Google Search. Everybody knows that Google ranks our search results, and we assume that we are presented with the best-matching results. However, no one actually knows how Google ranks its results, which means we cannot tell which biases influence them. For instance, there could be some political bias in the algorithm that we might never discover. Another example is AI systems used in the financial sector. There have been many stories of bias against marginalized communities resulting in lower eligibility for loans and limited access to financial services. In Europe, the Dutch tax authorities used an algorithm to spot childcare benefits fraud in 2019. As a result of bias in the system, thousands of lower-income families and families from ethnic minorities lost their eligibility for childcare benefits.

The worst part is that people truly do not know when these types of systems come into play. When we get an answer from a governmental institution or a private company, we do not know how they reached their decision. It is almost impossible to detect the biases at play in a system if we do not know the reasoning behind its algorithms.

Interviewer: You are the Co-Founder and Board Member of Block Zero AB where you previously led your team of multidisciplinary designers to bring humans and technology closer. Could you share with us some of your recent work? 

Sonja: I would be happy to share some of the exciting recent work that has been done at Block Zero AB. Most recently, Block Zero collaborated with Google to develop the next generation of the browsing experience, one that combines convenience and intuitive interaction while keeping users in control and preserving their agency. Block Zero is also quite active in the green energy sector. Last year, we designed and co-developed the in-cabin experience for the first Canadian zero-emission vehicle. We are also a long-term partner of E.ON, working in particular on the E.ON ectogrid™ project. It is a new kind of grid that balances energy between buildings with different heating and cooling needs before supplying new energy, thereby minimizing carbon emissions and making overall energy consumption much more efficient. We have been honored to work on E.ON ectogrid™, which won in the energy efficiency category at the UN Climate Conference COP28 in Dubai.

At Block Zero AB, we want to focus on maintaining long-term relationships with our existing partners and clients. Furthermore, I can tell you that our team will keep pushing towards an innovative and more sustainable future.  

Interviewer: You have studied neurobiology and behavior at Luleå University of Technology. How do you incorporate neurobiology and behavioral science into AI system design? 

Sonja: Neurobiology and behavioral science do come into play in the study of AI systems, but I studied the subject even before I went into this area of technology. The motivation behind it was very personal. In the past, I experienced burnout that turned into depression. To cope with it, I decided to learn how the brain works and understand what was happening inside my own brain. Studying the subject indeed deepened my understanding of how humans work and how we relate to the technologies in our lives. Understanding human behavior truly helps me understand how people create narratives around technologies such as AI. Why we do certain things becomes much more understandable when you start looking at it from a behavioral perspective. At the same time, it also makes it easier to identify how to change that behavior.

Interviewer: It is crucial to develop new legal frameworks and regulations to address the unique issues arising from AI technologies, such as ethics in AI systems. If you could give any recommendations to policymakers, what would they be?

Sonja: It is a huge question. I would say that legislators are usually not technical experts in the field they are regulating. Thus, they rely on external experts, who might be lobbyists presenting their own research and facts about the issue. At the moment, the people most prominently and loudly involved in the development of AI are very rich and very powerful men who dominate the public discourse in a way that is not beneficial for the rest of us. To address that, it is critical to make sure that we are relying on the right kind of experts. Legislators need to hear the voices of the people who do not have financial or political power. In short, legislation should not be written to benefit the richest but to protect the weakest.

Interviewer: How do you ensure that your design practices are sustainable, inclusive, and democratic?  

Sonja: That is a great follow-up question; it calls me out on the very things I am asking of the government. Like AI development, the design profession is not necessarily the most diverse field, due to economic, educational, and cultural barriers. On the other hand, design professionals pride themselves on involving users, and I think designers benefit greatly from involving potential users and potentially affected people. In my research, I have mostly been focusing not on those affected, but on the practitioners, developers, and designers who are creating AI systems. I try to engage the people who are normally in the driver’s seat and ask them the uncomfortable questions an affected user would ask. Moreover, I try to constantly educate myself on the work that is being done by marginalized groups. As I am finishing my PhD, I want to do more community-driven work in my postdoc so I can contribute to making designs more inclusive and democratic.

Interviewer: Almost everyone fears that AI will take over their jobs in the future. What needs to be done to ease that fear? 

Sonja: In general, I do not think it is a bad thing that a lot of jobs are being automated. I think some jobs should be automated. It will allow us to work less and focus on care work such as caretaking, nursing, and raising children. That is a good thing. However, it could become a bad thing if we don't change our socio-economic system along with it. We might need to consider universal basic income and redefine what society has historically considered to be work. Our economic systems have been changing over the past couple of hundred years. The current state of capitalism is a product of intentional politics based on conservative and traditionalist ideas pushed in the 1950s and 60s. In other words, we can change the current economic system again through intentional politics.

Interviewer: Congratulations on being recognized as one of the “Rising Stars in AI Ethics” by 100 Brilliant Women in AI Ethics™. What do you aspire to achieve in AI ethics in the future?

Sonja: Thank you so much! I was surprised that I was on the list. I hope I will make it onto the list of the 100 Brilliant Women in AI Ethics™ in the next few years. In terms of my aspirations, I have been focusing my research on practitioners, and I would like to bridge that more towards communities. In addition, as I have both an academic and an industry background, I strive to bridge industry with academia. I have colleagues who are doing radical work that might be much harder to get accepted as valuable in this field. For instance, one of my colleagues works on queer representation within AI systems; it is not only radical but also very artistic and critical work. Another colleague of mine works on diversity and accessibility. I see my job as being the bridge and making sure that their work gets more integrated into existing AI systems.
