Women in AI: Anna Korhonen studies the intersection between linguistics and AI

To give AI-focused female academics and others their deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. As the AI boom continues, we’ll be publishing several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.

Anna Korhonen is Professor of Natural Language Processing (NLP) at the University of Cambridge. She is also a Senior Research Fellow at Churchill College, a fellow of the Association for Computational Linguistics, and a fellow of the European Laboratory for Learning and Intelligent Systems.

Korhonen previously served as a fellow at the Alan Turing Institute, and she holds a PhD in Computer Science and master’s degrees in both Computer Science and Linguistics. She researches NLP and how to develop, adapt, and apply computational techniques to meet the needs of AI. She has a special interest in responsible and “human-centered” NLP that is, in her own words, “based on an understanding of human cognitive, social and creative intelligence.”

Q&A

Briefly, how did you get your start in AI? What attracted you to this field?

I was always fascinated by the beauty and complexity of human intelligence, especially in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all these interests.

What work in the AI field are you most proud of?

While the science of building intelligent machines is fascinating, and one can easily get lost in the world of language modeling, the ultimate reason to build AI is its practical potential. I am most proud of the work where my fundamental research on natural language processing has led to the development of tools that can support social and global well-being. For example, tools that can help us better understand how diseases like cancer or dementia develop and how they can be treated, or apps that can support education.

Much of my current research is driven by the mission of developing AI that can improve human life. AI has huge positive potential for social and global good. A big part of my work as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.

How do you deal with the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I am fortunate to work in a field of AI where we have a large female population and established support networks. I have found these extremely helpful in dealing with career and personal challenges.

For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop large AI models at any cost is a great example. This has wide-ranging implications for the priorities of both academia and industry, as well as broader socio-economic and environmental impacts. Do we need larger models, and what are their global costs and benefits? I think if we had a better gender balance in the field, we would have asked these questions earlier in the game.

What advice would you give to women wanting to enter the AI field?

AI desperately needs more women at all levels, especially leadership levels. The current leadership culture is not necessarily attractive to women, but active participation can change that culture – and ultimately the culture of AI. Women aren’t always adept at supporting each other. I would really like to see a change in approach in this regard: we need to actively network and help each other if we want to achieve a better gender balance in this sector.

What are some of the most pressing issues facing AI during its development?

AI has evolved incredibly rapidly: it has gone from an academic realm to a global phenomenon in less than a decade. During this time, most efforts have gone toward scaling through massive amounts of data and computation. Little effort has been put into thinking about how this technology should be developed so that it can best serve humanity. People have good reason to be concerned about the safety and reliability of AI and its impact on jobs, democracy, the environment, and other areas. We urgently need to put human needs and safety at the center of AI development.

What issues should AI users be aware of?

Current AI, even when appearing highly fluent, ultimately lacks humans’ world knowledge and ability to understand the complex social contexts and norms with which we work. Even the best technology today makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I wouldn’t rely on it to educate my children or make important decisions for me. We humans must remain in charge.

What’s the best way to create AI responsibly?

AI developers tend to treat ethics as an afterthought, once the technology has already been built. The better way is to think about it before any development begins. Questions like “Do I have a diverse enough team to develop a fair system?”, “Is my data really free to use and representative of all users?”, and “Are my techniques robust?” should be asked at the very start.

Although we can solve this problem through education, we can only enforce it through regulation. Recent developments in national and global AI regulations are important and needed to guarantee that future technologies will be safer and more trustworthy.

How can investors better push for responsible AI?

AI regulations are emerging, and companies will eventually have to comply with them. We can think of responsible AI as sustainable AI that is actually worth the investment.


