AI Ethics Committee Interview

We introduce the story of the KRAFTON AI Ethics Committee.

PUBGM Contents Management Team

Minhyeon Park

1. Please briefly introduce which team you are from and what you do.

Hello, my name is Minhyeon Park. I’ve been working in game development and services for over 20 years. I joined Bluehole Ginno Games (PUBG), the predecessor to KRAFTON, in 2017. As a game content and production specialist, I’ve worked as PUBG design department lead, unit lead, and supervisor of PUBG Mobile, and recently I’ve been helping to discover new studios and new games.

2. What was your biggest motivation for joining the AI Ethics Committee?

I’m always curious about new things, and I like to look into new technologies, middleware, and other related information when I’m working on games, so I’ve often been able to solve problems that even professional programmers couldn’t by contributing broad, if shallow, knowledge. I’m also very interested in AI itself. I’ve been thinking a lot about gamified AI, that is, how to set up AI as part of an interaction that works as a genuinely enjoyable challenge, and how to employ a few small tricks to provide enjoyment using AI, if you know what I mean. The AI that has been talked about recently seems to deal with more fundamental concepts, but there will be parts that need to be specialized and simplified for practical applications in the future. I think it would be fun to learn about such things together.

3. How does your education/experience/career influence your approach to AI ethics issues?

First, I think AI will be used a lot to help create games, and I think I can help in this area because I’ve been in game development for a long time. Generative AI will redefine the parts of game development that are invisible to the player, especially the design process, which is usually where human intentions and personal thoughts come into play. That’s where AI ethics is needed. Secondly, running a global service, we often realize that we are stuck with a Western lens. For example, Christmas is a religious concept, so in many countries it cannot be presented as a festival in its usual form. The same in-game element is often interpreted differently depending on the player’s cultural background. I can advise on how to handle those different perspectives.

4. What areas of AI ethics are you most interested in?

Unlike with other technologies, ethics is becoming very important in AI. New technologies usually develop by broadening their base from specialized users to a large number of users, and I think AI is developing from a simple utility into a partner that supports and collaborates with humans. Because the technology is entering into an equal partnership with humans, this will create many new conflicts. So I predict that AI ethics will form an integrated picture covering areas that have long been problematic for humans, such as copyright, discrimination, and user rights, in the context of the people who create AI, the people who use it, and the people who benefit from it.

5. What are your personal goals or expectations for what you hope to accomplish through the work of this committee?

Just as human perceptions have historically changed over time, so will our perceptions and fears of AI. I’m looking forward to seeing whether we hit a plateau in applying the technology or whether there will be a sudden singularity in execution. Unlike past technologies, AI produces outputs within the realm of human understanding, yet its process is hard to understand, which is why AI ethics will be key to helping AI take the next step. I see a big wave of innovation across many parts of the game industry, creating new areas and disrupting old systems. I’m interested in the institutions needed to amplify the positive aspects of AI as we watch the technology’s highs and lows. I would be honored to be a part of this and help smooth the waters a bit.

6. What do you think are the biggest roles and responsibilities of the AI Ethics Committee?

I hope that we can listen to voices from various domains and turn them into practical solutions. We need to make sure that AI technology is not abused and that it provides timely and necessary assistance to humans. On the other hand, I think it is also our duty to consider how to soften the damage that accompanies overall technological development, so that we can point the technology in an appropriate direction.

BAO Team 2

Juwon Jang

1. Please briefly introduce which team you are from and what you do.

Hi, I am from the Business Assessment Office. Our role is to review and provide an objective opinion on whether risks that prevent members from achieving their goals are being managed appropriately.

2. What was your biggest motivation for joining the AI Ethics Committee?

I know Dokyun and Jaewoong, and I was interested in what the members of the Deep Learning Division (DLD) are involved in. I was curious about the Ethics Committee, but I was also curious about the DLD organization as a whole. On the business side, as a member of the Business Assessment Office, I thought I should look at AI ethics from a corporate risk management and internal control perspective.

3. How would you like to apply your expertise to AI ethics issues?

In the United States, the GAO (Government Accountability Office), which conducts assessments and audits, has stated regarding AI ethics that the use of AI technology requires monitoring in terms of accountability, oversight, prevention of negative impacts, exclusion of bias, governance, and so on. I think it is my role to ask practical questions from this perspective. When I think about how I can contribute to AI ethics, it’s mainly in risk management, because AI trained on human data is likely to repeat the mistakes that people have made. Another risk, taking AI services such as virtual friends as an example, is how dependent we let people become on our services, and what problems may arise from AI that is friendly to humans or that behaves like a human. These work areas are easy to leave as gray areas, and I think it is necessary to clearly designate who is responsible.

4. What areas of AI ethics are you most interested in?

I am interested in things like the bias that comes from training on biased data and the lack of transparency in algorithms. I’m also interested in virtual friends, considering how intangible services can contribute significantly to people’s lives and their psychological well-being.

5. What are your personal goals or expectations for what you hope to accomplish through the work of this committee?

The goal is to create strategies and frameworks to prevent, and respond appropriately to, ethical issues arising from the use of AI technology. Don’t you feel more supportive of pharmaceutical and biotech companies that take bioethics seriously? I would like us to be able to say that we are creating technologies and games that not only sell well but also address ethical concerns, and I think this will earn support from consumers and the public.

6. What do you think are the biggest roles and responsibilities of the AI Ethics Committee?

I think it’s important to keep bringing up the topic and get together to talk about it, even if only for a short time. It’s also important to bring in specific examples from practice and discuss them, rather than just stating rules or a code of conduct that sound obvious and good. For example, in the case of TTS (text-to-speech), we need to decide whether it is better for a voice to stutter occasionally so it sounds like a real person, or whether the language habits transmitted through TTS might be bad for children. I think we need to discuss more real-life cases like that.

Core Research Team

Jongho Park

1. Please briefly introduce which team you are from and what you do.

Hi, my name is Jongho Park, and I am from the Core Research Team at the Deep Learning Division. There are two main things we’re doing right now. One is researching why the Transformer, the architecture underlying language models, can learn languages so well and how it can be improved. Secondly, we are working on chatbots, which vary slightly in nature depending on the project; right now we are thinking about implementing NPC chatbots to add more fun to inZOI.

2. What was your biggest motivation for joining the AI Ethics Committee?

Since my work is in AI, it is closely related to AI ethics. I have been interested in philosophy/ethics since I was an undergraduate student, and I studied AI ethics because I wanted to have a fundamental foundation that could help me hold my ground in a field that is developing so fast. Now, as someone who works in deep learning, I joined the AI Ethics Committee to provide information on the current state of AI and ethical issues, and to listen to opinions from outside the DLD.

3. How would you like to apply your expertise to AI ethics issues?

Since I develop AI models, I can think of many angles, but if I limit the topic to language models, there are ways to preprocess the training data to filter out sexual content or discriminatory language. When training a model, you can teach it to ignore poor-quality data, and when using a trained language model, you can adjust its prompt so that the output is more accurate. If the AI Ethics Committee points out an issue, I can look at the latest research related to it and quickly try whatever can be implemented.
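
To make that kind of preprocessing a little more concrete, below is a minimal sketch of a data-filtering step that could run before language-model training. The blocklist, quality heuristic, and function names here are hypothetical placeholders invented for illustration; a production pipeline would typically rely on trained toxicity and quality classifiers rather than simple keyword matching.

```python
# Minimal sketch of a training-data filter, illustrating the kind of
# preprocessing described above. BLOCKLIST and the quality heuristic are
# hypothetical placeholders; real pipelines usually use trained classifiers.

BLOCKLIST = {"example_slur", "example_explicit_term"}  # hypothetical terms


def is_clean(text: str) -> bool:
    """Reject samples containing any blocklisted term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)


def quality_score(text: str) -> float:
    """Toy heuristic: longer, punctuation-terminated samples score higher."""
    score = min(len(text.split()) / 20.0, 1.0)
    if text.strip().endswith((".", "!", "?")):
        score += 0.2
    return score


def preprocess(corpus: list[str], min_quality: float = 0.3) -> list[str]:
    """Keep only clean, reasonably well-formed samples for training."""
    return [t for t in corpus if is_clean(t) and quality_score(t) >= min_quality]


if __name__ == "__main__":
    raw = [
        "A well-formed sentence about building NPC dialogue for a game.",
        "ok",                      # too short, fails the quality heuristic
        "contains example_slur",   # rejected by the blocklist
    ]
    print(preprocess(raw))  # only the first sample survives
```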

4. What do you think are the biggest ethical concerns around AI technology today?

When I talk about AI-related topics with people around me, I find that the information most people have about AI is very biased. There are various distortions, including false information, noise marketing, and inflated claims. Sometimes I’ve also heard people underestimate AI so much that they say we don’t need to prepare at all. So before discussing the problems of AI technology, I think we should always keep in mind what problems misperceptions of AI can cause. Of course, that makes it important to know where to get accurate information, so in the end I think we should educate ourselves while also paying attention to the sources of our information.

5. What are your personal goals or expectations for what you hope to accomplish through the work of this committee?

Through the AI Ethics Committee, I hope we can create a set of ethics tools that blend AI technology with the direction KRAFTON is headed and that our employees can relate to. Also, through the committee’s activities, I hope the Deep Learning Division can get into the habit of looking at models and products from perspectives outside its own field (those of game developers, game users, and AI users). If I may be a little more ambitious, it would be ideal if we could dispel the preconception that ethics hinders the development of AI and present a vision in which AI and ethics go hand in hand.

6. What do you think are the biggest roles and responsibilities of the AI Ethics Committee?

It’s along the lines of my answer to the previous question. I think the AI Ethics Committee should promote the development of AI technology in a way that is responsible and centered on humans and game users. And since KRAFTON is a game company, the committee should define the relationship between AI ethics and the game business on its own terms. Profit matters to a company, but consideration for game consumers and AI users should come first, so I think it is important for the committee to play a bridging role so that those voices are heard by the people who work on AI.