Naver said its AI Safety Framework (ASF) defines potential AI-related risks as serious disempowerment of the human species and misuse of the technology.

Under the framework, Naver will regularly assess threats posed by its AI systems, updating the assessment every three months for its most advanced AI technologies, known as "frontier AI".

The company will also conduct an additional evaluation when an AI system's capability increases more than six times in a short period, Yonhap news agency reports.

The company will apply its AI risk assessment matrix to examine the potential for misuse of the technology, considering the purpose and risk level of each system before distribution.

Naver said it will continue to improve its ASF to reflect greater cultural diversity to help governments and companies at home and abroad develop their own sovereign AI.

"We will never cease to develop sovereign AI for the global market and advance our ASF to contribute to building a sustainable AI ecosystem, where many different AI models that reflect the culture and values of different regions can be used safely and coexist," said CEO Choi Soo-yeon.