On January 21, 2025, South Korea enacted the Basic Act on Artificial Intelligence and Creation of a Trust Base, marking a major step toward structured AI governance. Set to take effect on January 22, 2026, the law establishes key definitions, development principles, and oversight requirements for high-impact AI systems. Its goal is to ensure AI technologies are developed safely, ethically, and transparently.
The Act emphasizes safety, reliability, and transparency as its guiding principles.
South Korea’s AI law reflects a global shift toward stricter regulation. Companies leveraging AI must adapt quickly, balancing innovation with compliance. Redacto provides the tools and guidance needed to meet these requirements while building trust through ethical AI practices.
Organizations should review their AI systems to identify high-impact applications and conduct risk assessments, documenting measures to mitigate potential risks.
Organizations should clearly label AI-generated content, especially for generative AI systems, to maintain transparency and user trust. Additionally, they should develop strategies to provide meaningful explanations of AI outputs, including the reasoning and criteria behind automated decisions, ensuring users understand how the system operates.
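As a minimal illustration of the labeling recommendation above, the sketch below wraps generated text with a visible disclosure and machine-readable metadata. The `label_ai_output` helper, the disclosure wording, and the metadata fields are illustrative assumptions, not requirements taken from the Act.

```python
import json
from datetime import datetime, timezone

# Illustrative disclosure text; actual wording should follow legal guidance.
AI_DISCLOSURE = "This content was generated by an AI system."

def label_ai_output(text: str, model_name: str) -> dict:
    """Attach a visible disclosure and machine-readable provenance metadata
    to a piece of AI-generated content."""
    return {
        "content": text,
        "disclosure": AI_DISCLOSURE,
        "metadata": {
            "generated_by": model_name,      # which system produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "ai_generated": True,            # flag for downstream filtering/audits
        },
    }

record = label_ai_output("Quarterly summary...", model_name="example-model-v1")
print(json.dumps(record, indent=2))
```

Keeping the disclosure both human-readable (the `disclosure` field) and machine-readable (the `ai_generated` flag) lets downstream systems filter or audit labeled content automatically.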
Organizations should implement continuous risk management for all AI systems and maintain documentation of safety, reliability, and user protection for regulatory compliance.
Organizations should assign responsible personnel or a domestic representative to oversee compliance in South Korea and consider forming an autonomous AI ethics committee to guide ethical practices and ensure regulatory adherence.
Organizations should monitor updates from the Ministry of Science and ICT, including new decrees, the Basic AI Plan, and AI ethics publications, and leverage AI governance solutions to maintain ongoing compliance as regulations evolve.
Only high-impact AI systems that significantly affect human life, safety, or fundamental rights require formal impact assessments. Other AI systems should still undergo internal risk reviews to ensure ethical and safe use.
Generative AI covers systems that produce text, images, or other content in response to prompts rather than direct human authorship. Organizations must clearly label such AI-generated outputs to maintain transparency and trust.
Organizations must appoint a domestic representative or responsible officer to oversee compliance, maintain documentation, and coordinate with authorities.
Yes, human oversight is mandatory for high-impact AI to ensure safety, reliability, and ethical operation.
Companies must keep records of risk assessments, safety measures, and AI system operations for audits or regulatory reviews.
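One simple way to keep such records is an append-only log of assessments. The sketch below writes each risk-assessment entry as a JSON line; the file name, record fields, and the `log_assessment` helper are illustrative assumptions, not a format prescribed by the Act.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_assessment(log_path: Path, system_name: str,
                   risk_level: str, mitigations: list[str]) -> dict:
    """Append one timestamped risk-assessment record to a JSONL log file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "risk_level": risk_level,
        "mitigations": mitigations,
    }
    # Append mode preserves earlier records, giving an audit-friendly history.
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_file = Path("ai_risk_assessments.jsonl")
entry = log_assessment(
    log_file,
    system_name="loan-scoring-model",
    risk_level="high-impact",
    mitigations=["human review of denials"],
)
```

An append-only, timestamped format like JSONL makes it straightforward to reconstruct what was assessed and when, which is exactly what an audit or regulatory review asks for.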