I. INTRODUCTION
In January 2025, the Framework Act on the Promotion of Artificial Intelligence Development and the Establishment of Trust (the “Framework Act on AI”) was enacted with the aim of supporting the sound development of artificial intelligence (AI), building a foundation of trust, enhancing citizens’ quality of life, and strengthening national competitiveness. The Act will enter into force in January 2026.
The Framework Act on AI delegates to its Enforcement Decree matters including: the implementation of training data construction projects; methods of advance notification or labeling to ensure transparency; and procedures for designating and reviewing high-impact AI systems.
As a follow-up, on September 8, 2025, the Ministry of Science and ICT (MSIT) published a draft Enforcement Decree clarifying the standards and procedures mandated under the Framework Act. The key provisions of the draft and their implications are summarized below.
II. KEY PROVISIONS
The draft Enforcement Decree comprises 34 Articles in total (32 main provisions and 2 supplementary provisions). According to MSIT, the draft was prepared with reference to global regulatory trends and the state of the domestic AI industry, and emphasizes promotion rather than regulation, incorporating only the minimum regulatory requirements necessary.
1. Promotion of the AI Industry
1) Support for Training Data Construction Projects
Under Article 15(1) of the Framework Act on AI, the Minister of Science and ICT, in consultation with relevant central administrative agencies, must pursue policies to promote the production, collection, management, distribution, and use of data (“training data”) for AI development and application. Article 15(2) further provides that eligible projects may be selected and supported within the scope of the budget.
According to Article 12(1) of the draft Enforcement Decree, eligible projects include:
(i) R&D projects for training data production and processing technologies;
(ii) Projects concerning the production, collection, management, distribution, and utilization of training data for AI service development;
(iii) Projects for legislative research, development of standard contracts, and creation of standards and guidelines for training data; and
(iv) Other projects deemed necessary by the Minister of Science and ICT.
2) Support for Introduction and Utilization of AI Technologies
Article 16(1)5 of the Framework Act on AI authorizes central and local governments to provide support, as specified by Presidential Decree, to promote the adoption and wider use of AI technologies by enterprises and public institutions.
The draft specifies the following as eligible matters (Article 15(1) of the draft):
(i) Provision of information on AI technologies;
(ii) Education and technical support necessary for protecting users and affected persons;
(iii) Establishment and provision of AI computing infrastructure (hardware, networks, software, data storage, etc.); and
(iv) Other matters deemed necessary by central or local government authorities.
2. Obligations on Transparency and Safety
1) Transparency Obligations
a) Advance Notification
AI business operators providing products or services based on high-impact AI or generative AI must notify users in advance that such products or services are AI-driven (Article 31(1) of the Act; Article 22(1) of the draft). Notification may be made by:
(i) Stating it directly on the product/service, or in the contract, user manual, or terms of use;
(ii) Displaying it on the user’s screen or device;
(iii) Posting it in an easily recognizable manner at the place of provision (including related locations); or
(iv) Any other method recognized by the Minister of Science and ICT in light of product characteristics.
b) Labeling of Outputs from Generative AI
In addition, AI business operators that provide generative AI, or products and services using it, must indicate that the outputs were generated by generative AI (Article 31(2) of the Framework Act on AI).
Under Article 22(2) of the draft Enforcement Decree, this indication must appear on the outputs in a format readable by either humans or machines, effectively recognizing invisible watermarking as a valid method.
c) Labeling of Deepfake Outputs
Where an AI business operator provides outputs generated by an AI system that are virtually indistinguishable from reality—such as synthetic sounds, images, or videos (“deepfake outputs”)—the operator must clearly notify or label that the outputs were AI-generated. If the outputs constitute or form part of artistic or creative works, the notification or labeling may be provided in a manner that does not interfere with their exhibition or enjoyment (Article 31(3) of the Framework Act on AI). The specific labeling methods are delegated to the Enforcement Decree.
Deepfake outputs must be notified or labeled: (i) in a manner that allows users to easily verify the content through sight, hearing, or software; and (ii) in a manner clearly recognizable to users, taking into account the age, physical condition, and social circumstances of the primary user base (Article 22(3) of the draft Enforcement Decree).
d) Exemptions from Transparency Obligations
The transparency obligations do not apply where AI is used solely for the operator’s internal business purposes, or where it is already obvious that a product or service is based on high-impact AI or generative AI—for example, where this is evident from the product or service name, on-screen messages, or markings on the product’s exterior (Article 22(4) of the draft Enforcement Decree).
2) Safety Obligations
The Framework Act on AI requires operators of AI systems whose cumulative training computation exceeds a prescribed threshold to establish risk management systems to ensure safety. These systems must identify, assess, and mitigate risks throughout the AI lifecycle, and monitor and respond to AI-related safety incidents (Article 32(1)).
Under the draft Enforcement Decree, these obligations apply to AI systems whose cumulative training computation is at least 10^26 floating-point operations (FLOPs) and that meet criteria publicly notified by the Minister of Science and ICT, taking into account the level of AI development and degree of risk (Article 23(1)). This threshold is ten times higher, and thus more lenient, than the 10^25 FLOPs standard under the EU AI Act.
3) High-Impact AI
AI systems that may significantly affect or pose risks to human life, physical safety, or fundamental rights—and that are used in sectors such as energy, healthcare, nuclear power, transportation, or education—are classified as “high-impact AI” (Framework Act on AI, Article 2(4)). Operators of high-impact AI must implement measures to ensure safety and reliability (Article 34(1)).
Where an AI business operator provides products or services using high-impact AI, the operator must publish the following on its website (excluding trade secrets) (draft Enforcement Decree, Article 26(1)):
(i) The main contents of its risk management policies and organizational structure (Framework Act on AI, Article 34(1)1);
(ii) The principal criteria used to generate final outputs, along with an overview of the training data used for AI development and application (Article 34(1)2);
(iii) Measures for user protection; and
(iv) The name and contact details of the person responsible for managing or overseeing the high-impact AI.
Furthermore, operators providing products or services using high-impact AI must endeavor to conduct an impact assessment in advance to evaluate potential effects on fundamental rights (Framework Act on AI, Article 35(3)). The specific content of the assessment is delegated to the Enforcement Decree (draft Enforcement Decree, Article 27(1)). The assessment must include:
(i) Identification of the individuals or groups whose fundamental rights may be affected by the relevant high-impact AI (including groups with specific characteristics);
(ii) Identification of the types of fundamental rights that may be affected;
(iii) The content and scope of social and economic impacts on individuals’ fundamental rights;
(iv) The patterns of use of the relevant high-impact AI;
(v) The quantitative or qualitative evaluation indicators used in the assessment, and the methods of deriving results;
(vi) Measures to prevent risks and recover losses arising from the relevant high-impact AI; and
(vii) Where improvements are required based on the results of the impact assessment, the implementation plan for such improvements.
3. Grace Period for Administrative Fines
An operator may be fined up to KRW 30 million for (i) failing to provide the advance notification required for transparency, (ii) failing to designate a domestic representative despite exceeding the threshold as a foreign operator, or (iii) failing to comply with a corrective order issued for violations of the Framework Act on AI (Article 43(1)). MSIT has announced that a grace period will apply to the imposition of such fines to minimize confusion during the initial enforcement stage, with the specific duration to be determined in consultation with stakeholders.
III. IMPLICATIONS
Ahead of the enforcement of the Framework Act on AI, the release of the draft Enforcement Decree is significant in that it provides concrete details on key matters, including: the scope of projects and criteria for supporting AI industry promotion; the transparency and safety obligations of business operators and applicable exemptions; the standards for designating high-impact AI; and the responsibilities of operators using such AI.
MSIT plans to conduct a public consultation on the draft Enforcement Decree and to announce the final legislation in October. It also intends to issue more detailed guidelines on AI transparency and safety, as well as on the responsibilities of high-impact AI operators.
Ultimately, as the draft is expected to be further supplemented through the consultation process and in response to overseas regulatory developments, ongoing monitoring of relevant trends will be essential.
* * *
BKL’s AI Team continuously monitors AI regulatory developments in Korea and worldwide and stands ready to provide comprehensive advisory services and practical support, helping companies leverage AI effectively while minimizing legal risks.
[Korean Version]