
Kazakhstan is setting the rules of the game for artificial intelligence: a review of the Mazhilis session

On May 14, 2025, the Mazhilis of the Parliament of the Republic of Kazakhstan approved the draft Law "On Artificial Intelligence", together with related amendments, in the first reading. This means the final version of the law has not yet been adopted and the document is still being actively discussed.

We studied the session at which the bill was considered and have prepared an overview of the key provisions and discussions below. The session was an important step toward building a regulatory framework in Kazakhstan for artificial intelligence, a technology that already affects the economy, education, the media and the daily lives of citizens.

The essence of the bill

Deputy Yekaterina Smyshlyaeva presented the key provisions of the draft:

  1. Safety of citizens – priority of user rights and freedoms, protection of personal data, transparency in the use of technology.
  2. Legal certainty – the law should create predictable "rules of the game" for developers and investors, which will increase confidence in AI and attract capital.
  3. The role of the State – defining the powers of the government and government agencies in the field of AI implementation.
  4. Ethical principles and copyright – basic norms on the acceptable use of the technology and approaches to the protection of intellectual property.
  5. The national artificial intelligence platform ("Intellect") – infrastructure that unites computing power, data libraries, and services for developers.

The draft law consists of 7 chapters and 28 articles aimed at organizational regulation, security and clarifying the rights of participants.

Related amendments

Amendments to other legislative acts were approved at the same time:

  • The laws on consumer protection and on personal data – their application to AI products.
  • Access to electronic resources – for the functioning of the national platform.
  • AI content control – prevention of manipulation, fakes, deepfakes.
  • Administrative Code – administrative liability for:
    • failure to label synthetic content (a minimal labeling sketch follows this list);
    • failures in the risk management of AI systems that have led to harm to citizens' health or well-being.
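
For illustration, here is a minimal sketch of what labeling of synthetic content could look like in practice. This is only an assumption: the bill does not yet define a labeling format, and the SyntheticLabel structure and its field names below are hypothetical.

```python
# Hypothetical sketch: attaching a machine-readable "synthetic content" label.
# The field names and JSON form are assumptions, not the format from the bill.
from dataclasses import dataclass, asdict
import json

@dataclass
class SyntheticLabel:
    ai_generated: bool   # content was produced or altered by an AI system
    model_name: str      # which system produced it (illustrative field)
    generated_at: str    # ISO 8601 timestamp

def attach_label(content: str, label: SyntheticLabel) -> dict:
    """Bundle content with a machine-readable synthetic-content label."""
    return {"content": content, "synthetic_label": asdict(label)}

labeled = attach_label(
    "Text produced by a generative model.",
    SyntheticLabel(ai_generated=True,
                   model_name="example-model",
                   generated_at="2025-05-14T00:00:00Z"),
)
print(json.dumps(labeled, ensure_ascii=False, indent=2))
```

The final requirements may well call for a different mechanism (for example, visible watermarks or standardized metadata), so this sketch only conveys the general idea of attaching a label to AI-generated output.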

Questions from deputies and government responses

  • Regulation vs innovation.

The EU experience has shown that excessive rigidity leads to an outflow of research. In Kazakhstan, only high-risk systems will be regulated, while the rest will operate under a flexible regime.

  • Risks of data leaks. The fear: concentration of data on one platform.

Answer: access will be limited to anonymized data sets only, under strict security rules.
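
To make the "anonymized sets only" answer more concrete, below is a minimal sketch of pseudonymizing personal fields before a dataset is shared. The field names and the hashing scheme are assumptions for illustration; note that pseudonymization alone is usually not considered full anonymization under personal data law.

```python
# Illustrative sketch only: field names and hashing scheme are assumptions,
# not rules from the bill or from the national platform.
import hashlib

PERSONAL_FIELDS = {"full_name", "iin", "phone"}  # fields treated as personal data here

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace personal identifiers with salted SHA-256 hashes before sharing.

    Pseudonymization is only a first step; full anonymization typically
    requires removing or generalizing indirect identifiers as well.
    """
    out = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS:
            out[key] = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
        else:
            out[key] = value
    return out

print(pseudonymize({"full_name": "A. Example", "iin": "000101123456", "city": "Astana"},
                   salt="demo-salt"))
```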

  • Fraud and deepfakes. Question: are changes to the Criminal Code needed?

Answer: for now, administrative liability and labeling requirements are being introduced, while criminal-law norms are still being worked out.

  • Education. The deputies noted students' active use of ChatGPT.

Answer: the law regulates not the educational process, but the legal environment; the issue of education will be discussed separately.

  • Copyright. Question: who owns works created by AI?

Answer: rights are assigned to the owner of the system, but the topic requires further study.

Kazakhstan's main approaches, based on the discussion and the draft law:

  1. Flexibility – framework norms with the ability to make adjustments as the technology evolves.
  2. Infrastructure – creation of a national AI platform as a center of competence and trust.
  3. Protection of society – mandatory labeling of content, prohibition of social scoring, restrictions on autonomous systems.
  4. Attracting investment – a predictable legal environment instead of excessive barriers.
  5. Open issues – copyright and criminal liability will be clarified later.

International experience. Kazakhstan is moving in line with global trends, but adjusted for the local context:

  • The European Union has already launched the AI Act, the first comprehensive law based on classification by risk levels. But its strictness has drawn criticism: large companies have begun to consider moving research outside the EU.
  • The United States has not yet adopted a single federal law. Regulation develops at the state level (Colorado adopted an AI act on high-risk systems), as well as through recommendations and guidelines for government agencies.
  • Great Britain chose a soft "pro-innovation" path: regulation through sectoral regulators, without a single law.
  • Asia is actively moving forward:
  • China has introduced rules on algorithms and deepfakes, with mandatory labeling;
  • Japan and South Korea have adopted framework laws on AI, emphasizing a balance between security and innovation;
  • Singapore is developing its own framework for generative AI and the AI Verify testing toolkit.
  • Canada and Brazil are also working on comprehensive laws, but these are still under discussion.

The general trend: regulation is built on a risk-based principle (the higher the risk, the stricter the rules). Mandatory elements appear almost everywhere: labeling of synthetic content, protection of personal data, transparency of algorithms, and responsibility of system owners.
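
The risk-based principle can be pictured as a simple mapping from a risk tier to a set of obligations. A minimal sketch follows; the tiers and obligations in it are purely illustrative assumptions, not the classification used by the Kazakhstani draft or the EU AI Act.

```python
# Illustrative risk tiers and obligations (assumptions, not the law's actual classification).
REQUIREMENTS_BY_TIER = {
    "high": ["pre-deployment risk assessment", "human oversight",
             "incident reporting", "labeling of synthetic content"],
    "limited": ["labeling of synthetic content", "transparency notice to users"],
    "minimal": ["voluntary code of conduct"],
}

def requirements_for(tier: str) -> list[str]:
    """The higher the risk tier, the stricter and longer the list of obligations."""
    return REQUIREMENTS_BY_TIER.get(tier, REQUIREMENTS_BY_TIER["minimal"])

for tier in ("high", "limited", "minimal"):
    print(f"{tier}: {', '.join(requirements_for(tier))}")
```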

Conclusions. Kazakhstan is shaping its own path: regulate risky systems, but do not limit innovation. At the same time, the emphasis is on national infrastructure and on trust between the government and business.

Thus, the country strives for a balance between security and development, positioning itself as a jurisdiction with systemic but flexible regulation. For businesses, this means they should already be preparing internal policies on AI risk management, content labeling, and data protection, so as to be ready for the new requirements by the time of the law's second reading.

Sources and materials for review

News and comments on the results of the session: Zakon.kz and other Kazakhstani media

Comments (1)


In light of the latest news about the AI bill in Kazakhstan, my impression is that, although the document looks like an important step forward, it carries a number of serious risks. The main ones are a potential slowdown of innovation, since strict rules may scare off small startups. There are also doubts about whether the law can be implemented in practice, because technology develops much faster than legislation, and, importantly, our law enforcement agencies are not yet digitally ready to enforce such legislation effectively. And, of course, questions of responsibility and ethics remain open: who will answer for AI mistakes, and how will this work in reality? I think this issue needs to be approached very carefully.
