Involuti

Chapter 1: General Provisions

Article 1 (Purpose)

Involuti (hereinafter referred to as "the Company") establishes and implements this Youth Protection Policy based on the "Act on Promotion of Information and Communications Network Utilization and Information Protection, etc." and the "Youth Protection Act" to protect youth from harmful environments and provide beneficial information so they can grow into healthy individuals. This policy aims to define administrative and technical measures to ensure that harmful information for youth is not exposed within the AI-based role-playing service (hereinafter referred to as "the Service") provided by the Company.

Article 2 (Basic Principles of Youth Protection)

  1. The Company operates Safety Guardrails at all times within the service to ensure that youth are protected from mentally and physically harmful environments and can grow healthily.
  2. The Company implements technical and administrative measures as prescribed by relevant laws to protect youth from harmful information and seeks the cooperation of users for youth protection.
  3. The Company makes its best efforts to ensure that youth are not exposed to inappropriate responses that may occur due to the nature of AI technology and takes immediate action following a zero-tolerance principle upon the discovery of any violations.

Chapter 2: Blocking and Management Measures for Youth-Harmful Information

Article 3 (Technical Protection Measures: AI Safety Filtering)

  1. Blocking Harmful Prompts: The Company operates technical measures that detect and block, in real time, user prompts that constitute youth-harmful media or solicit inappropriate sexual descriptions, violence, drug use, or gambling.
  2. Filtering Generated Content: The Company monitors in real time all content generated by the AI, such as text and images, and prevents the exposure of any content that may be harmful to youth.
  3. Continuous Learning: The Company continuously tunes the AI models to respond to irregular attempts to generate harmful information and advances the performance of safety filters.

Article 4 (Administrative Protection Measures and Monitoring)

  1. Constant Monitoring: The Company monitors conversation logs and shared content within the service at all times to prevent the distribution of youth-harmful information and takes immediate action if inappropriate activity is detected.
  2. Usage Restrictions: For users who intentionally generate or distribute youth-harmful information, the Company permanently restricts the use of the service or deletes the account in accordance with this policy and the Terms of Use.
  3. Report System Operation: The Company operates a function that allows users to immediately report any youth-harmful information found while using the service. Received reports are reviewed and processed without delay.

Chapter 3: Prohibitions and Zero-Tolerance Principle

Article 5 (Prohibited Acts Against Youth and Legal Response)

Users must not perform the following acts within the service. In case of a violation of this article, the Company may delete the offending account and report the matter to judicial authorities without prior warning.

  1. Related to Child and Youth Sexual Exploitation Material (CSAM):
    • Entering prompts or inducing conversations that depict minors as sexual objects or involve their abuse or exploitation.
    • Using AI to generate sexual images or related text content of minors.
  2. Infringement Against Real Youth and Students:
    • Misappropriating the personal information of real minors or students to create characters, or placing such characters in sexual or violent situations.
  3. Encouraging Harmful Activities:
    • Educating or recommending the use of harmful substances such as alcohol, tobacco, and drugs to youth.
    • Glorifying or providing methods for acts that seriously harm the physical and mental health of youth, such as suicide, self-harm, and running away from home.

Chapter 4: Limitation of Liability and Indemnification of the Operator

Article 6 (Indemnification for Technical Limitations)

  1. Autonomy of AI: Because AI models generate responses through probabilistic calculations, unexpected harmful responses (obscene or suggestive descriptions, violence, hate speech, etc.) may be generated despite the Company's strong filtering measures. Users acknowledge these technical limitations when using the service, and the Company shall not be liable for compensation for mental harm to youth caused thereby.
  2. User's Responsibility for Input: If a user generates harmful content by using irregular language (slang, symbols, etc.) to bypass the filtering system, all legal responsibility for that content shall rest with the user.

Article 7 (Obligations of Guardians and Users)

  1. Management Obligation of Guardians: Legal representatives of minor users have the obligation to manage and supervise whether their children comply with this policy when using the service.
  2. Obligation to Report: Users must cooperate in youth protection by immediately reporting to the Company any harmful information that violates this policy found while using the service.

Chapter 5: Designation of Youth Protection Officer

Article 8 (Youth Protection Officer and Person in Charge)

The Company designates and operates a Youth Protection Officer and a person in charge as follows to protect youth from youth-harmful information.

Youth Protection Officer

Chapter 6: Miscellaneous

Article 9 (Revision and Notification Obligation)

The Company reserves the right to modify or supplement this Youth Protection Policy at any time due to operational necessity or changes in relevant laws. The modified policy will be announced through notices within the service.

Enforcement: 2026-02-12