Content Moderation Policy

Last Updated: March 13, 2026

1. Introduction

This Content Moderation Policy is provided by Sparknet Limited (hereinafter referred to as "we", "our", or "us"), the operator of Soul Love.
At Sparknet Limited, we are committed to fostering a respectful, safe, and creative environment for all users of soulove.ai (the "Website"). This policy outlines how we monitor, restrict, and respond to content that violates our community standards or legal obligations.
All interactions on the platform involve AI-generated characters and media. While we support a wide range of creative expression, we do not tolerate misuse of the Services to create or share harmful, illegal, or exploitative content.

2. Scope of Moderation

This policy applies to all aspects of our Services, including but not limited to:
  • AI Outputs: Generated messages, images, videos, and other media.
  • User Inputs: Prompts and instructions submitted by users.
  • Metadata: Character names, descriptions, and tags.
  • Profile Content: Usernames and profile descriptions.

3. Prohibited Content

You may not use the Services to generate, promote, or simulate any of the following:

3.1. Illegal and Harmful Content

  • Real-World Harm: Promotion of violence, including sexual violence.
  • Exploitation: Human trafficking or any other form of exploitation.
  • CSAM: Child sexual abuse material, including any implied depictions of minors.

3.2. Harassment and Hate Speech

  • Impersonation: Non-consensual impersonation or harassment of real individuals.
  • Hate Speech: Content that promotes hate or extremism against protected groups.
  • Self-Harm: Promotion or encouragement of self-harm or suicide.

3.3. Malicious Activity

  • Fraud: Scams, phishing, or illegal financial activities.
  • System Abuse: Fully automated decision-making designed to circumvent safety filters.

4. Moderation Systems

To ensure compliance, we employ a multi-layered moderation approach:
  1. Automated Filters: Real-time AI scanning of prompts and outputs to block prohibited categories.
  2. Human Review: Periodic audits and review of flagged content by our trust and safety team.
  3. User Reporting: Tools that allow users to report content they believe violates these policies.

5. Enforcement and Account Actions

Depending on the severity and frequency of violations, we may take one or more of the following actions:
  • Content Removal: Removing specific characters or generated content.
  • Warnings: Issuing formal warnings or requiring prompt edits to content.
  • Suspension: Temporarily suspending account access.
  • Termination: Permanently banning an account for severe or repeated violations.
  • Legal Escalation: Reporting activity to law enforcement where required by law.

6. User Responsibility and Appeals

6.1. Responsibility

Users are solely responsible for their prompts and any content generated through their interaction with the platform. Attempting to circumvent moderation systems is a violation of our Terms of Service.

6.2. Appeals

If you believe a moderation decision was made in error, you may submit an appeal to [email protected]. Each appeal will be reviewed by a member of our trust and safety team.

7. Continuous Improvement

We are committed to the ongoing refinement of our safety systems through:
  • Regular audits and technical updates to our filters.
  • Collaboration with safety experts and legal advisors.
  • Transparent updates to our community regarding major enforcement changes.

8. Contact Us

If you have any questions about our Services, or to report any violations of this policy, please contact us at:
  • Email: [email protected]
  • Address: Room H28, Blk EH, Golden Bear Ind. Ctr., 66-82 Chai Wan Kok St., Tsuen Wan, Hong Kong