Content Moderation and Investigation Policy

Last Updated: April 2026
To ensure a safe and compliant environment, soulove.ai employs a proactive moderation and investigation framework.

1. Monitoring and Review

We use a combination of automated and manual systems to monitor all content:

- Pre-Screening: AI-driven filters scan for prohibited content (CSAM, non-consensual content, extreme violence) before publication.
- Proactive Monitoring: Our safety team performs daily manual audits of trending and newly generated content.

2. Investigation Procedure

Upon receiving a report or identifying a potential violation, we initiate a formal investigation:

1. Immediate Quarantine: The flagged content is temporarily hidden from public view for the duration of the investigation.
2. Data Review: We review the creator's identity records, IP logs, and the specific prompts and data used to generate the content.
3. Internal Audit: Our safety team determines whether the content violates the Prohibited Content Policy or the record-keeping standards of 18 U.S.C. § 2257.

3. Reporting and Enforcement

- Removal: Non-compliant content is permanently deleted within 24 hours of the investigation's conclusion.
- Account Action: Violators face immediate and permanent account suspension, with forfeiture of all credits.
- External Reporting:
  - Law Enforcement: Suspected illegal activity (e.g., CSAM or trafficking) is reported to NCMEC and the relevant authorities.
  - Payment Networks: We maintain records of prohibited-content incidents and provide them to payment processors and card networks (Visa/Mastercard) upon audit request to demonstrate our compliance efforts.

4. Compliance Appeals

Users whose content has been removed or whose accounts have been suspended may appeal the decision within 14 days by contacting [email protected]. A separate review will be conducted to ensure the fairness of the original decision.