We use multiple layers of moderation to help ensure CapCut remains a safe, positive, and respectful place to create and share. To do this, we combine automated tools with human moderation to identify and review content hosted on our platform and to enforce our Community Guidelines. This includes content such as:
- Content made or uploaded by you, such as templates
- User interactions, such as comments in our "Template" and "Workspace" features
- AI-Generated Content (AIGC) produced by our editing features, such as AI effects and AI Lab
Detecting Violations and Enforcement
We proactively monitor for violative content and also enable people to report content to us for review. We use both technology and human moderation experts to find and assess potential violations of our Community Guidelines. Our policies outline a range of potential enforcement actions, depending on the context and severity of the violation and the rules breached. These include:
- Removing content, such as templates, comments, and effects
- Suspending accounts that severely or repeatedly violate our policies
- Restricting the reach of content
Contextual Exceptions
Context is an important part of our moderation process. As outlined in our Community Guidelines, we may allow limited exceptions for content that is educational, documentary, scientific, artistic, satirical, fictional, counterspeech, newsworthy, or contributes to individual expression on topics of social importance.
Appeals
We believe in transparency and fairness in our moderation decisions. For every action taken on user content, we send an in-app notification explaining which content was affected and why. If you believe we have made an error, you may file an appeal. Appeals are reviewed by a different moderator than the original reviewer to ensure an impartial reassessment.