Safety & Enforcement

How we review reports, make enforcement decisions, and handle policy violations.

Updated: January 16, 2026

Overview

Mor is committed to maintaining a safe and trustworthy platform. This document explains how we review reports, what enforcement actions we may take, and how our processes work.

For purposes of this page, "Mor" refers to our website, application, and related services. This page supplements our Terms of Service, Content Policy, and other applicable policies. Enforcement may apply to individual content, accounts, or patterns of behavior across multiple items or interactions.

Our goal is to enforce our policies consistently and fairly while providing transparency about our processes. Consistency means applying the same standards and criteria to similar situations—not necessarily identical outcomes, as individual circumstances differ. We continuously evaluate and improve our enforcement systems. Enforcement decisions may occasionally be incorrect; our processes include mechanisms to address errors when identified.

This page describes Mor's general approach and does not create contractual rights or obligations.

Report Review Process

When a report is submitted through our reporting system, it enters our review queue. Reports are processed based on several factors:

  • Severity: Reports involving child safety, imminent harm, or critical security vulnerabilities are prioritized.
  • Report type: Different report categories may follow different review workflows.
  • Available context: Reports with clear evidence and context can be evaluated more efficiently.

Inputs we may review

When evaluating a report, we may consider:

  • The reported content and nearby context
  • Account history and prior enforcement actions
  • Information submitted by the reporter or affected user
  • Technical signals, logs, and metadata necessary to operate, secure, and prevent abuse of Mor
  • Publicly available information where relevant
  • Reports from trusted or official sources, where applicable

Report abuse

False, spam, or harassing reports may be deprioritized or disregarded. Repeated or systematic abuse of the reporting system may result in restrictions on the ability to submit reports or other account-level actions.

Review timelines

We do not commit to specific response times. Review timelines vary based on report complexity, queue volume, and the investigation required. We process reports as efficiently as possible while maintaining thoroughness.

Enforcement Actions

When we determine that content or behavior violates our policies, we may take one or more of the following actions, generally progressing in severity:

  1. Labels or context: Content may be labeled with warnings, informational notices, or additional context.
  2. Warnings: Users may receive warnings or educational notices about policy violations.
  3. Feature-level restrictions: Specific capabilities may be restricted (e.g., posting, commenting, messaging, or reporting).
  4. Content removal: Content that violates our policies may be removed from the platform.
  5. Temporary suspension: Account access may be suspended for a defined period.
  6. Permanent suspension or termination: Severe or repeated violations may result in permanent account termination.

The specific action taken depends on the nature of the violation, the user's history, and other contextual factors. We aim to apply enforcement proportionally to the severity of the violation. Severe or repeated violations may result in escalation to more serious actions without prior steps.

Enforcement actions may be reversed if we determine an error was made or circumstances have changed.

Decision Making

Our enforcement decisions are guided by our published policies, including our Terms of Service and Content Policy.

When evaluating potential violations, we consider:

  • Whether the content or behavior clearly violates our stated policies
  • The context in which the content appeared
  • The potential for harm
  • Whether the violation was intentional or accidental
  • The user's prior history on the platform

Where policies overlap or conflict, the provision most protective of safety or legal compliance generally governs. Context may be considered (including educational, journalistic, or satirical purposes), but context does not create blanket exemptions.

Appeals

Users who believe an enforcement action was taken in error may submit an appeal. Our appeals process provides a pathway to request reconsideration of enforcement decisions.

When reviewing an appeal, we re-examine the original decision in light of:

  • Any new information or context provided by the appellant
  • Whether the original decision was consistent with our policies
  • Whether circumstances have changed

Upon review, Mor may uphold, modify, or reverse the original decision. Certain urgent or legally required actions may not be eligible for appeal. In some cases, Mor may provide only limited detail about the basis for a decision in order to protect safety, privacy, or abuse prevention mechanisms.

Submitting an appeal does not guarantee a different outcome. If the original decision was consistent with our policies, the appeal may be denied.

Role of Automation

Mor uses automated systems to help identify potential policy violations and prioritize reports. These systems assist in processing the volume of content and reports on our platform.

What automation does

  • Identifies content that may require review
  • Prioritizes reports based on severity signals
  • Detects known policy-violating patterns
  • Filters obvious spam and abuse

In some cases, automated systems may take limited immediate action (for example, suppressing obvious spam) without human review. In other cases, automation surfaces signals and routes items for human evaluation.

What we do not claim

  • We do not guarantee human review of every report
  • We do not claim our automated systems are perfect or error-free
  • We do not disclose specific details about our detection methods, as doing so could enable evasion

We are transparent that our systems involve both automated and human components. The specific mix depends on the type of content and the severity of potential violations. Our practices may evolve as Mor grows or as applicable laws and circumstances change.

Contact the Safety Center

For safety concerns, enforcement questions, or policy clarifications, reach out to the Safety Center.

If you are unable to use the Safety Center form, you may email safety@themorapp.com.