🚀 Preview Site — Launching April 1, 2026

Security & Privacy

Your Data
Is Sacred.

Most AI tools use your data to improve their models. MindBacklog doesn't. AES-256 encryption at rest. TLS 1.3 in transit. Strict tenant isolation at the database level. Your product intelligence stays yours — exclusively.

Infrastructure Security

Built Secure
From Day One

🔒
Encryption at Rest

All data is encrypted at rest using AES-256 encryption. Database, file storage, and backups — every byte is protected even when idle.

🔐
Encryption in Transit

All data transmitted between your browser and our servers uses TLS 1.3 with perfect forward secrecy. No unencrypted connections, ever.

🏗️
Tenant Isolation

Strict data isolation between organizations. Your product data, feedback signals, and MIND context are completely separated from other tenants at the database level.
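MindBacklog's internal schema isn't public, so the following is a hypothetical sketch only. The core idea behind database-level tenant isolation is that every read path carries a mandatory organization filter, with no code path that can return another tenant's rows:

```python
class FeedbackRepository:
    """Toy in-memory store illustrating the tenant-isolation principle.

    Every read is filtered by the caller's organization ID; there is
    deliberately no method that returns rows across organizations.
    """

    def __init__(self):
        self._rows = []  # each row: {"org_id": ..., "text": ...}

    def add(self, org_id: str, text: str) -> None:
        self._rows.append({"org_id": org_id, "text": text})

    def list_for(self, org_id: str) -> list:
        # The tenant filter is applied unconditionally on every query.
        return [r for r in self._rows if r["org_id"] == org_id]
```

In a real system the same guarantee is typically enforced in the database itself (for example with row-level security policies) rather than only in application code.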

👤
Role-Based Access

Granular role-based access control: Organization-level roles (Admin, User, Viewer) and Product-level roles. Team members see only what they should.
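The role names above (Admin, User, Viewer) come from this page; the ranking logic below is a hypothetical sketch of how such a hierarchy is commonly checked, not MindBacklog's actual implementation:

```python
# Higher rank implies all permissions of lower ranks.
ROLE_RANK = {"Viewer": 0, "User": 1, "Admin": 2}

def has_access(member_role: str, required_role: str) -> bool:
    """True if member_role grants at least required_role's permissions."""
    return ROLE_RANK[member_role] >= ROLE_RANK[required_role]
```

A Viewer can never perform a User-level action, while an Admin can perform any action available to the roles below it.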

📋
Audit Logging

Comprehensive audit trail of all administrative actions — invitations, role changes, product modifications. Know who did what and when.

💾
Regular Backups

Automated daily backups with point-in-time recovery capability. Your data is durably stored across multiple availability zones.

Compliance & Standards

Meeting the Bar

We're building MindBacklog to meet enterprise compliance standards from the ground up.

🛡️
SOC 2 Type II

In Progress

🇪🇺
GDPR Compliant

Operational

📜
CCPA Compliant

Operational

🔑
OAuth 2.0

Supported

AI & Your Data

Your Data
Is Not
Training Data.

We understand the concern. Here's our commitment on how AI interacts with your product data.

✓ No Model Training

Your feedback, roadmap data, and documents are never used to train any AI models. Your product intelligence stays yours exclusively.

✓ Ephemeral Processing

When AI classifies your feedback or generates a PRD, the data is processed and returned. No customer data is retained in AI model memory or logs beyond the processing window.

✓ Context Isolation

Your MIND context engine is completely isolated per organization. One customer's product intelligence never leaks into another's AI responses or classifications.

✓ Provider Agreements

Our third-party AI provider (Google Gemini) operates under a data processing agreement that prohibits training on customer data. Locally-hosted models process data entirely within our own infrastructure, so it never leaves our control.

Infrastructure

Where Your
Data Lives

☁️
Cloud Hosted

Hosted on Railway App with enterprise-grade infrastructure and 99.9% uptime SLA. Automated deployments, managed databases, and built-in redundancy.

🌐
CDN Protected

Cloudflare CDN with DDoS protection, WAF, and rate limiting to prevent abuse and ensure availability.

📡
Monitoring

24/7 infrastructure monitoring with automated alerting. We detect and respond to issues before they impact your experience.

Access & API Security

Every Session.
Every Request.
Protected.

⏱️
Session Management

Authenticated sessions expire after inactivity. Password changes and role revocations force immediate logout across all devices. Session tokens are cryptographically signed and delivered in HttpOnly cookies.
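The page doesn't specify the signing scheme, so as one common shape only: an HMAC-signed token with an expiry claim behaves as described, since any tampering or expiry invalidates it. The secret and field names here are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical; real keys live in a secret store

def sign_session(payload: dict) -> str:
    """Encode a payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    # A real server would send this in a Set-Cookie header with the
    # HttpOnly, Secure, and SameSite attributes set.
    return (body + b"." + sig).decode()

def verify_session(token: str):
    """Return the payload if the token is authentic and unexpired, else None."""
    body, _, sig = token.encode().rpartition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload.get("exp", 0) < time.time():
        return None  # past the inactivity expiry
    return payload
```

Forcing logout on a password change or role revocation is then a matter of rejecting tokens issued before the change, which the server can check against a per-user timestamp.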

🔗
API Security

The feedback widget and integrations use token-based authentication — no user credentials are exposed. All API endpoints enforce rate limiting and input validation to prevent abuse and injection attacks.
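Rate limiting on API endpoints is often implemented as a per-token bucket; the sketch below shows the general technique, with the limits and algorithm chosen here being assumptions rather than MindBacklog's actual configuration:

```python
import time

class TokenBucket:
    """Per-key token bucket: each request spends one token; tokens
    refill continuously at `rate` per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request should receive HTTP 429
```

An API gateway would keep one bucket per API token (or per client IP), so a burst from one widget installation cannot exhaust capacity for everyone else.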

🔄
Concurrent Sessions

Organization admins have visibility into active sessions. Suspicious login activity triggers notification alerts. OAuth tokens are scoped to the minimum permissions required.

Responsible Disclosure

Found a
Vulnerability?

We take security reports seriously. If you discover a vulnerability, we want to hear about it.

How to Report

Email security@mindbacklog.com with a detailed description of the vulnerability, steps to reproduce, and any supporting evidence. Please do not publicly disclose the issue until we've had a chance to address it.

Our Commitment

We will acknowledge your report within 48 hours, provide an initial assessment within 5 business days, and keep you informed of our remediation progress. We will not take legal action against researchers who report vulnerabilities in good faith.

What Qualifies

Authentication bypasses, data exposure between tenants, XSS, CSRF, SQL injection, and any issue that compromises customer data confidentiality, integrity, or availability.

Out of Scope

Social engineering, physical attacks, denial-of-service testing, automated scanning without prior approval, and issues in third-party services we don't control (e.g., Cloudflare, Paddle).

Security
Questions?
We're Here.

For security inquiries, vulnerability reports, or compliance documentation requests, contact our team directly.

security@mindbacklog.com

Start Free Trial