Is Cursor-Generated Code Secure? We Scanned 100 Repos to Find Out
We ran ShipSafe on 100 real Cursor-built apps. 67% had at least one critical vulnerability. Here's what we found and how to fix it.
Cursor has become the go-to AI coding environment for thousands of developers. It writes entire functions, scaffolds full-stack apps, and can feel like pair programming with a senior engineer. But there is a critical question most developers skip: is the code it generates actually secure?
We decided to find out. Over three weeks, we ran ShipSafe security scans on 100 real, publicly available repositories built primarily with Cursor. These were not toy projects. They included SaaS apps, internal tools, e-commerce platforms, and API backends deployed to production.
The results were sobering: 67% of Cursor-built apps had at least one critical vulnerability. This aligns with a Stanford University study which found that developers using AI code assistants produce significantly less secure code, with roughly 45% of AI-assisted code containing vulnerabilities (Perry et al., "Do Users Write More Insecure Code with AI Assistants?", CCS 2023).
Our Methodology
We selected 100 repositories from GitHub that met the following criteria: the project used Cursor as the primary development tool (identified via .cursorrules files, Cursor-style commit patterns, or explicit mentions in READMEs), had been updated within the last 90 days, and contained a deployable web application.
The breakdown: 38 Next.js apps, 24 React + Express backends, 19 Supabase-backed apps, 11 Python/FastAPI services, and 8 miscellaneous stacks. We ran each through ShipSafe, which performs static analysis, secret detection, authentication flow analysis, and authorization boundary checking.
Every finding was manually verified by our team to eliminate false positives. What follows are the four most dangerous patterns we found.
1. Insecure Direct Object References (IDOR) — Found in 43% of Apps
The single most common vulnerability we found was IDOR (CWE-639). This happens when an API endpoint accepts a user-supplied ID and returns or modifies data without verifying that the requesting user actually owns that resource.
Cursor generates this pattern constantly because it optimizes for functionality, not authorization. When you ask it to “create an API to get user profile by ID,” it does exactly that — without scoping the query to the authenticated user.
Vulnerable Code (Cursor-generated)
// API route: /api/invoices/[id]
export async function GET(req, { params }) {
  const invoice = await db.invoice.findUnique({
    where: { id: params.id },
  });
  return Response.json(invoice);
}
// Any authenticated user can access ANY invoice
// by simply changing the ID in the URL

Secure Version
// API route: /api/invoices/[id]
export async function GET(req, { params }) {
  const session = await getServerSession();
  if (!session) return Response.json({ error: "Unauthorized" }, { status: 401 });
  const invoice = await db.invoice.findUnique({
    where: {
      id: params.id,
      userId: session.user.id, // Scope to current user
    },
  });
  if (!invoice) return Response.json({ error: "Not found" }, { status: 404 });
  return Response.json(invoice);
}

The fix is straightforward: always include an ownership check in the database query. Every findUnique, update, and delete call should include a userId or organizationId constraint tied to the authenticated session. This is listed as OWASP Top 10 #1: Broken Access Control.
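One way to make ownership scoping hard to forget is to centralize it in a small helper that every route calls. The sketch below is illustrative, not part of ShipSafe or the scanned repos — the in-memory `invoices` array stands in for a real Prisma client — but the shape of the API is the point: the `userId` filter comes from the session, never from the request.

```typescript
type Session = { user: { id: string } } | null;
type Invoice = { id: string; userId: string; amount: number };

// Hypothetical in-memory store standing in for the real database client.
const invoices: Invoice[] = [{ id: "inv_1", userId: "user_a", amount: 100 }];

// Scoped lookup: callers cannot fetch a row they do not own, because
// the ownership filter is injected from the session, not the URL.
function findOwnedInvoice(session: Session, id: string): Invoice | null {
  if (!session) return null;
  return invoices.find((i) => i.id === id && i.userId === session.user.id) ?? null;
}
```

With a helper like this, an attacker who changes the ID in the URL gets `null` (a 404) instead of another user's invoice.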
2. Inverted Authentication Conditions — Found in 31% of Apps
This is a subtle but devastating bug. Cursor sometimes generates authentication guards with the conditional logic reversed — granting access when the user is not authenticated and blocking them when they are.
We found this in middleware files, API route handlers, and React component guards. The issue stems from how LLMs handle negation in natural language. When you say “only allow authenticated users,” the model sometimes interprets this as checking for the absence of a session rather than its presence.
Vulnerable Code (Cursor-generated)
// middleware.ts
export function middleware(req) {
  const token = req.cookies.get("session");
  // BUG: This allows unauthenticated users through!
  if (token) {
    return NextResponse.redirect("/login");
  }
  return NextResponse.next();
}

Secure Version
// middleware.ts
export function middleware(req) {
  const token = req.cookies.get("session");
  // Correct: redirect if NO token
  if (!token) {
    return NextResponse.redirect("/login");
  }
  return NextResponse.next();
}

A single missing ! operator can expose every protected route in your application. This is especially dangerous because it passes basic manual testing — if you test while logged in, everything works fine. The vulnerability only manifests when accessed without authentication.
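One inexpensive defense is to pull the guard condition out into a pure function and unit-test it from the unauthenticated side — the exact case manual testing misses. A minimal sketch (the middleware wiring shown in comments is an assumption about your setup, not required code):

```typescript
// Pure decision function: trivially testable without running Next.js.
// Returns "allow" only when a session token is actually present.
function authDecision(token: string | undefined): "allow" | "redirect" {
  return token ? "allow" : "redirect";
}

// middleware.ts would then be a thin wrapper, e.g.:
//   const decision = authDecision(req.cookies.get("session")?.value);
//   if (decision === "redirect") return NextResponse.redirect("/login");
//   return NextResponse.next();
```

A one-line test asserting that `authDecision(undefined)` returns `"redirect"` would have caught the inverted condition above before it shipped.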
3. Frontend-Only Role Checks — Found in 28% of Apps
Cursor frequently generates role-based access control entirely in the frontend. It will conditionally render admin panels, hide buttons, or redirect users based on a role field stored in the client-side session — but the underlying API endpoints accept requests from anyone.
Vulnerable Pattern
// Frontend component - role check only here
function AdminPanel() {
  const { user } = useAuth();
  if (user.role !== "admin") return <Redirect to="/" />;
  return <DangerousAdminTools />;
}

// API route - NO role check!
export async function DELETE(req, { params }) {
  await db.user.delete({ where: { id: params.id } });
  return Response.json({ success: true });
}
// Anyone can call DELETE /api/users/[id] directly

Secure Version
// API route - enforce role server-side
export async function DELETE(req, { params }) {
  const session = await getServerSession();
  if (!session) return Response.json({ error: "Unauthorized" }, { status: 401 });
  if (session.user.role !== "admin") {
    return Response.json({ error: "Forbidden" }, { status: 403 });
  }
  await db.user.delete({ where: { id: params.id } });
  return Response.json({ success: true });
}

The rule is simple: the frontend controls what users see, the backend controls what users can do. Every privileged action must be verified server-side. This maps to CWE-862: Missing Authorization.
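A reusable server-side guard keeps the 401-versus-403 logic consistent across routes. This is a hypothetical sketch, not ShipSafe output; the `User` type models whatever your session library returns:

```typescript
type User = { id: string; role: "admin" | "member" };

// Returns the HTTP status to reject with, or null when the request
// may proceed. 401 = not authenticated; 403 = authenticated but
// lacking the required role.
function authorize(user: User | null, requiredRole: User["role"]): number | null {
  if (!user) return 401;
  if (user.role !== requiredRole) return 403;
  return null;
}
```

Each privileged route handler then starts with a call like `authorize(session?.user ?? null, "admin")` and returns early when the result is non-null.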
4. Hardcoded Secrets — Found in 22% of Apps
When you ask Cursor to set up an integration — Stripe, SendGrid, Supabase, OpenAI — it often generates code with placeholder secrets that developers then replace with real keys and forget to move to environment variables. Even worse, sometimes Cursor pulls actual API keys from the context window if they appear in other files.
Vulnerable Code
// lib/stripe.ts
import Stripe from "stripe";
export const stripe = new Stripe(
  "sk_live_51N8x...", // Hardcoded secret key
  { apiVersion: "2024-04-10" }
);

// lib/supabase.ts
const supabase = createClient(
  "https://abc.supabase.co",
  "eyJhbGciOiJIUzI1NiI..." // Service role key in client code!
);

Secure Version
// lib/stripe.ts
import Stripe from "stripe";
export const stripe = new Stripe(
  process.env.STRIPE_SECRET_KEY!,
  { apiVersion: "2024-04-10" }
);

// lib/supabase-server.ts (server-only!)
import "server-only";
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

If these secrets are committed to a Git repository — even a private one — they become a permanent part of the Git history. According to GitGuardian's 2024 State of Secrets Sprawl report, over 12.8 million new secret occurrences were detected on public GitHub repos in a single year.
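Pattern matching is how most secret scanners detect leaks like these. The sketch below is a toy version with two illustrative regexes — production scanners typically combine much larger rule sets with entropy analysis — but it shows the basic mechanism:

```typescript
// Toy secret scanner: each entry pairs a human-readable name with an
// illustrative (not exhaustive) regex for that credential format.
const SECRET_PATTERNS: Array<[string, RegExp]> = [
  ["Stripe live secret key", /sk_live_[A-Za-z0-9]{10,}/],
  ["JWT-shaped token", /eyJ[A-Za-z0-9_-]{8,}\.[A-Za-z0-9_-]+/],
];

// Returns the names of every pattern that matches the given source text.
function findSecrets(source: string): string[] {
  return SECRET_PATTERNS.filter(([, re]) => re.test(source)).map(([name]) => name);
}
```

Running a check like this in a pre-commit hook flags a hardcoded `sk_live_...` string before it ever reaches Git history.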
Full Results Breakdown
- 67% had at least one critical vulnerability
- 43% had IDOR issues
- 31% had auth logic bugs
- 22% had hardcoded secrets
Beyond the top four, we also found: SQL injection vectors in 11% of apps using raw queries, Cross-Site Scripting (XSS) via dangerouslySetInnerHTML in 9%, missing rate limiting on authentication endpoints in 54%, and absent CSRF protection in 39%.
The average Cursor-built app in our dataset had 3.2 security issues, with the most vulnerable app containing 14 distinct findings.
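Of the secondary findings, missing rate limiting on authentication endpoints was the most widespread. A fixed-window limiter takes only a few lines; this sketch keeps counts in process memory (a real deployment would back it with Redis or similar, since in-memory state resets on redeploy):

```typescript
// Fixed-window login rate limiter, keyed by client IP.
// Illustrative sketch: allows at most MAX_ATTEMPTS per window.
const attempts = new Map<string, { count: number; windowStart: number }>();
const WINDOW_MS = 60_000; // 1-minute window
const MAX_ATTEMPTS = 5;

function allowLogin(ip: string, now: number = Date.now()): boolean {
  const entry = attempts.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // New window for this IP: reset the counter.
    attempts.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}
```

A login route would call `allowLogin(ip)` before checking credentials and return a 429 when it comes back false.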
Why Cursor Generates Insecure Code
This is not a Cursor-specific problem. It is an LLM problem. Large language models are trained on millions of code repositories, and the majority of open-source code does not follow security best practices. The model learns to produce code that works — not code that is secure.
There are three structural reasons this happens:
- Training data bias: Most GitHub code prioritizes functionality over security. The model replicates this bias, generating the “happy path” and skipping defensive checks.
- Missing security context: Unless your prompt explicitly mentions security requirements, Cursor has no reason to add authorization checks, rate limiting, or input validation.
- Context window fragmentation: Cursor works file-by-file. It may generate a secure API route but miss that the middleware in another file already handles (or fails to handle) authentication differently.
For a deeper look at why all AI code assistants share this problem, read our guide on AI-generated code security risks.
How to Secure Your Cursor-Generated Code
You do not have to stop using Cursor. But you do need to add a security step to your workflow. Here are five concrete actions:
- Add security instructions to your .cursorrules file. Tell Cursor to always add authentication checks, use parameterized queries, and load secrets from environment variables. This significantly improves output quality.
- Review every auth-related code path manually. Middleware, API route guards, and role checks deserve line-by-line review. Automated code generation and manual auth review is a strong combination.
- Run a security scan before every deploy. Use ShipSafe to scan your repository. It catches IDOR, auth logic bugs, exposed secrets, and injection vulnerabilities in under two minutes.
- Use server-only imports for sensitive operations. In Next.js, import "server-only" at the top of files containing secrets or admin logic. This prevents accidental client-side bundling.
- Follow our security checklist. We created a 20-item security checklist for vibe coding that covers secrets, auth, injection, XSS, and configuration. Pin it next to your monitor.
Conclusion
Cursor is a remarkable tool. It makes developers dramatically faster. But speed without security is a liability. The 67% critical vulnerability rate we found is not a reason to abandon AI coding — it is a reason to add automated security checks to your workflow.
The developers who will win in the AI era are not the ones who reject AI tools or the ones who blindly trust them. They are the ones who use AI to write code fast and then verify it before shipping. That is exactly what ShipSafe was built for.
Want to check your own app?
Paste your GitHub URL and get a security report in under 2 minutes. Free scan, no credit card required.
Scan My App Free