What are Cryptographic Failures?
Cryptographic failures — called "Sensitive Data Exposure" in the 2017 OWASP Top 10 — cover a broad set of weaknesses that all share one outcome: sensitive data ends up readable by someone who shouldn't have access to it. The rename to "Cryptographic Failures" in 2021 (retained in 2025) is intentional: it forces developers to think about the root cause (the crypto is wrong or absent) rather than just the symptom (data got exposed).
At its core, this category breaks down into three failure modes. First, no encryption at all — data sent over HTTP, passwords stored in plaintext, PII written to unencrypted disk. Second, broken encryption — using algorithms that were retired years ago because they've been cracked. Third, correct algorithms used incorrectly — proper AES-256 but with a hardcoded key in the repository, or HTTPS everywhere but with certificate validation disabled in the mobile app.
Any one of these is sufficient to expose your users' data. The OWASP data shows this category appearing in roughly 45% of tested applications, which shouldn't be surprising — cryptography is hard, the defaults in many frameworks are insecure, and developers often don't realize what they've misconfigured until a breach happens.
- Millions of user passwords cracked after weak hashing leaks
- Payment card data intercepted from cleartext HTTP APIs
- Medical records readable via misconfigured TLS downgrade
- API keys extracted from client-side JavaScript bundles
- Session tokens readable on shared networks without HSTS
Weak and Broken Algorithms
The most clear-cut cryptographic failure is using an algorithm the security community broke decades ago. These don't quietly become less secure over time — they fail hard. MD5 and SHA-1 as password hashes are completely broken. DES has a 56-bit key that can be brute-forced in hours on commodity hardware. RC4 has statistical biases that make ciphertext recoverable. Yet all four still show up regularly in production code.
The Password Hashing Problem
In 2013, Adobe suffered one of the most instructive breaches in history. Attackers exfiltrated around 153 million user records. Adobe had encrypted (not hashed) the passwords with 3DES, used the same key for every password, and used ECB mode — meaning identical passwords produced identical ciphertext. Password hints were stored in plaintext alongside the data, and within days researchers had combined them with the repeating ciphertext blocks to recover the most common passwords. The root cause wasn't just a bad algorithm choice; it was a fundamental misunderstanding of the difference between encryption and hashing for passwords, combined with a catastrophically insecure encryption mode.
153 million accounts compromised. Passwords "encrypted" with 3DES in ECB mode — identical passwords produced identical ciphertext, making the most common passwords trivially recoverable via frequency analysis. Estimated cost: $1.1M settlement plus incalculable reputational damage. The lesson: password hashing and password encryption are fundamentally different operations.
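Why ECB mode with a single key is so damaging can be shown with a toy model. The function below is NOT real encryption, just a deterministic stand-in that reproduces ECB's fatal property: identical inputs always give identical outputs, so an attacker can count repeats without decrypting anything.

```python
import hashlib
from collections import Counter

def ecb_like_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stand-in for ECB-mode encryption: deterministic, so the
    same (key, plaintext) pair always yields the same ciphertext.
    NOT real encryption; it only models ECB's fatal property."""
    return hashlib.sha256(key + plaintext).digest()

key = b"one-key-for-everyone"  # Adobe's mistake: a single key for all users
passwords = [b"123456", b"password", b"123456", b"qwerty", b"123456"]

ciphertexts = [ecb_like_encrypt(key, p) for p in passwords]

# Identical passwords produce identical ciphertexts...
assert ciphertexts[0] == ciphertexts[2] == ciphertexts[4]

# ...so frequency analysis works with zero decryption effort
most_common_ct, count = Counter(ciphertexts).most_common(1)[0]
print(f"most repeated ciphertext appears {count} times")  # -> 3 times
```

Pair this with plaintext password hints for even one user, and every other user sharing that ciphertext block is compromised at once.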
Passwords require slow, adaptive, one-way functions specifically designed to resist brute force. MD5 and SHA-1 are fast — that's the problem. A modern GPU can compute billions of MD5 hashes per second. A 6-character alphanumeric password hashed with MD5 takes seconds to crack exhaustively. bcrypt, scrypt, Argon2, and PBKDF2 are designed to be computationally expensive, with a tunable cost factor you can increase as hardware gets faster.
```python
import hashlib

# MD5 — broken, fast, trivially brute-forced
password_hash = hashlib.md5(password.encode()).hexdigest()

# SHA-1 — still fast, also broken for security use
password_hash = hashlib.sha1(password.encode()).hexdigest()

# SHA-256 — better but still wrong for passwords (too fast)
password_hash = hashlib.sha256(password.encode()).hexdigest()
```
```python
import bcrypt

# bcrypt — slow by design, auto-generates salt, cost tunable
salt = bcrypt.gensalt(rounds=12)  # higher rounds = slower
password_hash = bcrypt.hashpw(password.encode(), salt)

# Verify
is_valid = bcrypt.checkpw(password.encode(), stored_hash)

# Or use Argon2 (winner of the Password Hashing Competition)
from argon2 import PasswordHasher

ph = PasswordHasher(time_cost=2, memory_cost=65536, parallelism=2)
password_hash = ph.hash(password)
```
Algorithm Reference: What's Safe and What Isn't
| Algorithm | Use Case | Status | Replace With |
|---|---|---|---|
| MD5 | Hashing | Broken | SHA-256 / Argon2 (passwords) |
| SHA-1 | Hashing | Broken | SHA-256 / SHA-3 |
| DES | Encryption | Broken | AES-256-GCM |
| RC4 | Stream cipher | Broken | ChaCha20-Poly1305 |
| 3DES | Encryption | Deprecated | AES-256-GCM |
| RSA-1024 | Key exchange | Weak | RSA-2048+ / ECDH P-256 |
| AES-128-CBC | Encryption | Acceptable | AES-256-GCM (authenticated) |
| AES-256-GCM | Encryption | Recommended | — |
| Argon2id | Password hashing | Recommended | — |
| ChaCha20-Poly1305 | Encryption | Recommended | — |
Cleartext Transmission
Using HTTP instead of HTTPS for anything involving user data is the simplest and most preventable cryptographic failure. It's not subtle — every byte of the request and response is readable by anyone on the network path: the user's ISP, coffee shop Wi-Fi, corporate proxy, VPN provider, or an attacker running a simple network capture.
The problem isn't just forms with method="POST" over HTTP. Login pages served over HTTP let attackers steal the credentials before they even reach the server. Mixed content — an HTTPS page loading scripts or making API calls over HTTP — defeats the encryption entirely. Mobile apps that disable certificate validation (often done to pass testing with a self-signed cert and then accidentally shipped) let anyone with a proxy tool intercept traffic without any visual warning to the user.
Heartbleed (CVE-2014-0160) remains the most impactful TLS vulnerability in history. A buffer over-read bug in OpenSSL's heartbeat extension let attackers read up to 64KB of server memory per request — repeatedly, without leaving logs. Private keys, session tokens, passwords from in-flight requests — all readable. An estimated 500,000+ servers were vulnerable at peak, and the bug had existed for two years. It didn't break the TLS algorithm — it exploited the implementation, proving that correct algorithm choices alone aren't sufficient.
TLS Configuration Matters
Using HTTPS is necessary but not sufficient. Old TLS protocol versions have known vulnerabilities: SSLv3 is broken by POODLE (2014), and TLS 1.0 and 1.1 are deprecated by all major browsers and formally retired by RFC 8996. Cipher suite selection matters too — suites kept enabled for legacy compatibility (EXPORT ciphers, RC4, CBC mode without authentication) have known attacks. BEAST, Lucky Thirteen, BREACH, and CRIME all exploit specific cipher or compression configurations in older TLS deployments.
```nginx
# Enforce minimum TLS 1.2, prefer 1.3
ssl_protocols TLSv1.2 TLSv1.3;

# Modern cipher suites only — no RC4, no export, no CBC for TLS 1.2
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;

# HSTS — tell browsers to never use HTTP for this domain
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# OCSP stapling — faster, more private cert validation
ssl_stapling on;
ssl_stapling_verify on;
```
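The protocol floor isn't only a server concern: clients can refuse old versions too. A minimal sketch with Python's stdlib `ssl` module, mirroring the server-side config:

```python
import ssl

# Client-side counterpart of "ssl_protocols TLSv1.2 TLSv1.3":
# refuse to negotiate anything older than TLS 1.2
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables the checks you want to keep
assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED
```

Pass this context to your HTTP client (e.g. `urllib.request.urlopen(url, context=context)`) and connections to servers stuck on TLS 1.0/1.1 fail during the handshake instead of silently downgrading.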
Poor Key Management
You can use AES-256-GCM perfectly and still have completely insecure encryption if the key is hardcoded in source code, committed to a public repository, or derived from a weak passphrase. Key management is where most real-world cryptographic failures occur in otherwise well-intentioned codebases.
Hardcoded Secrets
GitHub's secret scanning feature found over 1 million leaked secrets in 2023 alone. The pattern is always the same: a developer hardcodes an API key or encryption key to "test something quickly," commits it, and it sits in git history forever even after it's deleted from the latest version. Tools like truffleHog, gitleaks, and Semgrep can find these — but so can attackers with automated scrapers watching every public commit.
```python
# Never do this — key is in source code, in git history, in every clone
SECRET_KEY = "my-super-secret-key-123"
DB_PASSWORD = "postgres123"
AWS_SECRET = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Also bad — weak key derivation from guessable input
key = hashlib.md5(b"companyname2024").digest()
```
```python
import os
from cryptography.fernet import Fernet

# Load from environment, fail loudly if missing
SECRET_KEY = os.environ["SECRET_KEY"]  # raises KeyError if not set

# Or use a secrets manager (AWS Secrets Manager, HashiCorp Vault)
# For local dev, .env files loaded by python-dotenv (never committed)

# Generate proper random keys, don't derive from passwords
key = Fernet.generate_key()  # cryptographically random 32 bytes

# Add .env to .gitignore
# echo ".env" >> .gitignore
```
Initialization Vectors and Nonces
Even with a strong key, AES-CBC or AES-GCM encryption becomes unsafe if you reuse initialization vectors (IVs) or nonces. AES-GCM nonce reuse with the same key is catastrophic — it breaks both confidentiality and message authentication. The correct approach is to generate a fresh cryptographically random IV/nonce for every encryption operation and store it alongside the ciphertext (it doesn't need to be secret, just unique).
```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# Fresh 12-byte nonce every time — never reuse
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# Store nonce alongside ciphertext — it's not secret
stored = nonce + ciphertext

# Decrypt
nonce, ciphertext = stored[:12], stored[12:]
plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data)
```
Certificate and PKI Issues
TLS certificates are the trust anchor for HTTPS. When certificate validation is skipped, bypassed, or misconfigured, the entire security model collapses — an attacker can intercept all traffic even over a "secure" HTTPS connection.
Disabled Certificate Validation
This shows up constantly in mobile apps and internal tooling. A developer runs into certificate errors during testing (self-signed certs, staging environments, corporate proxies) and adds a one-liner to disable validation. The fix gets committed, the code ships to production, and now every user's traffic is interceptable with any man-in-the-middle proxy.
```python
import requests

# Disables all certificate validation — NEVER do this in production
response = requests.get(url, verify=False)

# This silences the InsecureRequestWarning but doesn't fix the problem
requests.packages.urllib3.disable_warnings()
```
```python
import requests

# Default — verify=True is the default but explicit is better
response = requests.get(url, verify=True)

# For internal CAs: provide the CA bundle
response = requests.get(url, verify="/path/to/internal-ca-bundle.pem")

# For mobile apps: implement certificate pinning
# Pin the expected public key hash — fail if it doesn't match
EXPECTED_PIN = "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
```
Expired and Misconfigured Certificates
Certificate expiry is embarrassing and increasingly preventable with tools like Let's Encrypt and certbot, but it still causes outages and security gaps. More subtle are misconfigured certificate chains — missing intermediate certificates that cause validation errors in some clients but not others — or certificates issued for the wrong domain names, papered over with wildcard certs that grant broader trust than intended.
Certificate Transparency (CT) logs are worth knowing about: every publicly trusted certificate is logged to CT servers, which means attackers can enumerate all certificates ever issued for your domains — including ones you issued for internal subdomains and forgot about. Run crt.sh searches against your domains regularly to see what attackers can see.
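crt.sh also exposes its results as JSON (append `output=json` to the query), which makes the check scriptable. The sketch below parses sample data in that shape — a list of records whose `name_value` field can hold several newline-separated names — rather than hitting the live endpoint:

```python
def subdomains_from_crtsh(records: list[dict]) -> set[str]:
    """Extract unique host names from crt.sh JSON output.
    Each record's "name_value" may hold several newline-separated names."""
    names: set[str] = set()
    for record in records:
        for name in record.get("name_value", "").splitlines():
            names.add(name.strip().lower().lstrip("*."))
    return names

# Sample data in crt.sh's JSON shape, not a live response
sample = [
    {"name_value": "www.example.com\napi.example.com"},
    {"name_value": "*.internal.example.com"},
    {"name_value": "vpn-old.example.com"},
]

print(sorted(subdomains_from_crtsh(sample)))
```

Fetching the live data is one `GET` to `https://crt.sh/?q=%25.yourdomain.com&output=json`; entries like the forgotten `vpn-old` host above are exactly what attackers go looking for.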
Sensitive Data in Transit and at Rest
Beyond passwords, many applications transmit or store sensitive data without considering whether it needs encryption — or without applying it correctly. PII (names, addresses, SSNs, birthdates), payment data (card numbers, CVVs), health records, authentication tokens, private keys. Any of these stored in plaintext or transmitted over HTTP constitute a cryptographic failure.
Database Encryption
Encrypting sensitive fields in the database protects against SQL injection that reads the raw data, insider threats with database access, and breaches via backup files or snapshots. The distinction between column-level encryption (you control the keys, and data is encrypted application-side before it hits the DB) and transparent data encryption (TDE — the DB engine encrypts at rest, but DB admins and anyone with DB access can still read the data) matters: TDE protects against physical disk or backup theft, not against application-layer breaches.
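A sketch of the column-level approach, using the same `cryptography` package as the earlier examples and a hypothetical `users` row; the DB only ever sees ciphertext:

```python
from cryptography.fernet import Fernet

# In production the key comes from a secrets manager, not from code;
# generated inline here only to keep the sketch self-contained
field_key = Fernet(Fernet.generate_key())

def encrypt_field(value: str) -> bytes:
    """Encrypt application-side, before the value reaches the DB.
    Unlike TDE, a DBA or a SQL injection sees only ciphertext."""
    return field_key.encrypt(value.encode())

def decrypt_field(token: bytes) -> str:
    return field_key.decrypt(token).decode()

# Hypothetical users row: encrypt the sensitive column, leave the rest
row = {"username": "alice", "ssn": encrypt_field("123-45-6789")}
assert row["ssn"] != b"123-45-6789"  # stored value is ciphertext
assert decrypt_field(row["ssn"]) == "123-45-6789"
```

Note that Fernet includes a random IV per call, so two users with the same SSN get different ciphertexts — exactly the property ECB mode lacks.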
A common gap: developers encrypt database columns but forget about logs. Error logs that capture full request bodies, analytics events that include user data, or debug logs that dump object state can all leak sensitive data in plaintext. Audit every location where sensitive data might end up — not just the database.
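One mitigation for the logging gap is a redaction filter attached to your log handlers. A minimal stdlib sketch; the patterns are illustrative and would be tuned to your own data:

```python
import logging
import re

class RedactingFilter(logging.Filter):
    """Mask values that look like secrets before a record is emitted.
    The patterns here are illustrative, not exhaustive."""
    PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-shaped values
        re.compile(r"(?i)(password|token)=\S+"),  # key=value secrets
    ]

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern in self.PATTERNS:
            msg = pattern.sub("[REDACTED]", msg)
        record.msg, record.args = msg, None
        return True

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

logger.warning("login failed for ssn=123-45-6789 password=hunter2")
# Emits: login failed for ssn=[REDACTED] [REDACTED]
```

A filter catches accidents at the last moment; it doesn't replace auditing what you log in the first place.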
Cookies and Session Tokens
Session tokens transmitted over HTTP are readable. But even over HTTPS, cookies without the Secure flag can be sent over HTTP if the browser makes an HTTP request — which can happen via redirect chains or HTTP subresources. The Secure flag tells the browser to only send the cookie over HTTPS. HttpOnly prevents JavaScript from reading it. Both should be set on every session cookie, every auth token cookie, every sensitive cookie.
```javascript
res.cookie('session', token, {
  httpOnly: true,      // no JS access
  secure: true,        // HTTPS only
  sameSite: 'Strict',  // CSRF protection
  maxAge: 3600000      // 1 hour in ms
});
```
Common Mistakes That Fly Under the Radar
Using Encryption When You Need Signing
Encryption provides confidentiality — only the intended recipient can read the data. Signing provides integrity and authenticity — anyone holding the verification key can confirm the data came from the claimed sender and wasn't tampered with. JWTs are often used as session tokens, but if you sign them with a weak key or accept `alg: none` (a real bypass that worked against numerous JWT libraries), the signature is worthless. If you need both confidentiality and integrity, use authenticated encryption (AES-GCM, ChaCha20-Poly1305) — unauthenticated encryption like AES-CBC provides confidentiality without integrity, leaving you vulnerable to padding oracle and bit-flipping attacks.
Random Number Generation
Not all randomness is created equal. Math.random() in JavaScript, Python's random module, PHP's rand() — these are pseudo-random number generators suitable for games and simulations, not for generating passwords, session tokens, CSRF tokens, or cryptographic keys. Use crypto.randomBytes() in Node.js, secrets module in Python, random_bytes() in PHP, SecureRandom in Java.
```python
import random
import string

# random module is NOT cryptographically secure
token = ''.join(random.choices(string.ascii_letters, k=32))
otp = str(random.randint(100000, 999999))  # predictable with seed
```
```python
import secrets

# secrets module uses the OS CSPRNG — safe for tokens, passwords, OTPs
token = secrets.token_hex(32)          # 64 hex chars, 256 bits
token_url = secrets.token_urlsafe(32)  # URL-safe base64
otp = str(secrets.randbelow(900000) + 100000)  # 6-digit OTP
```
Caching Sensitive Responses
HTTP caching is great for performance but a disaster for sensitive data. Browsers cache pages and API responses unless explicitly told not to. Proxies and CDNs can cache authenticated responses and serve them to unauthenticated users. Shared computers cache pages in browser history accessible after logout. API responses containing account details, payment info, health data, or tokens should carry Cache-Control: no-store.
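One way to make the rule hard to forget is a small helper applied to every sensitive response. `harden_response_headers` is hypothetical, not from any framework:

```python
# Hypothetical helper: headers any sensitive response should carry
SENSITIVE_HEADERS = {
    "Cache-Control": "no-store",  # never written to any cache
    "Pragma": "no-cache",         # legacy HTTP/1.0 intermediaries
}

def harden_response_headers(headers: dict[str, str]) -> dict[str, str]:
    """Overwrite permissive caching headers on a sensitive response."""
    hardened = dict(headers)
    hardened.pop("Expires", None)  # drop conflicting freshness hints
    hardened.update(SENSITIVE_HEADERS)
    return hardened

resp = {"Content-Type": "application/json",
        "Cache-Control": "public, max-age=3600"}
print(harden_response_headers(resp)["Cache-Control"])  # no-store
```

The key design point is overwrite, not merge: a pre-existing `public, max-age=3600` must lose, unconditionally.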
How to Find Cryptographic Failures
Some cryptographic failures are easy to spot — checking TLS version and cipher suites is a one-command operation. Others require application-level context that automated tools miss: a login form over HTTPS is fine, but does the authentication token get transmitted in a URL parameter (which ends up in server logs, browser history, and Referer headers)?
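That URL-parameter check is easy to automate over access logs or crawled URLs. A sketch with illustrative parameter names — extend the set for your own application:

```python
from urllib.parse import parse_qs, urlsplit

# Illustrative list; add your own application's parameter names
SENSITIVE_PARAMS = {"token", "access_token", "session", "api_key", "password"}

def leaky_params(url: str) -> set[str]:
    """Flag query parameters that put secrets into server logs,
    browser history, proxy logs, and Referer headers."""
    query = parse_qs(urlsplit(url).query)
    return SENSITIVE_PARAMS & set(query)

print(leaky_params("https://app.example.com/reset?token=abc123&lang=en"))
# -> {'token'}
```

Run this over a day of access logs and any non-empty result is a finding: those values are already persisted in plaintext wherever the logs live.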
Manual review of authentication flows, password storage mechanisms, and data handling code catches what scanners miss. Automated tools can identify: weak TLS configuration, missing HSTS, mixed content, missing Secure/HttpOnly cookie flags, plaintext HTTP endpoints accepting credentials, and servers that negotiate known-weak protocol versions or cipher suites during the TLS handshake.
For code-level detection, static analysis tools like Semgrep with security-focused rulesets can find MD5/SHA1 usage in security contexts, hardcoded strings matching key patterns, missing salt in hash functions, and disabled certificate validation.
Prevention Checklist
- Classify data — identify what's sensitive before deciding how to protect it. Not all data needs the same protection level.
- No sensitive data in URLs — query parameters end up in server logs, browser history, proxy logs, and Referer headers.
- Enforce HTTPS everywhere — HSTS with `includeSubDomains` and `preload`, HTTP → HTTPS redirect, no mixed content.
- TLS 1.2 minimum — disable TLS 1.0/1.1/SSLv3. Use modern cipher suites, disable export ciphers and RC4.
- Proper password hashing — bcrypt, Argon2id, or scrypt. Never MD5/SHA-1/SHA-256 for passwords.
- Never hardcode secrets — environment variables, secrets managers, or vault systems. Rotate any secret that may have been exposed.
- Authenticated encryption — AES-256-GCM or ChaCha20-Poly1305. Never unauthenticated modes like AES-CBC alone.
- Fresh IVs/nonces — generate a new random IV for every encrypt operation. Never reuse with the same key.
- Use OS CSPRNG — `secrets` in Python, `crypto.randomBytes` in Node.js. Never `random`/`Math.random()` for security values.
- Secure cookie flags — `Secure`, `HttpOnly`, `SameSite=Strict` on session and auth cookies.
- Cache-Control: no-store — on all responses containing sensitive data.
- Enable certificate validation — never ship code with `verify=False` or equivalent. Use proper CA bundles for internal services.