I Ran a Red Team Engagement Against My Own Company for 6 Months — Here's Every Vulnerability I Found
Last June, I walked into my CISO's office and pitched an idea that made her visibly uncomfortable: let me spend the next six months trying to break into our own systems, compromise our employees, and exfiltrate data — all without telling anyone else in the company. No safety net. No "hey, the security team is running a drill" emails. Just me, a laptop, and the same tools any motivated attacker would use.
She said yes. What I found kept me up at night.
I'm writing this because every company I've ever worked at says "we take security seriously" in their breach notification letters. After six months of methodically dismantling my own employer's defenses, I can tell you that phrase is almost always a lie — not because people don't care, but because the gap between what companies think their security posture is and what it actually is could swallow a fleet of trucks.
Here's everything. The good, the bad, and the stuff our legal team really wished I hadn't put in a slide deck.
The Rules of Engagement
Before I started, we established ground rules documented in a signed memo that only three people knew existed — me, the CISO, and our General Counsel:
- Scope: All company-owned digital assets, physical offices (3 locations), and employee interactions
- Off-limits: Production database deletion, anything that could cause actual customer harm, personal devices of employees
- Duration: June 1 – November 30, 2025
- Reporting: Weekly encrypted reports to CISO only
- Legal cover: A signed authorization letter I carried at all times (never needed it, which tells you something)
With the rules set, I got to work.
Month 1: Reconnaissance — The Outside View
I started where any attacker would: outside the perimeter, armed with nothing but Google and Shodan.
What I Found in 48 Hours
Within two days of passive reconnaissance, I had:
- 14 subdomains not listed in our asset inventory, including staging-api.ourcompany.io and jenkins-old.ourcompany.io
- 3 exposed services on non-standard ports (8080, 8443, 9090) — two were running outdated Tomcat instances
- A public Trello board used by the marketing team that contained internal campaign timelines, vendor contracts, and one card titled "API keys for social media tools" (yes, with actual keys)
- Employee email addresses for 340 of our 400 employees harvested from LinkedIn, GitHub commits, conference talk metadata, and WHOIS records
The Trello board alone would have been a goldmine for a social engineer. I reported it immediately as a critical finding and kept going.
DNS and Certificate Transparency
Certificate Transparency logs revealed 23 subdomains we'd issued TLS certs for. Cross-referencing with our internal asset inventory showed 7 that IT had no record of. Three of those were still serving live applications.
```text
$ subfinder -d ourcompany.io -silent | sort -u | wc -l
47
$ cat known_assets.txt | wc -l
26
# 21 unknown subdomains. Not great.
```
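If you want to reproduce the cross-referencing step, Certificate Transparency alone gets you most of the way. Here's a minimal sketch using crt.sh's public JSON endpoint; the ct_subdomains.txt filename is just for illustration, and known_assets.txt is the inventory export mentioned above:

```text
# Pull CT-logged names for the domain and diff them against the known-asset list.
$ curl -s 'https://crt.sh/?q=%25.ourcompany.io&output=json' \
    | jq -r '.[].name_value' | tr '[:upper:]' '[:lower:]' \
    | sed 's/^\*\.//' | sort -u > ct_subdomains.txt
$ comm -23 ct_subdomains.txt <(sort -u known_assets.txt)   # names in CT logs but not in inventory
```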
Month 2: The Perimeter — Poking at the Walls
Unpatched Servers
I ran targeted vulnerability scans against our external-facing infrastructure (carefully, during off-hours, throttled to avoid triggering our SOC alerts — spoiler: they wouldn't have noticed anyway).
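For reference, "throttled" in practice meant something like the sketch below; the port list and timing values here are illustrative rather than the exact commands from my notes.

```text
# Slow, low-volume service and version scan of a single external host.
# -T2 plus --max-rate keeps probe volume well under typical alerting thresholds.
$ sudo nmap -sS -sV -Pn -T2 --max-rate 10 -p 443,8080,8443,9090 vpn.ourcompany.io
```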
Critical findings:
| Asset | Vulnerability | CVE | CVSS | Days Unpatched |
|---|---|---|---|---|
| vpn.ourcompany.io | Fortinet FortiOS RCE | CVE-2024-21762 | 9.8 | 147 days |
| mail.ourcompany.io | Exchange ProxyNotShell | CVE-2022-41082 | 8.8 | 820 days |
| jenkins-old.ourcompany.io | Jenkins arbitrary file read | CVE-2024-23897 | 9.8 | 89 days |
That Exchange server had been "scheduled for decommissioning" for over two years. It was still receiving mail for three distribution lists.
The Jenkins instance was supposed to have been shut down after we migrated to GitHub Actions. It still had active credentials for our AWS staging account baked into build configurations. I used those credentials to enumerate 14 S3 buckets (more on that later).
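Worth noting: externally facing flaws like these are exactly what CISA's Known Exploited Vulnerabilities catalog exists to flag, and checking scanner output against that feed is nearly free. A sketch (our_cves.txt is hypothetical: just a list of CVE IDs pulled from your scan results):

```text
# Cross-reference scanner findings against CISA's KEV catalog.
$ curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
    | jq -r '.vulnerabilities[].cveID' > kev_ids.txt
$ grep -Ff kev_ids.txt our_cves.txt   # any hit here is a drop-everything patch
```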
VPN: The Front Door Was Unlocked
The Fortinet vulnerability was the worst. CVE-2024-21762 is a pre-authentication remote code execution flaw. In a real attack, this would give an adversary a foothold inside our network in under a minute.
I didn't exploit it in production (rules of engagement), but I set up an identical FortiGate in a lab environment and confirmed the exploit was trivial — a single HTTP request with a crafted payload:
```text
POST /api/v2/cmdb/system/sniffer HTTP/1.1
Host: vpn.ourcompany.io
Content-Type: application/json
Content-Length: ...

{
  "name": "sniffer1",
  ...
  [crafted overflow payload]
}
```
I flagged this as a P0 to the CISO. It was patched within 36 hours. The other two took weeks.
Month 3: Phishing — Humans Are the Real Vulnerability
This is the part that made me genuinely sad.
Campaign 1: The Classic Credential Harvest
I registered ourcompany-sso.com and built a pixel-perfect replica of our Okta login page. Then I sent 200 employees an email from it-support@ourcompany-sso.com with the subject line: "Action Required: Password Expiration Notice."
Results:
- 67 of 200 (33.5%) clicked the link
- 41 of 200 (20.5%) entered their credentials
- 12 of those 41 had MFA enabled but approved a push notification anyway
- Average time from email send to credential entry: 4 minutes 22 seconds
One in five of the employees I emailed handed me their credentials within five minutes. And 12 people approved MFA push notifications for a login they didn't initiate — a textbook MFA fatigue scenario.
Campaign 2: The Spear Phish
For this one, I got personal. I picked 20 senior employees and crafted individualized emails based on their LinkedIn activity, recent conference talks, and public social media posts.
One email to a VP of Engineering referenced a real open-source project he'd starred on GitHub that week, with a link to "a related tool I thought you'd find interesting" that pointed to a payload hosted on my C2 infrastructure.
Results: 14 of 20 (70%) clicked. 8 executed the payload (a benign beacon that phoned home).
The VP of Engineering — a person who approves our security budget — ran an unsigned binary from an unknown source because it was vaguely related to a Rust crate he'd been looking at.
Campaign 3: The Callback Phish (Vishing)
I left 50 voicemails claiming to be from "IT support" regarding a "suspicious login detected on your account." I asked employees to call back a Google Voice number.
22 called back. Of those, 17 gave me their employee ID and last four of their SSN when I asked for "verification purposes." One person offered to give me their full SSN before I could even ask.
Month 4: Inside the Network — Lateral Movement
Using credentials harvested from the phishing campaigns (with CISO authorization to use them in a controlled manner), I authenticated to our VPN and began exploring the internal network.
Active Directory: A Mess
Our AD environment was a horror show:
- Kerberoasting: 34 service accounts with SPNs, 11 of which had passwords that cracked in under an hour using hashcat with rockyou.txt. One service account had Domain Admin privileges.
- LLMNR/NBT-NS: Enabled across the entire network, which makes poisoning trivial. I captured NTLMv2 hashes just by sitting on the network with Responder running.
- GPO permissions: Three non-admin users had write access to GPOs linked to the Domain Controllers OU. That's a one-hop path to domain compromise.
```text
# Kerberoasting results
$ GetUserSPNs.py ourcompany.local/raj.patel -request

ServicePrincipalName            Name          MemberOf
------------------------------  ------------  --------------------------------
MSSQLSvc/db01.ourcompany.local  svc_sql       CN=Domain Admins,...
HTTP/intranet.ourcompany.local  svc_intranet  CN=IT-Services,...
...

# 11 of 34 cracked in < 1 hour
$ hashcat -m 13100 kerberoast.txt rockyou.txt --status
...
svc_sql:Summer2024!
svc_intranet:Welcome123!
svc_backup:Backup2023
```
Yeah. Summer2024! on a Domain Admin service account.
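If you want to check whether you have the same problem, one quick test is an LDAP query for accounts that both carry an SPN and sit in a privileged group. A sketch; the DC hostname, bind account, and group DN are placeholders for your own environment:

```text
# Find Kerberoastable accounts that are also Domain Admins.
$ ldapsearch -H ldap://dc01.ourcompany.local -x -D 'readonly@ourcompany.local' -W \
    -b 'DC=ourcompany,DC=local' \
    '(&(servicePrincipalName=*)(memberOf=CN=Domain Admins,CN=Users,DC=ourcompany,DC=local))' \
    sAMAccountName servicePrincipalName
```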
The Crown Jewels: 22 Minutes to Domain Admin
From my initial VPN access with a phished credential, the full attack chain to Domain Admin took 22 minutes:
- VPN in with phished creds (0:00)
- Enumerate SPNs, request TGS tickets (0:03)
- Crack svc_sql password offline (0:08)
- Authenticate as svc_sql — Domain Admin (0:12)
- DCSync to dump all domain hashes (0:15)
- Golden Ticket created for persistence (0:22)
Twenty-two minutes from phished employee to complete domain control. That's not a security posture. That's a suggestion.
Month 5: Cloud — Where the Real Data Lives
AWS S3: The Gift That Keeps on Giving
Remember those Jenkins credentials? They led me to an IAM user (jenkins-deploy-staging) with far more permissions than a CI/CD pipeline should ever have. Specifically, s3:* on *.
I enumerated every S3 bucket in our account:
```text
$ aws s3 ls --profile jenkins-staging
2023-04-12  customer-data-exports
2024-01-08  marketing-assets-prod
2024-03-15  db-backups-encrypted
2024-06-22  analytics-raw-events
2024-09-01  tmp-data-migration
2023-11-30  compliance-reports
```
The bucket customer-data-exports contained unencrypted CSV files with PII for 2.3 million customers. Names, emails, phone numbers, last four digits of payment cards. The bucket ACL? AuthenticatedUsers — meaning any AWS account holder on earth could read it.
The tmp-data-migration bucket had full database dumps from our production PostgreSQL instance, including hashed (but unsalted, MD5) passwords.
I reported this as a severity-1 incident. The customer data bucket was locked down within 4 hours. The database dumps took another week to clean up because "we might still need those."
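For the curious, "locked down" amounts to roughly three changes: drop the ACL grant, block public access at the bucket level, and turn on default encryption. A sketch of that shape of fix, not the infra team's exact change set:

```text
# Remove the AuthenticatedUsers grant and block all ACL/policy-based public access.
$ aws s3api put-bucket-acl --bucket customer-data-exports --acl private
$ aws s3api put-public-access-block --bucket customer-data-exports \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Default server-side encryption for new objects.
$ aws s3api put-bucket-encryption --bucket customer-data-exports \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'
```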
IAM: Over-Privileged Everything
A review of our IAM policies revealed:
- 7 IAM users with AdministratorAccess (we have 3 infra engineers)
- No SCPs (Service Control Policies) on any organizational unit
- Root account had been used 6 times in the last 90 days with no MFA
- 43 access keys older than 1 year, 12 of which had never been rotated
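The frustrating part is that most of this list is visible in a single IAM credential report, so the data was two commands away the whole time. A sketch:

```text
# Generate and download the account-wide credential report (CSV, base64-encoded in the API response).
$ aws iam generate-credential-report
$ aws iam get-credential-report --query Content --output text | base64 -d > cred_report.csv
$ head -1 cred_report.csv   # columns include mfa_active, password_last_used, access_key_1_last_rotated, ...
```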
Kubernetes: The New Perimeter Nobody Watches
Our EKS clusters were running with:
- Default service account tokens mounted in every pod
- No network policies — every pod could talk to every other pod
- cluster-admin ClusterRoleBinding for 4 developers "for debugging"
- Tiller (Helm v2) still running in kube-system with no RBAC — a known privilege escalation path
I pivoted from a compromised web app pod to the Kubernetes API server to cluster-admin to AWS node IAM role in about 15 minutes.
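Two of those findings have one-screen fixes. A default-deny NetworkPolicy and disabling token auto-mounting on the default service account look roughly like this (a sketch; the prod namespace is a placeholder, and network policies only bite if your CNI actually enforces them):

```text
# Default-deny all ingress and egress for pods in a namespace.
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
EOF

# Stop auto-mounting the default service account token into pods that don't need it.
$ kubectl patch serviceaccount default -n prod \
    -p '{"automountServiceAccountToken": false}'
```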
Month 6: Physical Security & Social Engineering
Tailgating
Over two weeks, I attempted to tailgate into our three office locations 30 times total (10 per office).
- Office A (HQ): 8 of 10 successful
- Office B (Engineering): 6 of 10 successful
- Office C (Remote/Satellite): 10 of 10 successful
At Office C, I walked in behind a delivery driver, sat down at an empty desk, plugged in a USB Rubber Ducky to an unlocked workstation, and had a reverse shell in 40 seconds. Nobody spoke to me for the entire 2 hours I was there. I left a thank-you note on the desk.
USB Drop
I left 20 branded USB drives (with our company logo, ordered from a print shop) in the parking lot, cafeteria, and lobby of our HQ. Each contained a benign tracking payload that would beacon home when executed.
14 of 20 drives were plugged in within 48 hours. One was plugged into a machine in the finance department.
Badge Cloning
Using a Proxmark3 and 10 seconds of proximity to an employee at a coffee shop near our office, I cloned their HID badge. This gave me unrestricted access to all three buildings for the remainder of the engagement. Nobody noticed the badge number appearing in two locations simultaneously.
The Aftermath: What Got Fixed (and What Didn't)
After delivering my 200-page report, here's the scorecard six weeks later:
Fixed (Thank God)
- Fortinet VPN patched
- S3 customer data bucket locked down with proper ACL + encryption
- Jenkins-old decommissioned for real this time
- Service account passwords rotated, Kerberoasting mitigated
- MFA enforcement strengthened (number matching enabled)
- Trello board taken private
"In Progress" (Translation: Not Fixed)
- Active Directory GPO permissions
- IAM policy overhaul
- Kubernetes RBAC and network policies
- LLMNR/NBT-NS disabled across network
"Accepted Risk" (Translation: We Don't Care)
- Physical security improvements (too expensive)
- Employee security awareness training overhaul (we do annual CBTs, what more do you want)
- Badge system upgrade from HID Prox to DESFire (capital expenditure — next fiscal year, maybe)
- Legacy Exchange server decommissioning (it's on the roadmap)
The "accepted risk" items are what worry me most. The entire point of this exercise was to demonstrate that our threat model has gaps you could drive through. The response to proven, demonstrated, exploited physical security weaknesses was "it costs too much to fix."
Lessons Learned
- Assume breach is not a philosophy — it's your current state. Every network I've tested, including my own employer's, is already compromised or trivially compromisable. Plan accordingly.
- Patch management is a cultural problem, not a technical one. We have automated scanners. We have patch management tools. We have a vulnerability management program. And we still had a 9.8 CVSS RCE open for 147 days. The tools exist. The will to use them does not.
- Phishing will always work. A 20% credential harvest rate against employees who do annual security awareness training. Seventy percent success on targeted spear phishing against senior technical staff. You cannot train your way out of this. You can only build systems that assume credentials will be compromised.
- Cloud misconfiguration is the new open port. S3 bucket policies, IAM over-provisioning, missing SCPs — these are the 2025 equivalent of leaving FTP open to the internet. And most companies have no visibility into their cloud posture.
- Physical security is security's orphan child. In an era of zero trust and defense in depth, our physical offices are still defended by $2 proximity cards and social norms about holding doors open.
- The stuff that gets fixed is the stuff that scares executives. Customer data in a public S3 bucket? Fixed in hours. Badge cloning that could let an attacker walk into the CEO's office? "Accepted risk." The delta tells you everything about how your organization actually thinks about security.
I've anonymized the company and some details here, but the numbers are real. If you're a security professional and any of this surprises you, you haven't looked hard enough at your own organization. If you're an executive reading this, go authorize your security team to do the same thing. You won't like what they find, but you'll like a real breach even less.
Raj Patel is a Principal Security Engineer with 12 years of experience in offensive security, red teaming, and penetration testing. He holds OSCP, OSCE, and GXPN certifications. The company described in this article has been anonymized, and all findings were reported through proper channels before publication.