The Soham Parekh Problem
What Serial Moonlighting Reveals About AI Startups and Leadership Blind Spots
When a Moonlighter Exposes Silicon Valley's Governance Gap
In early July 2025, Soham Parekh became Silicon Valley's most talked-about engineer—for all the wrong reasons. The Mumbai-based software engineer, who holds degrees from the University of Mumbai and Georgia Tech, reportedly worked simultaneously for multiple high-profile AI startups over several years without the companies knowing.
In an interview on the daily tech show TBPN, Parekh confirmed the claims that he had been holding down multiple jobs at the same time, saying: "I'm not proud of what I've done. That's not something I endorse either. But no one really likes to work 140 hours a week, I had to do it out of necessity."
The controversy erupted when Suhail Doshi, founder of Playground AI and co-founder of Mixpanel, publicly exposed Parekh on social media, alleging that "90%" of his resume looked falsified and warning other companies in the ecosystem.
For executive leaders—especially those overseeing high-growth, remote-first AI teams—this isn't just a scandal. It's a strategic wake-up call.
The Scale of the Problem
Parekh is alleged to have worked at as many as four or five startups at the same time, many of them backed by Y Combinator. The companies reportedly included Playground AI, Lindy, Sync Labs, and Antimetal. In some cases, he contributed to product development and even appeared in company videos. In others, he disappeared after short stints once inconsistencies surfaced.
But Parekh isn't an isolated case. The broader context reveals systemic vulnerabilities in how startups manage remote talent. Nearly eight in 10 remote employees (79%) have worked at least two jobs at the same time in the past year, according to ResumeBuilder research. More alarming, Randstad India reported a 25-30% increase in moonlighting between 2020 and 2023, with workers citing factors such as low pay and the flexibility of remote work.
How Systemic Failures Enabled Serial Moonlighting
The root causes extend far beyond one individual's choices—they reveal three critical governance failures endemic to fast-scaling AI startups.
1. Speed Over Structure: The Due Diligence Deficit
Y Combinator and venture-backed startups are conditioned to prioritize rapid hiring to capture market opportunities. But when early engineers are building core AI systems—the foundation of your entire product—basic verification becomes existential, not optional.
Without employment verification procedures, even your most sensitive roles can be filled by someone simultaneously employed elsewhere. The Parekh case demonstrates how easily location requirements can be bypassed with VPNs and IP masking, even during live video calls.
2. Remote-First Without Risk-First Thinking
According to Robert Half research, the share of new fully in-office roles continued to decline over the course of 2024, confirming that flexible work arrangements are here to stay. However, many young companies adopted distributed teams without implementing corresponding security, monitoring, and verification systems.
Remote-first doesn't mean trust-only. It demands resilient governance structures that support genuine flexibility while detecting abuse.
3. The Human Observability Gap
Ironically, many AI startups build sophisticated tools to monitor, automate, and optimize systems—yet lack basic visibility into human contributions. When an engineer is central to your product roadmap, you need to understand not just what they deliver, but how, when, and where they work.
Human observability—monitoring how people interact with systems, data, and teammates—is as critical as monitoring your AI models themselves.
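For teams that want to make this concrete, the sketch below shows one way such instrumentation could look in practice: a small Python decorator that records who performed an action in internal tooling, and when. It is an illustration only; the function names, fields, and logging approach are assumptions, not a reference to any particular company's stack.

```python
# A minimal sketch, not a prescription: lightweight "human observability"
# instrumentation for internal tooling. Names and fields are illustrative
# assumptions, not any specific company's system.
import functools
import getpass
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("human_observability")

def audited(action: str):
    """Decorator that records who performed an action, when, and on what."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            record = {
                "action": action,
                "actor": getpass.getuser(),                           # who
                "timestamp": datetime.now(timezone.utc).isoformat(),  # when
                "target": kwargs.get("target", "unspecified"),        # what
            }
            audit_log.info(json.dumps(record))
            return func(*args, **kwargs)
        return wrapper
    return decorator

@audited("deploy_model")
def deploy_model(target: str = "staging"):
    # Placeholder for real deployment logic.
    print(f"Deploying to {target}")

if __name__ == "__main__":
    deploy_model(target="production")
```

The point is not surveillance; it is that contribution patterns become reviewable the same way model behavior already is.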
The Compensation Ethics Trap
Parekh's case reveals a deeper ethical hazard embedded in startup compensation strategies. He said he began juggling multiple jobs in 2022 out of financial necessity, adding that he had deferred an offer to graduate school and opted for an online degree instead.
Startups often offer cash-light, equity-heavy contracts to preserve runway and attract talent willing to bet on upside. While fiscally efficient, this model can unintentionally create survival pressures—especially for international contractors—that encourage overcommitment, burnout, and ultimately, deception.
When compensation models ignore basic financial viability, they risk creating ethical hazards where talent feels forced to choose between sustainability and survival.
Strategic Leadership Response Framework
This isn't just a human resources problem—it's a leadership and governance problem. Here's what boards, CEOs, and senior teams should implement immediately:
1. Build Resilient Verification Systems
Create policies that support trust while verifying critical details—identity, location, concurrent employment—especially for core technical roles. Require full disclosure of employment status with periodic verification, not just initial screening.
2. Institute Ethical Human Workload Monitoring
Deploy internal systems (with clear boundaries and consent) to understand how and when engineers contribute—without creating surveillance culture. Monitor for patterns that suggest overcommitment or burnout as early warning systems for ethical issues.
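As a rough illustration of what such an early-warning signal could look like, here is a minimal sketch that flags an engineer whose activity timestamps (for example, commit history exported with their consent) regularly span an implausibly long daily window. The thresholds and data shapes are assumptions chosen for illustration, not recommended values.

```python
# A minimal sketch under assumed inputs: given one engineer's activity
# timestamps, flag sustained patterns consistent with overcommitment.
from collections import defaultdict
from datetime import datetime

def daily_activity_spans(timestamps):
    """Group timestamps by date and return the active span (hours) per day."""
    by_day = defaultdict(list)
    for ts in timestamps:
        by_day[ts.date()].append(ts)
    return {
        day: (max(times) - min(times)).total_seconds() / 3600
        for day, times in by_day.items()
    }

def flag_overcommitment(timestamps, span_hours=14, min_days=10):
    """Flag if activity regularly spans an implausibly long daily window."""
    spans = daily_activity_spans(timestamps)
    long_days = [day for day, hours in spans.items() if hours >= span_hours]
    return len(long_days) >= min_days

if __name__ == "__main__":
    # Synthetic example: early-morning and late-night activity every day
    # for two weeks produces a sustained 16-hour daily span.
    sample = []
    for d in range(1, 15):
        sample.append(datetime(2025, 6, d, 7, 30))
        sample.append(datetime(2025, 6, d, 23, 45))
    print(flag_overcommitment(sample))  # True: 14 days exceed the threshold
```

A flag like this should trigger a conversation about workload, not a disciplinary process; the goal is early support, not policing.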
3. Appoint Dedicated Trust and Ethics Leadership
Every AI company should designate a Chief Trust Officer or AI Ethics Oversight Lead responsible for:
- Establishing transparent hiring and verification policies
- Reviewing incentive structures for ethical misalignment
- Overseeing ethical AI development and deployment
- Ensuring people practices reflect stated company values
This isn't just risk management—it's brand protection and stakeholder trust-building.
4. Redesign Compensation for Sustainability
Avoid contracts that overload early hires with equity and minimal livable pay. Even at pre-revenue stages, offering sustainable compensation prevents talent from overcommitting across multiple jobs and enables focus, creativity, and genuine commitment.
5. Normalize Ethical Speak-Up Culture
Engineers at multiple companies reportedly suspected inconsistencies in Parekh's work but felt uncomfortable raising concerns. This cultural failure is as dangerous as any technical vulnerability. Leaders must normalize speaking up about ethical concerns, inconsistencies, or burnout without fear of reprisal.
From Cautionary Tale to Course Correction
Remarkably, Parekh has already joined a new AI company called Darwin Studios as a founding engineer, this time with what he says is a one-job-only commitment. As one founder noted, "Everybody deserves a second chance. Let's be part of his redemption arc."
The willingness to offer second chances reflects Silicon Valley's optimistic culture. But the real question is whether the ecosystem will learn from systemic failures or simply move on to the next viral moment.
Startups are building tools meant to transform industries. But if they fail to apply basic oversight to their own human operations, they risk turning those powerful tools into liabilities.
The Executive Leadership Audit
Every AI startup leader should ask themselves and their teams:
- Would we have detected Parekh's moonlighting?
- Would we have acted decisively when red flags emerged?
- Are we incentivizing people in ways that sustain or strain their ethics?
- Do we have the right leadership roles to oversee trust, both human and machine?
- Are our remote work policies robust enough to prevent abuse while supporting genuine flexibility?
If the answer to any of these questions is uncertain, you now have a strategic imperative. The Parekh case isn't just about one engineer's choices—it's about whether the leaders building AI's future can govern the present.
Leadership in the age of AI isn't just about building breakthrough technology. It's about ensuring that how we build is as trustworthy as what we build.