Founders Insight

Fenil Suchak
Nov 11, 2025
What Is a Personalized GTM Database?
Most "GTM databases" are just contact directories flexing "50M companies"
→ but how many are actually active?
Why account tiering matters:
Horizontal SaaS serves everyone. You need to prioritize based on timing, activity, and traits - not surface filters.
The problem:
Technographics work for qualification, not timing. They don't show what's changing, why it matters, or who's involved.
Static databases → wasted credits on inactive accounts.
Companies move in time.
Last month's non-priority account could be Tier 1 today (say, after adopting usage-based billing).
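To make that concrete: a minimal sketch of timing-aware tiering. The signal names, weights, and thresholds are hypothetical; the point is scoring on what changed recently, not on static traits.

```python
from datetime import datetime, timedelta

# Hypothetical activity signals and weights; funding is weighted low because
# it's the commoditized signal everyone already reacts to.
SIGNAL_WEIGHTS = {
    "adopted_usage_based_billing": 40,
    "posted_platform_engineer_roles": 25,
    "changed_pricing_page": 20,
    "raised_new_round": 10,
}

def tier_account(signals: list[dict], window_days: int = 30) -> str:
    """Re-tier an account from what changed recently, not from static traits."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    score = sum(
        SIGNAL_WEIGHTS.get(s["type"], 0)
        for s in signals
        if s["observed_at"] >= cutoff
    )
    if score >= 50:
        return "Tier 1"
    if score >= 20:
        return "Tier 2"
    return "Deprioritized (re-check next week)"

# Nothing last quarter; a billing-model change moves this account up today.
print(tier_account([
    {"type": "adopted_usage_based_billing", "observed_at": datetime.utcnow()},
    {"type": "changed_pricing_page", "observed_at": datetime.utcnow()},
]))  # -> Tier 1
```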

Aditya Lahiri
Oct 29, 2025
Is the Data Layer for Agent Consumption Just Text-to-SQL?
Agents don't think in filters - they think in meaning.
They combine world knowledge with user context to craft semantic queries based on intent, not keywords.
The problem: The industry is stuck translating natural language into pre-set SQL filters. Rigid schemas. Static filters.
Our agent-native data layer: Vector embeddings + semantic matching for company activities:
What functions are they building?
What migrations are happening?
Who's joining and leaving?
Static filters for deterministic data:
Headcount, location, funding stage
The bet: Agents are smart enough to query both static and semantic fields. We're building for agents that reason around meaning.
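A minimal sketch of that split. The company records, field names, and the embed() stand-in are all illustrative; a real deployment would swap in an actual embedding model and vector store.

```python
import numpy as np

# embed() is a placeholder so the sketch runs end to end; it does NOT produce
# meaningful similarity. Swap in a real embedding model for that.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=64)
    return vec / np.linalg.norm(vec)

# Each company carries deterministic fields plus an embedding of its recent
# activity text (what it's building, migrating, hiring for).
COMPANIES = [
    {"name": "Acme", "headcount": 120, "stage": "Series B",
     "activity": "migrating billing to usage-based pricing, hiring platform engineers"},
    {"name": "Globex", "headcount": 45, "stage": "Seed",
     "activity": "launched a marketing analytics dashboard"},
]
for company in COMPANIES:
    company["vec"] = embed(company["activity"])

def agent_query(intent: str, min_headcount: int = 0, stage: str | None = None) -> list[dict]:
    """Static filters narrow the candidates; semantic similarity ranks what's left."""
    qvec = embed(intent)
    candidates = [
        c for c in COMPANIES
        if c["headcount"] >= min_headcount and (stage is None or c["stage"] == stage)
    ]
    return sorted(candidates, key=lambda c: -float(qvec @ c["vec"]))

# The agent pins the deterministic part (headcount, stage) with filters and
# expresses the fuzzy part ("who is rethinking monetization?") as meaning.
for company in agent_query("companies changing their pricing model", min_headcount=100):
    print(company["name"])
```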

Fenil Suchak
Oct 22, 2025
What Does GTM Data Layer 2.0 Look Like for Humans and GTM Agents?
We're studying GTM timing, movements, and real-time signals that reveal pain points.
One thing became obvious: People and Company APIs are still CRUD operations.
They weren't built for GTM search or insight discovery.
Getting to meaningful insights is slow, inefficient, and fragmented.
The shift: Many teams are refactoring codebases to make them agent-ready. The same shift is coming to GTM.
A data layer for agentic search looks nothing like CRUD APIs.
Converting natural language to API params repeatedly is wildly ineffective. We're just masking text-to-SQL as search.
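For contrast, a short sketch of the pattern being critiqued. The parameter schema is hypothetical and the extraction step is hardcoded to keep it brief, but the failure mode is the point: anything the schema can't express is dropped before the search even runs.

```python
# Hypothetical fixed schema of a CRUD-style company search endpoint.
FIXED_PARAMS = {"industry", "employee_range", "location", "funding_stage"}

def nl_to_crud_params(question: str) -> dict:
    # In practice this step is an LLM prompt, but its output is still capped
    # at whatever the endpoint accepts.
    extracted = {
        "industry": "software",
        "employee_range": "51-200",
        # "rethinking their billing stack" has no field to land in,
        # so the intent behind the question is silently lost.
    }
    return {k: v for k, v in extracted.items() if k in FIXED_PARAMS}

print(nl_to_crud_params("SaaS companies, 51-200 people, rethinking their billing stack"))
# -> {'industry': 'software', 'employee_range': '51-200'}
```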

Aditya Lahiri
Oct 15, 2025
How Are Fast, Cheap Open Source Models Changing How We Build?
Inference providers like Groq unlock two shifts:
LLM-native architecture
We replaced deterministic code with LLMs from day 0.
Example: Natural language blocklists. Instead of hardcoded lists, we use Groq + web search in real time. "Exclude marketing agencies" just works.
Rapid prototyping with model swaps
OSS models for intermediate reasoning
SOTA models for complex reasoning
v1 fast > perfection. Optimize later based on usage.
The insight: Orchestrate a hierarchy of models - cheap/fast for most flows, expensive/smart only when needed.
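A rough sketch of both shifts together, assuming Groq's OpenAI-compatible Python client (pip install groq). The model names are illustrative and the web-search step is left out to keep it short.

```python
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

FAST_MODEL = "llama-3.1-8b-instant"      # assumed cheap OSS model for routine steps
SMART_MODEL = "llama-3.3-70b-versatile"  # assumed larger model for harder reasoning

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def violates_blocklist(company_summary: str, rule: str) -> bool:
    """Natural-language blocklist: no hardcoded list, just a yes/no judgment."""
    answer = ask(
        FAST_MODEL,
        f"Rule: {rule}\nCompany: {company_summary}\n"
        "Does this company violate the rule? Answer YES or NO.",
    )
    return answer.upper().startswith("YES")

def route(prompt: str, needs_complex_reasoning: bool) -> str:
    """Cheap/fast for most flows, expensive/smart only when needed."""
    return ask(SMART_MODEL if needs_complex_reasoning else FAST_MODEL, prompt)

print(violates_blocklist(
    "Acme Digital, a 30-person paid-ads and SEO agency",
    "Exclude marketing agencies",
))
```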

Fenil Suchak
Oct 12, 2025
Are Speed and Insight the Only Moats in GTM?
With competitors appearing weekly, timely and precise outreach wins.
If you're using static filters - even technographics - you're too late or off on timing.
Funding signals are commoditized. Everyone has them, everyone reacts, and it turns into signal slop.
You need creative timing and insight - leading indicators of pain with nuance.
Great GTM teams spot live, nuanced movements:
Job posts revealing plans
Website/pricing changes
Hyper-specific layoffs
Sub-departments that have stayed static too long
Find leading indicators of pain.
That's where timing wins and GTM teams succeed.
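As one concrete example, a sketch of the last movement on that list: a sub-department whose headcount hasn't budged. The snapshot shape and the six-month threshold are hypothetical.

```python
from datetime import date

def stale_subdepartments(snapshots: dict[str, list[tuple[date, int]]],
                         min_months: int = 6) -> list[str]:
    """Flag sub-departments whose headcount is unchanged across recent snapshots."""
    flagged = []
    for dept, series in snapshots.items():
        recent = sorted(series)[-min_months:]
        if len(recent) >= min_months and len({count for _, count in recent}) == 1:
            flagged.append(dept)
    return flagged

acme = {
    "data-platform": [(date(2025, m, 1), 14) for m in range(4, 10)],    # flat for 6 months
    "solutions-eng": [(date(2025, m, 1), 5 + m) for m in range(4, 10)],  # still growing
}
print(stale_subdepartments(acme))  # -> ['data-platform']
```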

Aditya Lahiri
May 12, 2025
What Are the Dumb Ways to Die as an AI Startup in 2025?
Assume models won't get better: Building your moat around "GPT-5 can't do X yet" is a death sentence.
Your defensibility must be orthogonal to model capabilities.
Treat evals as performative: Without rigorous evaluation frameworks, you're flying blind. Evals are your early warning system.
Vibe code production features: Ship fast, not recklessly. In AI products, trust compounds slowly and evaporates instantly.
Skip in-person time in SF: The density of AI talent, customers, and capital in SF is unmatched. Serendipity still matters.
Never build a data moat: Proprietary data and feedback loops are real differentiators.
Treat non-AI infrastructure as secondary: A brilliant LLM + broken integrations = customers leave.
Which mistake do you see most often?
