Three months after Series A, the same pattern repeats across nearly every B2B SaaS I audit: analytics is installed but it isn’t answering questions. Founders can’t tell investors where their activation actually happens. Product teams ship features without knowing whether they moved the needle. Growth conversations stall on guesswork.
One developer captures “Sign Up”, another captures “user_signup”, a third uses autocapture and never names anything. Six months in, your funnel data is unreadable.
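The usual fix is a single source of truth for event names that a lint check can enforce before code ships. A minimal sketch in TypeScript, assuming a snake_case, past-tense convention; the event names here are illustrative, not a prescribed taxonomy:

```typescript
// Canonical event registry: every tracked event is declared once, here.
// Names are illustrative placeholders, not a real client's plan.
const EVENTS = ["user_signed_up", "project_created", "invite_sent"] as const;

// One convention, enforced mechanically (snake_case, verb in past tense).
const SNAKE_CASE = /^[a-z]+(_[a-z]+)*$/;

// True only for names that follow the convention AND exist in the registry.
function isValidEventName(name: string): boolean {
  return SNAKE_CASE.test(name) && (EVENTS as readonly string[]).includes(name);
}

// A CI step can fail the build if any declared name breaks the convention.
function lintRegistry(): string[] {
  return EVENTS.filter((n) => !SNAKE_CASE.test(n));
}
```

With a registry like this, "Sign Up" vs. "user_signup" stops being a matter of developer taste and becomes a failing build.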
Without proper identity stitching, your real conversion rate is hidden behind a fake drop-off between your marketing site and your authenticated app. You’re reading the wrong number every time.
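Conceptually, identity stitching means the events a visitor fired anonymously get attached to the user they later become. A self-contained sketch of that merge, assuming an in-memory store; real tools do this server-side when you call something like PostHog's `posthog.identify(userId)` at signup or login:

```typescript
type CapturedEvent = { distinctId: string; name: string };

// In-memory stand-in for an analytics store, keyed by distinct ID.
const eventsById = new Map<string, CapturedEvent[]>();

function capture(distinctId: string, name: string): void {
  const list = eventsById.get(distinctId) ?? [];
  list.push({ distinctId, name });
  eventsById.set(distinctId, list);
}

// On signup/login, fold the anonymous visitor's history into the user's.
function identify(anonymousId: string, userId: string): void {
  const anonEvents = eventsById.get(anonymousId) ?? [];
  const userEvents = eventsById.get(userId) ?? [];
  eventsById.set(userId, [...anonEvents, ...userEvents]);
  eventsById.delete(anonymousId);
}
```

Without that merge, the pricing-page view and the first in-app action land under two different IDs, and your funnel reports a drop-off that never happened.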
A board-ready chart isn’t the same as a decision tool. If your analytics doesn’t answer the three questions your team actually argues about, it’s wallpaper.
A working document mapping every event to the question it answers — not a generic checklist. 10–20 events, properties, naming conventions, identity strategy. Yours to keep and iterate on.
PostHog or GA4 (or both, each with a clear role), properly installed and configured. Custom events firing where they matter, QA’d in staging before they hit production.
Acquisition funnel, activation & retention, and feature usage. Built against your real events, with the breakdowns and cohorts that surface real signal.
I record a screen-share explaining what your data is showing you, where the leaks are, and what I’d focus on next. Send it to your team or your investors.
Honesty is a feature. Here’s what falls outside this sprint:
If you’re not happy with the deliverables on Day 5, I refund the engagement in full.
No clawback clauses, no scope arguments. The work either earns its fee or it doesn’t.
60-minute kickoff call. I work with you to map the user journey, identify the three to five questions your data needs to answer, and audit whatever instrumentation exists today.
Output: short diagnostic memo + question list.
I draft the full event taxonomy: names, properties, where each fires, identity strategy, anonymous vs identified handling. We review on a 30-minute call before any code is written.
Output: tracking-plan.md, version 1.0.
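A tracking plan can also live as code alongside the markdown, so the schema is enforced rather than aspirational. A hedged sketch of that idea; the events, questions, and property names below are invented for illustration:

```typescript
// Each event maps to the question it answers and the properties it requires.
interface PlanEntry {
  question: string;        // the business question this event answers
  requiredProps: string[]; // properties every capture call must include
}

const TRACKING_PLAN: Record<string, PlanEntry> = {
  signup_completed: {
    question: "Where do visitors convert to users?",
    requiredProps: ["plan", "referrer"],
  },
  report_exported: {
    question: "Which accounts reach the activation moment?",
    requiredProps: ["report_type"],
  },
};

// Validate a capture call against the plan before it ships.
function missingProps(event: string, props: Record<string, unknown>): string[] {
  const entry = TRACKING_PLAN[event];
  if (!entry) return [`unknown event: ${event}`];
  return entry.requiredProps.filter((p) => !(p in props));
}
```

A check like this in tests or code review is what keeps version 1.0 of the plan from drifting by version 1.3.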
I write the instrumentation against your codebase — frontend events, server-side events for high-value actions, identity stitching, reverse proxy if needed. Verified in staging.
Output: pull request(s), QA notes, events live in your PostHog or GA4 project.
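Server-side events earn their extra wiring on high-value actions because ad blockers and flaky clients silently drop browser events. One reliability detail worth sketching is idempotency, so a retried webhook can't double-count a conversion. This is a generic pattern, not a specific PostHog or GA4 API:

```typescript
type ServerEvent = { userId: string; name: string; insertId: string };

const sent: ServerEvent[] = [];
const seenInsertIds = new Set<string>();

// Capture with an idempotency key: retries carrying the same insertId are
// no-ops, so a replayed payment webhook can't inflate the conversion count.
function captureOnce(event: ServerEvent): boolean {
  if (seenInsertIds.has(event.insertId)) return false; // duplicate, dropped
  seenInsertIds.add(event.insertId);
  sent.push(event);
  return true;
}
```

In practice the dedup key comes from the upstream system, for example a Stripe event ID, so the same real-world action always produces the same key.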
Three dashboards built against the new events: acquisition funnel, activation & retention, feature usage. Cohorts defined for your key user segments.
Output: live dashboards + saved insights, ready to share.
I record a 30-minute Loom walkthrough showing what the data says, where I’d look first, and the three things I’d instrument next quarter. Final call to answer questions.
Output: handoff Loom, 30-day recommendations doc, all source materials.
Drawn from projects I’ve shipped. New examples added as the practice grows.
I’d rather spend two minutes showing you a real PostHog dashboard I’ve built and how I’d approach your product specifically than make claims on a sales page.

I started Edgeworth because I kept watching technical founders raise serious money and then run their product decisions on intuition — because their analytics were either missing, broken, or unreadable.
My approach is engineer-first: clean event taxonomies, server-side reliability where it matters, and dashboards that exist to answer specific questions rather than to look impressive in a deck. I run sprints solo, which means you talk to the person doing the work, not an account manager.
Edgeworth Analytics is a single-operator consultancy. No agency overhead, no account managers, no offshore handoffs. You work with me directly, start to finish.
Many clients want help running experiments, defining activation metrics, or instrumenting the next quarter’s features.
Optional retainer from $1,500 USD/mo. Available after the sprint, never required.
By scoping aggressively. The sprint covers tracking plan, instrumentation, and three dashboards — deliberately not your whole data stack. Anything outside that scope is flagged on Day 1 and quoted separately. The constraint is what makes the timeline real.
PostHog is my primary tool because it covers product analytics, session replay, feature flags, and experiments in one stack. I also implement GA4 alongside it for marketing teams, and I can work with Mixpanel or Amplitude if you’re already invested. I won’t recommend a tool migration during the sprint — the work is built on what you have.
Hourly billing rewards the wrong behaviour and creates friction every time scope shifts. A flat fee aligns my incentive with delivering inside the window. If a sprint takes longer than five days, that’s on me, not you.
There’s a 20-minute audit call before any commitment. If the work doesn’t fit a standard sprint, I’ll tell you, and either quote a custom engagement or refer you to someone better suited. About one in three calls ends without a booking, which is the point.
The deliverables are concrete and the guarantee is real. You see the tracking plan before any code is written. You see the dashboards before final payment. If the work doesn’t hold up on Day 5, I refund the engagement in full — no clawback clauses, no scope arguments.
Read access to your codebase or a willing engineer for the implementation phase, admin access to your PostHog or GA4 project (or permission to set one up), and roughly two hours of your time across the week for the kickoff, tracking-plan review, and final handoff.
Book a no-pitch audit call. I’ll open your product, show you what your analytics is and isn’t telling you, and you decide if a sprint makes sense.