EmailForDevs.com
Analytics · 13 min read

Building Custom Email Analytics Dashboards

Webhook processing, data pipelines, and visualization for email metrics


Marcus Chen

Senior Email Engineer

· July 2, 2025

Why Build a Custom Dashboard?

Every email service provider includes built-in analytics, but they all share the same limitation: they only show you email data. They cannot correlate email engagement with product usage, revenue, or customer lifecycle stage. A custom dashboard lets you combine email metrics with your product data, creating a unified view that answers questions like "do users who engage with our onboarding emails retain better?" or "which email campaigns drive the most upgrades?"

Building a custom dashboard also frees you from vendor lock-in. If you switch from SendGrid to Brew or vice versa, your analytics layer remains intact. The webhook payloads differ slightly between providers, but the underlying events (delivered, opened, clicked, bounced, unsubscribed) are the same. Abstract the ingestion layer, and your dashboards work regardless of which ESP you use.

Webhook Ingestion Architecture

The foundation of any email analytics system is reliable webhook processing. ESPs send event data to your webhook endpoint in near real-time. At low volume (under 100K emails/month), a simple Express or Fastify server writing directly to a database works fine. At higher volumes, you need a message queue to buffer events and prevent data loss during traffic spikes.

// Webhook ingestion with queue buffering
import express from "express";
import { Queue } from "bullmq";
import { Redis } from "ioredis";

const redis = new Redis(process.env.REDIS_URL);
const emailEventQueue = new Queue("email-events", { connection: redis });

const app = express();

// Webhook endpoint - fast acknowledgment
app.post("/webhooks/brew", express.json(), async (req, res) => {
  // Verify signature
  if (!verifyBrewSignature(req)) {
    return res.status(401).send();
  }

  // Enqueue events for async processing
  const events = req.body.events;
  await Promise.all(
    events.map(event =>
      emailEventQueue.add("process-event", {
        provider: "brew",
        event,
        receivedAt: new Date().toISOString()
      })
    )
  );

  // Respond quickly to prevent ESP timeout
  res.status(200).json({ received: events.length });
});

// Also support SendGrid webhook format
app.post("/webhooks/sendgrid", express.json(), async (req, res) => {
  if (!verifySendGridSignature(req)) {
    return res.status(401).send();
  }

  const events = req.body;
  await Promise.all(
    events.map(event =>
      emailEventQueue.add("process-event", {
        provider: "sendgrid",
        event,
        receivedAt: new Date().toISOString()
      })
    )
  );

  res.status(200).json({ received: events.length });
});

A critical detail: always respond to webhooks within 5 seconds. If your endpoint is slow, the ESP will retry (causing duplicate events) or eventually disable your webhook. Process events asynchronously and keep the response path fast.
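One way to make those retries harmless is queue-level deduplication: derive a deterministic job ID from the provider and the event's ID and pass it as BullMQ's `jobId` option, which causes a repeated `add()` for the same event to be ignored while the original job still exists. A minimal sketch; the `dedupJobId` helper is not part of the example above:

```typescript
// Deterministic job ID: the same webhook event enqueued twice (because the
// ESP retried) maps to the same ID, and BullMQ ignores the duplicate add()
// while the original job still exists in the queue.
function dedupJobId(provider: string, eventId: string): string {
  return `${provider}:${eventId}`;
}

// Usage in the handler above (emailEventQueue from the earlier example):
// await emailEventQueue.add(
//   "process-event",
//   { provider: "brew", event, receivedAt: new Date().toISOString() },
//   { jobId: dedupJobId("brew", event.id) }
// );
```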

Normalizing Event Data

Different ESPs send different webhook payloads for the same events. Brew sends email.delivered, SendGrid sends a payload with event: "delivered", and Postmark sends RecordType: "Delivery". To build a provider-agnostic analytics layer, normalize these into a common schema before storing them.

// Normalized email event schema
interface NormalizedEmailEvent {
  id: string;
  provider: "brew" | "sendgrid" | "postmark" | "mailgun";
  type: "delivered" | "opened" | "clicked" | "bounced" | "complained" | "unsubscribed";
  email: string;
  messageId: string;
  campaignId?: string;
  timestamp: string;
  metadata: Record<string, unknown>;
}

// Provider-specific normalizers
function normalizeBrewEvent(raw: any): NormalizedEmailEvent {
  return {
    id: raw.id,
    provider: "brew",
    type: raw.type.replace("email.", "") as NormalizedEmailEvent["type"],
    email: raw.recipient,
    messageId: raw.messageId,
    campaignId: raw.tags?.campaignId,
    timestamp: raw.timestamp,
    metadata: {
      subject: raw.subject,
      link: raw.type === "email.clicked" ? raw.url : undefined,
      bounceType: raw.type === "email.bounced" ? raw.bounceType : undefined
    }
  };
}

function normalizeSendGridEvent(raw: any): NormalizedEmailEvent {
  return {
    id: raw.sg_event_id,
    provider: "sendgrid",
    type: mapSendGridEventType(raw.event),
    email: raw.email,
    messageId: raw.sg_message_id,
    campaignId: raw.marketing_campaign_id?.toString(),
    timestamp: new Date(raw.timestamp * 1000).toISOString(),
    metadata: {
      subject: raw.subject,
      link: raw.url,
      bounceType: raw.type
    }
  };
}
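The `mapSendGridEventType` helper referenced above translates SendGrid's event names into the normalized types. A sketch based on SendGrid's documented webhook event names; treating `dropped` as a bounce is an assumption you may want to adjust if you track suppressions separately:

```typescript
type EventType =
  | "delivered" | "opened" | "clicked"
  | "bounced" | "complained" | "unsubscribed";

// Map SendGrid's webhook event names onto the normalized schema
function mapSendGridEventType(sgEvent: string): EventType {
  const table: Record<string, EventType> = {
    delivered: "delivered",
    open: "opened",
    click: "clicked",
    bounce: "bounced",
    dropped: "bounced",          // assumption: count drops as bounces
    spamreport: "complained",
    unsubscribe: "unsubscribed",
    group_unsubscribe: "unsubscribed"
  };
  const mapped = table[sgEvent];
  if (!mapped) throw new Error(`Unmapped SendGrid event: ${sgEvent}`);
  return mapped;
}
```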

Data Storage and Querying

For email analytics, you need two storage layers: a transactional database for real-time suppression checks and a time-series or columnar store for analytical queries. PostgreSQL handles both adequately at moderate volumes. At scale, consider TimescaleDB (PostgreSQL extension for time-series), ClickHouse, or BigQuery for the analytical layer.

-- PostgreSQL schema for email analytics
CREATE TABLE email_events (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  provider TEXT NOT NULL,
  event_type TEXT NOT NULL,
  email TEXT NOT NULL,
  message_id TEXT NOT NULL,
  campaign_id TEXT,
  event_timestamp TIMESTAMPTZ NOT NULL,
  metadata JSONB DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for common query patterns
CREATE INDEX idx_events_campaign ON email_events(campaign_id, event_type);
CREATE INDEX idx_events_email ON email_events(email, event_timestamp DESC);
CREATE INDEX idx_events_timestamp ON email_events(event_timestamp DESC);

-- Materialized view for campaign-level metrics
CREATE MATERIALIZED VIEW campaign_metrics AS
SELECT
  campaign_id,
  COUNT(*) FILTER (WHERE event_type = 'delivered') AS delivered,
  COUNT(*) FILTER (WHERE event_type = 'opened') AS opened,
  COUNT(DISTINCT email) FILTER (WHERE event_type = 'opened') AS unique_opens,
  COUNT(*) FILTER (WHERE event_type = 'clicked') AS clicked,
  COUNT(DISTINCT email) FILTER (WHERE event_type = 'clicked') AS unique_clicks,
  COUNT(*) FILTER (WHERE event_type = 'bounced') AS bounced,
  COUNT(*) FILTER (WHERE event_type = 'complained') AS complaints,
  COUNT(*) FILTER (WHERE event_type = 'unsubscribed') AS unsubscribes
FROM email_events
GROUP BY campaign_id;

-- Refresh periodically
REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_metrics;
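With the materialized view in place, campaign-level rates fall out of a simple query. A sketch; `NULLIF` guards against campaigns with zero delivered events:

```sql
-- Derive rates from the materialized view
SELECT
  campaign_id,
  ROUND(100.0 * unique_opens / NULLIF(delivered, 0), 1) AS open_rate,
  ROUND(100.0 * unique_clicks / NULLIF(delivered, 0), 1) AS click_rate,
  ROUND(100.0 * bounced / NULLIF(delivered + bounced, 0), 2) AS bounce_rate
FROM campaign_metrics
ORDER BY delivered DESC;
```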

At higher volumes (millions of events per day), materialized views become expensive to refresh. Consider moving to ClickHouse, which handles aggregation queries over billions of rows in milliseconds. Brew's analytics API exposes pre-aggregated metrics that can supplement your custom pipeline, reducing the need to process raw events for common queries.
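If you do migrate the analytical layer, the PostgreSQL table translates fairly directly. A sketch of an equivalent ClickHouse table, assuming a MergeTree engine; the ORDER BY key is an assumption and should match your most common query patterns:

```sql
-- Sketch: equivalent event table in ClickHouse (column names mirror the
-- PostgreSQL schema; monthly partitions keep merges and TTLs manageable)
CREATE TABLE email_events (
  id UUID,
  provider LowCardinality(String),
  event_type LowCardinality(String),
  email String,
  message_id String,
  campaign_id String,
  event_timestamp DateTime64(3)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_timestamp)
ORDER BY (campaign_id, event_type, event_timestamp);
```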

Building the Dashboard UI

For the dashboard frontend, a lightweight approach using React with a charting library like Recharts or Tremor works well. Tremor is particularly suited for analytics dashboards and provides pre-built components for KPI cards, area charts, bar charts, and data tables that look polished with minimal customization.

// Dashboard component with Tremor
import { Card, Metric, Text, AreaChart, BarList } from "@tremor/react";

interface CampaignMetrics {
  delivered: number;
  uniqueOpens: number;
  uniqueClicks: number;
  bounced: number;
  complaints: number;
}

function CampaignDashboard({ metrics }: { metrics: CampaignMetrics }) {
  // "delivered" events already exclude bounces, so the rate compares
  // successful deliveries against total send attempts
  const attempted = metrics.delivered + metrics.bounced;
  const deliveryRate = attempted ? (metrics.delivered / attempted * 100).toFixed(1) : "0.0";
  const clickRate = metrics.delivered ? (metrics.uniqueClicks / metrics.delivered * 100).toFixed(1) : "0.0";

  return (
    <div className="grid grid-cols-1 md:grid-cols-3 gap-4">
      <Card>
        <Text>Delivered</Text>
        <Metric>{metrics.delivered.toLocaleString()}</Metric>
        <Text>{deliveryRate}% delivery rate</Text>
      </Card>
      <Card>
        <Text>Unique Clicks</Text>
        <Metric>{metrics.uniqueClicks.toLocaleString()}</Metric>
        <Text>{clickRate}% click rate</Text>
      </Card>
      <Card>
        <Text>Complaints</Text>
        <Metric>{metrics.complaints}</Metric>
        <Text>{(metrics.complaints / metrics.delivered * 100).toFixed(3)}% rate</Text>
      </Card>
    </div>
  );
}

Connecting Email Data to Product Analytics

The real power of a custom dashboard is joining email engagement data with your product database. Create a pipeline that enriches email events with user attributes: plan tier, signup date, feature usage, account stage. This lets you answer questions that no ESP dashboard can: "What is the click-through rate on changelog emails for users on our Pro plan who signed up in the last 30 days?"

// Enrichment pipeline: join email events with user data
async function enrichEmailEvent(
  event: NormalizedEmailEvent
): Promise<EnrichedEmailEvent> {
  const user = await db.users.findByEmail(event.email);

  return {
    ...event,
    userId: user?.id,
    plan: user?.plan || "unknown",
    accountAge: user ? daysSince(user.createdAt) : null,
    lastLoginDaysAgo: user ? daysSince(user.lastLoginAt) : null,
    featureFlags: user?.featureFlags || [],
    totalApiCalls: user?.metrics?.totalApiCalls || 0
  };
}
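The `daysSince` helper used above is straightforward; one possible implementation (the optional `now` parameter is an addition for testability, not part of the original example):

```typescript
// Whole days elapsed between a past date and now
function daysSince(date: Date | string, now: Date = new Date()): number {
  const then = typeof date === "string" ? new Date(date) : date;
  const msPerDay = 86_400_000;
  return Math.floor((now.getTime() - then.getTime()) / msPerDay);
}
```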

With enriched data, you can build segments and cohort analyses that directly inform your email strategy. If you discover that users on your free plan who click onboarding emails within the first week have a 3x higher conversion rate to paid, that is an actionable insight: optimize the first-week email experience for free users above everything else. This kind of cross-domain analysis is what makes custom dashboards worth the investment.
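As a concrete example of that cross-domain analysis, the earlier question about plan tier and recent signups reduces to a single join, assuming a `users` table with `email`, `plan`, and `created_at` columns (the campaign ID below is hypothetical):

```sql
-- Click-through rate on one campaign, segmented by plan, restricted to
-- users who signed up in the last 30 days
SELECT
  u.plan,
  COUNT(DISTINCT e.email) FILTER (WHERE e.event_type = 'delivered') AS recipients,
  COUNT(DISTINCT e.email) FILTER (WHERE e.event_type = 'clicked') AS clickers,
  ROUND(
    100.0 * COUNT(DISTINCT e.email) FILTER (WHERE e.event_type = 'clicked')
    / NULLIF(COUNT(DISTINCT e.email) FILTER (WHERE e.event_type = 'delivered'), 0),
  1) AS click_rate
FROM email_events e
JOIN users u ON u.email = e.email
WHERE e.campaign_id = 'changelog-2025-07'
  AND u.created_at > NOW() - INTERVAL '30 days'
GROUP BY u.plan;
```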

Alerting and Anomaly Detection

Set up automated alerts for metrics that fall outside expected ranges. Monitor delivery rate (alert below 95%), bounce rate (alert above 2%), complaint rate (alert above 0.1%), and click-through rate (alert on significant drops from the rolling average). A sudden change in any of these metrics usually indicates a problem that needs immediate attention: a deliverability issue, a broken template, or a list quality problem.
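The fixed thresholds above can be encoded in a small check that runs on a schedule. A sketch, with all rates expressed as percentages; the interface and function names are illustrative:

```typescript
interface MetricSnapshot {
  deliveryRate: number;   // percent, e.g. 98.5
  bounceRate: number;     // percent
  complaintRate: number;  // percent
}

// Compare a snapshot against the fixed thresholds from the text and
// return one message per violated threshold
function checkThresholds(m: MetricSnapshot): string[] {
  const alerts: string[] = [];
  if (m.deliveryRate < 95) alerts.push(`Delivery rate low: ${m.deliveryRate}%`);
  if (m.bounceRate > 2) alerts.push(`Bounce rate high: ${m.bounceRate}%`);
  if (m.complaintRate > 0.1) alerts.push(`Complaint rate high: ${m.complaintRate}%`);
  return alerts;
}
```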

For more sophisticated monitoring, implement simple anomaly detection using rolling averages and standard deviations. If today's click-through rate is more than two standard deviations below the 30-day rolling average, trigger an alert. This approach catches gradual degradation that fixed thresholds might miss. Tools like brew.new include built-in anomaly detection on key metrics, which can serve as a complement to your custom alerting system.
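That rolling-average approach can be sketched as a pure function over a trailing window of daily values; `k = 2` matches the two-standard-deviation rule described above:

```typescript
// Flag today's value if it falls more than k standard deviations below
// the mean of the trailing window (e.g. 30 days of click-through rates)
function isAnomalousDrop(history: number[], today: number, k = 2): boolean {
  const mean = history.reduce((sum, v) => sum + v, 0) / history.length;
  const variance =
    history.reduce((sum, v) => sum + (v - mean) ** 2, 0) / history.length;
  const stddev = Math.sqrt(variance);
  return today < mean - k * stddev;
}
```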


Marcus Chen

Senior Email Engineer

Marcus has spent a decade building email infrastructure at scale. He writes about the technical challenges of sending billions of emails reliably.