Developer Documentation

Build with ProLayer

Everything you need to send internal team alerts and monitoring notifications across Slack, Telegram, Email, Push, WhatsApp, and In-App channels through a single, unified API. All recipients must be approved, opted-in team members.

Quick Start Guide

Go from zero to your first delivered team alert in under five minutes. Follow these steps to integrate ProLayer into your monitoring and alerting pipeline.

1. Create your account
Sign up for a free ProLayer account at prolayer.io/register. No credit card required. Add your team members as Approved Contacts to start sending alerts.
browser
https://prolayer.io/register

After signing up, you'll land on the dashboard where you can manage API keys, add Approved Contacts (team members), view alert analytics, and configure channels.

2. Get your API key
Navigate to Settings → API Keys and generate a new key. You'll get both production and sandbox keys.
api-keys.txt
Production:  pl_live_sk_a1b2c3d4e5f6g7h8i9j0...
Sandbox:     pl_test_sk_x9y8w7v6u5t4s3r2q1p0...

Security note: Store your API keys in environment variables. Never commit them to version control or expose them in client-side code.
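Following that advice, here is a minimal sketch (the helper name is our own, not part of the SDK) that reads the key from an environment variable and fails fast on misconfiguration, so a bad deploy surfaces at startup rather than on the first send:

load-key.ts

```typescript
// Read the ProLayer API key from the environment and validate its prefix.
// Throws immediately if the variable is missing or malformed.
function requireApiKey(envVar: string = "PROLAYER_API_KEY"): string {
  const key = process.env[envVar];
  if (!key) {
    throw new Error(`Missing required environment variable: ${envVar}`);
  }
  // Key prefixes as documented: pl_live_ (production), pl_test_ (sandbox)
  if (!key.startsWith("pl_live_") && !key.startsWith("pl_test_")) {
    throw new Error(`${envVar} does not look like a ProLayer API key`);
  }
  return key;
}
```

You can then construct the client with `new ProLayer({ apiKey: requireApiKey() })`.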

3. Install the SDK
Use your preferred package manager to install the ProLayer SDK.
npm
npm install @prolayer/sdk
pip
pip install prolayer
go
go get github.com/prolayer/prolayer-go
4. Send your first team alert
Initialize the client and send an alert to any approved team member on any supported channel.
send.ts
import ProLayer from "@prolayer/sdk";

const client = new ProLayer({
  apiKey: process.env.PROLAYER_API_KEY,
});

const alert = await client.notifications.send({
  channel: "slack",
  to: "team_engineering",
  message: "Deploy v2.4.1 completed successfully on production",
  metadata: { service: "api-gateway", version: "2.4.1" },
});

console.log(alert.id);     // "ntf_a1b2c3d4e5f6"
console.log(alert.status); // "sent"
curl
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_a1b2c3d4e5..." \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "slack",
    "to": "team_engineering",
    "message": "Deploy v2.4.1 completed successfully on production"
  }'
5. Verify delivery
Check the alert delivery status via the API or in your dashboard.
check-status.ts
const status = await client.notifications.get("ntf_a1b2c3d4e5f6");

console.log(status);
// {
//   id: "ntf_a1b2c3d4e5f6",
//   channel: "slack",
//   to: "team_engineering",
//   status: "delivered",
//   sent_at: "2025-03-15T14:30:00Z",
//   delivered_at: "2025-03-15T14:30:02Z",
//   acknowledged: true,
//   latency_ms: 2140,
//   credits_used: 1
// }
curl
curl https://api.prolayer.io/v1/notifications/ntf_a1b2c3d4e5f6 \
  -H "Authorization: Bearer pl_live_sk_a1b2c3d4e5..."
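If a script needs to block until the alert reaches a terminal state, a simple polling loop over the status endpoint works. This is a sketch: the status values match the response shown above, and `getStatus` is injected so it can wrap `client.notifications.get`. For production workflows, webhooks are preferable to polling.

poll-status.ts

```typescript
// Poll an alert until it reaches a terminal status ("delivered" or
// "failed"), waiting intervalMs between checks and giving up after
// timeoutMs. getStatus is injected, e.g. (id) => client.notifications.get(id).
async function waitForDelivery(
  getStatus: (id: string) => Promise<{ status: string }>,
  id: string,
  { intervalMs = 2000, timeoutMs = 60000 } = {}
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { status } = await getStatus(id);
    if (status === "delivered" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out waiting for ${id} to reach a terminal status`);
}
```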

Authentication

All API requests must be authenticated using a Bearer token in the Authorization header. Unauthenticated requests return a 401 Unauthorized error.

http-header
Authorization: Bearer pl_live_sk_a1b2c3d4e5f6g7h8i9j0
full-request-example.sh
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_a1b2c3d4e5f6g7h8i9j0" \
  -H "Content-Type: application/json" \
  -H "X-Request-Id: req_unique_123" \
  -d '{ "channel": "slack", "to": "team_engineering", "message": "Deploy v2.4.1 completed on production" }'
Production Keys

Prefixed with pl_live_. Alerts are delivered to approved team members. Use these in your production environment only.

example
pl_live_sk_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
Sandbox Keys

Prefixed with pl_test_. Alerts are simulated — nothing is actually sent. Use these for development and testing.

example
pl_test_sk_x9y8w7v6u5t4s3r2q1p0o9n8m7l6k5j4
Key Rotation Best Practices
  • Rotate production keys every 90 days. Generate a new key before revoking the old one to avoid downtime.
  • Use separate keys for each environment (staging, production) and each service that accesses the API.
  • Enable IP whitelisting in your dashboard to restrict API key usage to known server IPs.
  • Monitor the API key audit log for unexpected usage patterns or requests from unknown IPs.
  • If a key is compromised, revoke it immediately from the dashboard. Revocation takes effect within 30 seconds globally.
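One way to implement the overlap window from the first bullet: resolve the key through a tiny helper that prefers a newly deployed key when present. The PROLAYER_API_KEY_NEXT variable name is our own convention for this sketch, not a ProLayer feature.

resolve-key.ts

```typescript
// During rotation, deploy the new key as PROLAYER_API_KEY_NEXT alongside
// the old PROLAYER_API_KEY; once traffic is verified on the new key,
// promote it to PROLAYER_API_KEY and revoke the old one.
type Env = Record<string, string | undefined>;

function resolveApiKey(env: Env): string {
  const key = env.PROLAYER_API_KEY_NEXT ?? env.PROLAYER_API_KEY;
  if (!key) throw new Error("No ProLayer API key configured");
  return key;
}
```

Construct the client with `new ProLayer({ apiKey: resolveApiKey(process.env) })`.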

SDKs & Libraries

Official client libraries for popular languages. Each SDK wraps the REST API with idiomatic helpers, automatic retries, type safety, and built-in request signing.

Node.js / TypeScript
v3.2.0 · Full TypeScript support
install
npm install @prolayer/sdk
usage.ts
import ProLayer from "@prolayer/sdk";

const pl = new ProLayer({
  apiKey: process.env.PROLAYER_API_KEY,
});

await pl.notifications.send({
  channel: "slack",
  to: "#incidents",
  message: "CPU usage above 90% on prod-server-3",
  metadata: { server: "prod-3", metric: "cpu", value: 92 },
});
Python
v2.8.0 · async/await support
install
pip install prolayer
usage.py
import prolayer

client = prolayer.Client(
    api_key="pl_live_sk_..."
)

client.notifications.send(
    channel="telegram",
    to="team_devops",
    message="Deploy v2.4.1 completed on production 🚀",
)
Ruby
v1.5.0 · Rails integration
install
gem install prolayer
usage.rb
require "prolayer"

client = ProLayer::Client.new(
  api_key: ENV["PROLAYER_API_KEY"]
)

client.notifications.send(
  channel: "push",
  to: "team_oncall",
  title: "Incident Escalation",
  body: "P1 incident unacknowledged for 10 min. Escalating to secondary on-call."
)
Go
v1.3.0 · context-aware
install
go get github.com/prolayer/prolayer-go
main.go
package main

import (
    "context"
    prolayer "github.com/prolayer/prolayer-go"
)

func main() {
    client := prolayer.NewClient("pl_live_sk_...")

    client.Notifications.Send(context.Background(),
        &prolayer.SendParams{
            Channel: "slack",
            To:      "#alerts",
            Message: "Server CPU > 90%",
        },
    )
}
PHP
v2.1.0 · Laravel compatible
install
composer require prolayer/prolayer-php
usage.php
<?php
use ProLayer\Client;

$client = new Client(
    getenv('PROLAYER_API_KEY')
);

$client->notifications->send([
    'channel' => 'in_app',
    'to' => 'team_engineering',
    'title' => 'CI/CD Pipeline Failed',
    'body' => 'Build #4821 failed on main branch. Check logs for details.',
]);
REST API
No SDK needed

Prefer calling the API directly? Every endpoint is accessible via standard HTTP requests. Use any HTTP client in any language.

Base URL

https://api.prolayer.io/v1

Channels

ProLayer supports six alert channels through a single unified API. Each channel is optimized for internal team alerts and monitoring. All recipients must be approved, opted-in team members.

WhatsApp
Critical alerts to on-call engineers via WhatsApp Business API. Ideal for high-severity incidents that need immediate attention, even when team members are away from their workstations. All recipients must be approved team contacts.

Required fields

  • to: approved contact phone in E.164 format
  • message: alert body or template reference
  • channel: "whatsapp"
whatsapp-request.json
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "whatsapp",
  "to": "team_oncall",
  "message": "CRITICAL: Database connection pool exhausted on prod-db-3. Active connections: 500/500.",
  "severity": "critical",
  "buttons": [
    { "type": "url", "text": "Open Runbook", "url": "https://wiki.internal/runbooks/db-pool" }
  ]
}
Telegram
Team channel alerts for deployment and CI/CD updates via the Telegram Bot API. Supports Markdown/HTML formatting, inline keyboards, and media attachments. Perfect for DevOps pipelines and build status notifications.

Required fields

  • to: team group chat ID or approved contact
  • message: alert body
  • channel: "telegram"
telegram-request.json
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "telegram",
  "to": "team_devops",
  "message": "🚀 *Deploy v2.4.1 completed*\nService: api-gateway\nEnvironment: production\nStatus: healthy",
  "parse_mode": "Markdown",
  "reply_markup": {
    "inline_keyboard": [
      [{ "text": "View Deployment", "url": "https://ci.internal/deploys/2841" }]
    ]
  }
}
Email
Incident summaries and daily monitoring digests delivered to your team. Supports HTML content, attachments, custom headers, and reply-to addresses. Built-in DKIM signing and SPF alignment. Ideal for post-incident reports and weekly SLA summaries.

Required fields

  • to: approved team member email
  • subject: alert subject line
  • body: HTML or plain text
  • channel: "email"
email-request.json
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "email",
  "to": "oncall@company.com",
  "from": "alerts@yourcompany.com",
  "subject": "Incident Report: API Latency Spike — 2025-03-15",
  "body": "<h1>Incident Summary</h1><p>P95 latency exceeded 2s for 12 minutes. Root cause: connection pool saturation on prod-db-3.</p>",
  "attachments": [
    { "filename": "incident-timeline.pdf", "content_base64": "JVBERi0xLjQ..." }
  ]
}
Push
Urgent alerts when team members are away from their desk. Native push notifications to iOS (APNs) and Android (FCM) devices. Supports rich media, action buttons, badges, and sounds. Ensures critical incidents reach on-call engineers immediately.

Required fields

  • to: approved team member ID
  • title: alert title
  • body: alert body
  • channel: "push"
push-request.json
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "push",
  "to": "team_oncall",
  "title": "P1 Incident: Payment Service Down",
  "body": "Payment processing failures detected. Error rate: 43%. Immediate action required.",
  "data": { "incident_id": "inc_9f3a", "severity": "critical" },
  "badge": 1,
  "sound": "alert_critical",
  "priority": "high"
}
Slack
Real-time alerts in team channels with actionable buttons. Post to Slack channels or DM approved team members via the Slack API. Supports Block Kit for rich layouts with severity indicators, metric snapshots, and one-click actions like Acknowledge and Escalate.

Required fields

  • to: Slack channel or approved team member ID
  • message: text fallback
  • channel: "slack"
slack-request.json
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "slack",
  "to": "#incidents",
  "message": "CPU alert on prod-server-3",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "🔴 *CPU Usage Alert*\n• Server: prod-server-3\n• Current: 94%\n• Threshold: 90%\n• Duration: 5 min"
      }
    },
    {
      "type": "actions",
      "elements": [
        { "type": "button", "text": { "type": "plain_text", "text": "Acknowledge" }, "url": "https://app.internal/alerts/ack/12345" },
        { "type": "button", "text": { "type": "plain_text", "text": "View Dashboard" }, "url": "https://grafana.internal/d/server-metrics" }
      ]
    }
  ]
}
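If many alert rules emit the same layout, a small builder keeps the block structure consistent. This is an illustrative helper (the function name is our own); the block shapes are standard Slack Block Kit, matching the request above.

build-blocks.ts

```typescript
// Build a minimal Block Kit payload for a metric alert: one mrkdwn
// section listing the alert fields, plus an Acknowledge action button.
function buildAlertBlocks(opts: {
  title: string;
  fields: Record<string, string>;
  ackUrl: string;
}) {
  const lines = Object.entries(opts.fields)
    .map(([label, value]) => `• ${label}: ${value}`)
    .join("\n");
  return [
    {
      type: "section",
      text: { type: "mrkdwn", text: `🔴 *${opts.title}*\n${lines}` },
    },
    {
      type: "actions",
      elements: [
        {
          type: "button",
          text: { type: "plain_text", text: "Acknowledge" },
          url: opts.ackUrl,
        },
      ],
    },
  ];
}
```

Pass the result as the blocks field of the send request.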
In-App
Dashboard alerts and system status notifications delivered directly inside your internal tools using the ProLayer real-time SDK. Supports an alert inbox UI, toast popups, unread badges, and deep links to incident details and runbooks.

Required fields

  • to: approved team member ID
  • title: alert title
  • body: alert body
  • channel: "in_app"
in-app-request.json
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "in_app",
  "to": "team_devops",
  "title": "Deployment Rollback Initiated",
  "body": "v2.4.1 rollback triggered on api-gateway due to elevated error rate. Rolling back to v2.4.0.",
  "action_url": "/dashboard/deployments/2841",
  "avatar": "https://cdn.prolayer.io/icons/rollback.png",
  "category": "deployment"
}

Message Templates

Templates let you define reusable alert content with dynamic variables. Create them once and reference them by ID when sending, passing variable values at send time. Templates support all channels and are ideal for standardized alerts like deployment reports, CPU threshold warnings, and incident escalations.

Create a Template
Define a template with {{variable}} placeholders. Templates are validated on creation to ensure all variables are properly formatted.
create-template.sh
curl -X POST https://api.prolayer.io/v1/templates \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "cpu_alert",
    "channel": "slack",
    "body": "🔴 CPU Alert on {{server}}\nCurrent: {{value}}% (threshold: {{threshold}}%)\nDuration: {{duration}}\nDashboard: {{dashboardUrl}}",
    "variables": ["server", "value", "threshold", "duration", "dashboardUrl"]
  }'
response.json
{
  "id": "tpl_r4s5t6u7v8w9",
  "name": "cpu_alert",
  "channel": "slack",
  "body": "🔴 CPU Alert on {{server}}\nCurrent: {{value}}% (threshold: {{threshold}}%)\nDuration: {{duration}}\nDashboard: {{dashboardUrl}}",
  "variables": ["server", "value", "threshold", "duration", "dashboardUrl"],
  "created_at": "2025-03-15T10:00:00Z",
  "updated_at": "2025-03-15T10:00:00Z"
}
Send with a Template
Reference the template by ID and pass variable values. ProLayer will interpolate the values and deliver the final message.
send-with-template.ts
const alert = await client.notifications.send({
  channel: "slack",
  to: "#infrastructure",
  template_id: "tpl_r4s5t6u7v8w9",
  variables: {
    server: "prod-server-3",
    value: "94",
    threshold: "90",
    duration: "5 minutes",
    dashboardUrl: "https://grafana.internal/d/cpu-metrics",
  },
});
curl
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "slack",
    "to": "#infrastructure",
    "template_id": "tpl_r4s5t6u7v8w9",
    "variables": {
      "server": "prod-server-3",
      "value": "94",
      "threshold": "90",
      "duration": "5 minutes",
      "dashboardUrl": "https://grafana.internal/d/cpu-metrics"
    }
  }'

Rendered output

"🔴 CPU Alert on prod-server-3 — Current: 94% (threshold: 90%) — Duration: 5 minutes — Dashboard: https://grafana.internal/d/cpu-metrics"
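The substitution itself behaves like a straightforward {{name}} replacement. This client-side sketch reproduces it for local previewing of template bodies; it is not ProLayer's server-side renderer.

render-preview.ts

```typescript
// Replace {{name}} placeholders with values from a variables map.
// Unknown placeholders are left intact so a missing variable is
// immediately visible in the rendered output.
function renderTemplate(
  body: string,
  variables: Record<string, string>
): string {
  return body.replace(/\{\{(\w+)\}\}/g, (placeholder, name) =>
    name in variables ? variables[name] : placeholder
  );
}
```

For example, `renderTemplate("CPU Alert on {{server}}", { server: "prod-server-3" })` yields "CPU Alert on prod-server-3".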

Webhooks

Receive real-time alert delivery and acknowledgment status updates by configuring a webhook endpoint. ProLayer will send HTTP POST requests to your URL whenever an alert status changes, enabling automated escalation workflows and audit trails.

Setting Up Webhooks
Register a webhook endpoint via the dashboard or the API. Your endpoint must return a 2xx status within 30 seconds.
register-webhook.sh
curl -X POST https://api.prolayer.io/v1/webhooks \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://yourapp.com/webhooks/prolayer",
    "events": [
      "alert.sent",
      "alert.delivered",
      "alert.acknowledged",
      "alert.failed"
    ],
    "secret": "whsec_your_signing_secret_here"
  }'
Available Events
Event                 Description
alert.sent            Fired when an alert is accepted and dispatched to the channel provider.
alert.delivered       Fired when the channel provider confirms successful delivery to the approved team member.
alert.acknowledged    Fired when a team member acknowledges the alert via an action button or API call.
alert.failed          Fired when delivery fails permanently after all retry attempts are exhausted.
alert.read            Fired when the team member opens or views the alert (channels that support read receipts).
Webhook Payload Format
webhook-payload.json
{
  "id": "evt_m3n4o5p6q7r8",
  "type": "alert.delivered",
  "created_at": "2025-03-15T14:30:02Z",
  "data": {
    "notification_id": "ntf_a1b2c3d4e5f6",
    "channel": "slack",
    "to": "team_engineering",
    "status": "delivered",
    "severity": "warning",
    "acknowledged": false,
    "sent_at": "2025-03-15T14:30:00Z",
    "delivered_at": "2025-03-15T14:30:02Z",
    "metadata": {
      "server": "prod-server-3",
      "metric": "cpu",
      "value": 94
    }
  }
}
Signature Verification
Every webhook request includes an X-ProLayer-Signature header containing an HMAC-SHA256 signature. Always verify this signature to confirm the request came from ProLayer.
verify-signature.ts
import crypto from "crypto";
import express from "express";

const app = express();

function verifyWebhookSignature(
  payload: string,
  signature: string,
  secret: string
): boolean {
  const expected = crypto
    .createHmac("sha256", secret)
    .update(payload, "utf-8")
    .digest("hex");

  // timingSafeEqual throws if the buffers differ in length, so check first
  if (signature.length !== expected.length) return false;

  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expected)
  );
}

// In your webhook handler, verify against the RAW request body:
// re-serializing a parsed body may not byte-match the signed payload.
app.post(
  "/webhooks/prolayer",
  express.raw({ type: "application/json" }),
  (req, res) => {
    const signature = req.headers["x-prolayer-signature"] as string;
    const rawBody = req.body.toString("utf-8");

    const isValid = verifyWebhookSignature(
      rawBody,
      signature,
      process.env.PROLAYER_WEBHOOK_SECRET!
    );

    if (!isValid) {
      return res.status(401).send("Invalid signature");
    }

    // Process the event...
    const { type, data } = JSON.parse(rawBody);
    console.log(`Event: ${type}, Notification: ${data.notification_id}`);

    res.status(200).send("OK");
  }
);
Retry Policy

If your endpoint returns a non-2xx status code or times out, ProLayer will retry delivery up to 3 times with exponential backoff:

  • Retry 1: 30 seconds
  • Retry 2: 5 minutes
  • Retry 3: 30 minutes

After all retries are exhausted, the event is moved to a dead-letter queue. You can replay failed events from the dashboard for up to 30 days.
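Because a retried delivery means your endpoint can receive the same event more than once, process events idempotently, for example by recording handled event IDs. A minimal in-memory sketch follows; in production, use a shared store such as Redis so deduplication survives restarts and multiple instances.

dedupe-events.ts

```typescript
// Track processed webhook event IDs so a retried delivery becomes a no-op.
// An in-memory Set is shown for brevity only.
const processedEvents = new Set<string>();

function handleEventOnce(
  event: { id: string; type: string },
  handler: (event: { id: string; type: string }) => void
): boolean {
  if (processedEvents.has(event.id)) return false; // duplicate delivery, skip
  processedEvents.add(event.id);
  handler(event);
  return true;
}
```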

Error Handling

ProLayer uses conventional HTTP status codes to indicate the success or failure of an API request. Codes in the 2xx range indicate success, 4xx a client error, and 5xx a server error.

Error Response Format
All errors return a consistent JSON structure with a machine-readable code, human-readable message, and the HTTP status.
error-response.json
{
  "error": {
    "type": "invalid_request_error",
    "code": "missing_required_field",
    "message": "The 'to' field is required when sending a notification.",
    "param": "to",
    "status": 400,
    "request_id": "req_a1b2c3d4e5f6"
  }
}
HTTP Status Codes
  • 400 Bad Request: The request body is malformed or missing required fields. Do not retry; fix the request payload.
  • 401 Unauthorized: The API key is missing, expired, or invalid. Do not retry; check your API key.
  • 403 Forbidden: The API key does not have permission for the requested resource or action. Do not retry; check key scopes and permissions.
  • 404 Not Found: The requested resource (notification, template, etc.) does not exist. Do not retry; verify the resource ID.
  • 409 Conflict: A resource with the same unique identifier already exists. Do not retry; use a different identifier or fetch the existing resource.
  • 422 Unprocessable Entity: The request was well-formed but contains semantic errors (e.g. invalid phone number format). Do not retry; fix the validation errors listed in the response.
  • 429 Too Many Requests: You have exceeded the rate limit for your plan. Retry after the time specified in the Retry-After header.
  • 500 Internal Server Error: An unexpected error occurred on our side. Retry with exponential backoff; contact support if persistent.
  • 503 Service Unavailable: The API is temporarily unavailable due to maintenance or overload. Retry with exponential backoff; check status.prolayer.io.
Handling Errors in Code
error-handling.ts
import ProLayer, { ProLayerError } from "@prolayer/sdk";

const client = new ProLayer({ apiKey: process.env.PROLAYER_API_KEY });

try {
  await client.notifications.send({
    channel: "slack",
    to: "team_engineering",
    message: "Deploy v2.4.1 completed on production",
  });
} catch (error) {
  if (error instanceof ProLayerError) {
    switch (error.status) {
      case 400:
        console.error("Bad request:", error.message);
        break;
      case 401:
        console.error("Invalid API key");
        break;
      case 429: {
        const retryAfter = error.headers["retry-after"];
        console.log(`Rate limited. Retry after ${retryAfter}s`);
        break;
      }
      case 500:
      case 503:
        console.error("Server error, retrying...");
        // Implement exponential backoff
        break;
    }
  }
}

Rate Limits

Rate limits protect the API from abuse and ensure fair usage across all teams. Limits vary by plan and are applied per API key.

Limits by Plan
Plan          Requests        Burst          Daily
Free          100 / min       20 / sec       1,000 / day
Starter       500 / min       50 / sec       50,000 / day
Pro           2,000 / min     200 / sec      500,000 / day
Enterprise    10,000 / min    1,000 / sec    Unlimited
Rate Limit Headers
Every API response includes headers that tell you where you stand against your current rate limit.
response-headers
HTTP/1.1 200 OK
X-RateLimit-Limit: 2000
X-RateLimit-Remaining: 1847
X-RateLimit-Reset: 1710510600
Retry-After: 42
  • X-RateLimit-Limit: Maximum number of requests allowed in the current window.
  • X-RateLimit-Remaining: Number of requests remaining in the current window.
  • X-RateLimit-Reset: Unix timestamp (seconds) when the rate limit window resets.
  • Retry-After: Seconds to wait before retrying (only present on 429 responses).
Best Practices
  • Monitor X-RateLimit-Remaining and throttle requests proactively before hitting the limit.
  • Implement exponential backoff with jitter when you receive a 429 response.
  • Use batch endpoints for bulk sends instead of making individual requests in a tight loop.
  • Cache responses where possible to reduce unnecessary API calls (e.g. template lookups).
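The backoff advice above can be sketched as a small retry wrapper. The error shape here (status and retryAfterSeconds fields) is an assumption standing in for whatever your HTTP client or the SDK exposes:

backoff.ts

```typescript
// Retry a request on 429 with exponential backoff plus full jitter.
// When the server supplies Retry-After, honor it exactly; otherwise
// back off exponentially, capped at 30 seconds.
async function withBackoff<T>(
  request: () => Promise<T>,
  maxRetries = 5
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await request();
    } catch (err: any) {
      if (err?.status !== 429 || attempt >= maxRetries) throw err;
      const delayMs =
        err.retryAfterSeconds != null
          ? err.retryAfterSeconds * 1000
          : Math.random() * Math.min(1000 * 2 ** attempt, 30000); // full jitter
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Usage: `await withBackoff(() => client.notifications.send(params))`.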

Pagination

List endpoints use cursor-based pagination for efficient, consistent traversal of large datasets. Cursor pagination avoids the offset drift issues common with page-number pagination.

How It Works
Pass a limit parameter to control page size (default: 20, max: 100). The response includes a cursor field to fetch the next page, and a has_more boolean indicating if more results exist.
first-page.sh
curl "https://api.prolayer.io/v1/notifications?limit=20" \
  -H "Authorization: Bearer pl_live_sk_..."
response.json
{
  "data": [
    {
      "id": "ntf_a1b2c3d4e5f6",
      "channel": "slack",
      "to": "team_engineering",
      "status": "delivered",
      "created_at": "2025-03-15T14:30:00Z"
    },
    {
      "id": "ntf_g7h8i9j0k1l2",
      "channel": "email",
      "to": "oncall@company.com",
      "status": "sent",
      "created_at": "2025-03-15T14:29:55Z"
    }
    // ... 18 more items
  ],
  "cursor": "eyJpZCI6Im50Zl9nN2g4aTlqMGsxbDIiLCJjcmVhdGVkX2F0IjoiMjAyNS0wMy0xNVQxNDoyOTo1NVoifQ==",
  "has_more": true,
  "total": 1482
}
next-page.sh
curl "https://api.prolayer.io/v1/notifications?limit=20&cursor=eyJpZCI6Im50Zl..." \
  -H "Authorization: Bearer pl_live_sk_..."
Iterating All Pages
paginate.ts
async function fetchAllNotifications() {
  const allNotifications = [];
  let cursor: string | undefined;

  do {
    const page = await client.notifications.list({
      limit: 100,
      cursor,
    });

    allNotifications.push(...page.data);
    cursor = page.has_more ? page.cursor : undefined;
  } while (cursor);

  return allNotifications;
}

Idempotency

Prevent duplicate alerts caused by network retries or application bugs by including an Idempotency-Key header with your requests. If we receive a second request with the same key, we return the original response without re-processing.

Using Idempotency Keys
Send a unique key (we recommend a UUID v4) in the Idempotency-Key header. Keys are scoped to your API key and expire after 24 hours.
idempotent-request.sh
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: 550e8400-e29b-41d4-a716-446655440000" \
  -d '{
    "channel": "slack",
    "to": "team_engineering",
    "message": "Deploy v2.4.1 completed successfully on production",
    "severity": "info"
  }'
idempotent-request.ts
import { v4 as uuidv4 } from "uuid";

const idempotencyKey = uuidv4();

const alert = await client.notifications.send(
  {
    channel: "slack",
    to: "team_engineering",
    message: "Deploy v2.4.1 completed successfully on production",
    severity: "info",
  },
  {
    idempotencyKey,
  }
);

// Sending the same request again with the same key
// returns the original response — no duplicate alert
const duplicate = await client.notifications.send(
  {
    channel: "slack",
    to: "team_engineering",
    message: "Deploy v2.4.1 completed successfully on production",
    severity: "info",
  },
  {
    idempotencyKey, // same key — safe to retry
  }
);

console.log(alert.id === duplicate.id); // true

Key Format

Any string up to 255 characters. We recommend UUID v4 for guaranteed uniqueness.

TTL

Keys expire after 24 hours. After expiry, the same key can be reused for a new request.

Scope

Keys are scoped to your API key. Different API keys can use the same idempotency key independently.
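Combining retries with idempotency: generate the key once per logical send and reuse it on every attempt, so a network timeout can never produce a duplicate alert. A sketch using the SDK's option shown above; `randomUUID` is Node's built-in UUID v4 generator.

retry-send.ts

```typescript
import { randomUUID } from "crypto";

// Retry a send safely: one idempotency key per logical alert,
// reused across attempts so retries cannot double-send.
async function sendWithRetry<T>(
  send: (idempotencyKey: string) => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  const idempotencyKey = randomUUID(); // generated once, reused below
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await send(idempotencyKey);
    } catch (err) {
      lastError = err; // e.g. network timeout; safe to retry with same key
    }
  }
  throw lastError;
}
```

Usage: `await sendWithRetry((key) => client.notifications.send(params, { idempotencyKey: key }))`.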

Testing & Sandbox

The sandbox environment lets you test your alerting integration without sending real alerts to team members or incurring charges. Sandbox requests use pl_test_ API keys and behave identically to production, except alerts are simulated.

How Sandbox Mode Works
  • Use your pl_test_ API key — the same endpoints and request format apply.
  • No alerts are actually delivered. All channel providers are mocked.
  • Webhook events are still fired, so you can test your webhook handler end-to-end.
  • No credits are consumed. Sandbox usage does not count toward rate limits.
  • Simulated delivery takes 1–3 seconds to mimic real-world latency.
Test Credentials
Use these magic values in sandbox mode to test specific scenarios.
Recipient Value            Simulated Status     Use Case
+10000000000               delivered            Test successful alert delivery flow
+10000000001               failed               Test alert failure handling and escalation retries
+10000000002               pending (delayed)    Test timeout and delayed alert delivery
bounce@test.prolayer.io    bounced              Test email alert bounce handling
read@test.prolayer.io      read                 Test alert acknowledgment and read receipt processing
Triggering Specific Statuses
test-failure.ts
// Force an alert delivery failure in sandbox mode
const failed = await client.notifications.send({
  channel: "slack",
  to: "+10000000001", // magic value → simulates failure
  message: "CPU alert: this will fail in sandbox",
  severity: "warning",
});

console.log(failed.status); // "failed"
console.log(failed.error);  // { code: "delivery_failed", message: "Simulated failure" }
test-delayed.ts
// Simulate a delayed alert delivery (takes ~10 seconds in sandbox)
const delayed = await client.notifications.send({
  channel: "telegram",
  to: "+10000000002", // magic value → simulates delay
  message: "Incident escalation: this will be delayed in sandbox",
  severity: "critical",
});

console.log(delayed.status); // "pending"

// Poll or use webhooks to track when it transitions to "delivered"

Migration Guide

Switching to ProLayer from another alerting provider? Follow these steps for a smooth transition with zero downtime.

  1. Audit your current alerting integration
     List all channels, alert templates, and webhook handlers in your existing setup. Map each to the equivalent ProLayer feature.

  2. Set up ProLayer in parallel
     Install the SDK and configure your API keys. Recreate your templates in ProLayer and point webhook URLs to new endpoints.

  3. Test in sandbox mode
     Use pl_test_ keys to verify all channels, templates, and webhook handlers work correctly before going live.

  4. Migrate alert traffic gradually
     Use a feature flag to route a percentage of alerts through ProLayer. Start at 5%, monitor delivery metrics, and ramp up to 100%.

  5. Decommission the old provider
     Once 100% of traffic is flowing through ProLayer and metrics are stable, remove the old provider's SDK, revoke their API keys, and clean up any legacy webhook endpoints.

Provider-Specific Mapping
migration-adapter.ts
// Example: wrapping ProLayer behind an adapter for gradual migration

import ProLayer from "@prolayer/sdk";

interface AlertProvider {
  send(params: {
    channel: string;
    to: string;
    message: string;
    severity?: string;
  }): Promise<{ id: string; status: string }>;
}

class ProLayerProvider implements AlertProvider {
  private client: ProLayer;

  constructor(apiKey: string) {
    this.client = new ProLayer({ apiKey });
  }

  async send(params: { channel: string; to: string; message: string; severity?: string }) {
    const result = await this.client.notifications.send(params);
    return { id: result.id, status: result.status };
  }
}

// Use a feature flag to switch providers
const provider: AlertProvider = featureFlags.useProLayer
  ? new ProLayerProvider(process.env.PROLAYER_API_KEY!)
  : legacyAlertProvider;

await provider.send({
  channel: "slack",
  to: "team_engineering",
  message: "CPU alert: prod-server-3 at 94%",
  severity: "warning",
});
Ready to build?

Start sending team alerts today

Create your free account and deliver your first team alert in under 5 minutes. No credit card required. Or explore the full API reference.

Questions? Reach us at support@prolayer.io or join our Discord community.