Build with ProLayer
Everything you need to send internal team alerts and monitoring notifications across Slack, Telegram, Email, Push, WhatsApp, and In-App channels through a single, unified API. All recipients must be approved, opted-in team members.
Quick Start Guide
Go from zero to your first delivered team alert in under five minutes. Follow these steps to integrate ProLayer into your monitoring and alerting pipeline.
Create your account at https://prolayer.io/register. After signing up, you'll land on the dashboard where you can manage API keys, add Approved Contacts (team members), view alert analytics, and configure channels.
Grab your API keys from the dashboard:

```
Production: pl_live_sk_a1b2c3d4e5f6g7h8i9j0...
Sandbox:    pl_test_sk_x9y8w7v6u5t4s3r2q1p0...
```

Security note: Store your API keys in environment variables. Never commit them to version control or expose them in client-side code.
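Since live and test keys differ only by prefix, a small startup guard can catch a production deploy that accidentally ships a sandbox key. A minimal sketch — the helper names and error handling below are our own illustration, not part of the SDK:

```typescript
// Classify a ProLayer API key by its prefix.
// "pl_live_sk_" → real delivery, "pl_test_sk_" → sandbox simulation.
type KeyMode = "live" | "test";

function detectKeyMode(apiKey: string): KeyMode {
  if (apiKey.startsWith("pl_live_sk_")) return "live";
  if (apiKey.startsWith("pl_test_sk_")) return "test";
  throw new Error("Unrecognized ProLayer API key format");
}

// Fail fast at boot if production is configured with a sandbox key.
function assertProductionKey(apiKey: string, env: string): void {
  if (env === "production" && detectKeyMode(apiKey) !== "live") {
    throw new Error("Production environment requires a pl_live_sk_ key");
  }
}
```

Running this check once at process startup is cheaper than discovering in an incident that all "sent" alerts were simulated.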
Install the SDK for your language:

```shell
npm install @prolayer/sdk                # Node.js
pip install prolayer                     # Python
go get github.com/prolayer/prolayer-go   # Go
```

Send your first alert:

```typescript
import ProLayer from "@prolayer/sdk";

const client = new ProLayer({
  apiKey: process.env.PROLAYER_API_KEY,
});

const alert = await client.notifications.send({
  channel: "slack",
  to: "#engineering",
  message: "Deploy v2.4.1 completed successfully on production",
  metadata: { service: "api-gateway", version: "2.4.1" },
});

console.log(alert.id); // "ntf_a1b2c3d4e5f6"
console.log(alert.status); // "sent"
```

Or send it with a raw HTTP request:

```shell
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_a1b2c3d4e5..." \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "slack",
    "to": "team_engineering",
    "message": "Deploy v2.4.1 completed successfully on production"
  }'
```

Check the delivery status of your alert:

```typescript
const status = await client.notifications.get("ntf_a1b2c3d4e5f6");
console.log(status);
// {
//   id: "ntf_a1b2c3d4e5f6",
//   channel: "slack",
//   to: "team_engineering",
//   status: "delivered",
//   sent_at: "2025-03-15T14:30:00Z",
//   delivered_at: "2025-03-15T14:30:02Z",
//   acknowledged: true,
//   latency_ms: 2140,
//   credits_used: 1
// }
```

```shell
curl https://api.prolayer.io/v1/notifications/ntf_a1b2c3d4e5f6 \
  -H "Authorization: Bearer pl_live_sk_a1b2c3d4e5..."
```

Authentication
All API requests must be authenticated using a Bearer token in the Authorization header. Unauthenticated requests return a 401 Unauthorized error.
```
Authorization: Bearer pl_live_sk_a1b2c3d4e5f6g7h8i9j0
```

```shell
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_a1b2c3d4e5f6g7h8i9j0" \
  -H "Content-Type: application/json" \
  -H "X-Request-Id: req_unique_123" \
  -d '{ "channel": "slack", "to": "team_engineering", "message": "Deploy v2.4.1 completed on production" }'
```

Live keys are prefixed with pl_live_. Alerts are delivered to approved team members. Use these in your production environment only.

```
pl_live_sk_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
```

Test keys are prefixed with pl_test_. Alerts are simulated — nothing is actually sent. Use these for development and testing.

```
pl_test_sk_x9y8w7v6u5t4s3r2q1p0o9n8m7l6k5j4
```

Key security best practices:

- Rotate production keys every 90 days. Generate a new key before revoking the old one to avoid downtime.
- Use separate keys for each environment (staging, production) and each service that accesses the API.
- Enable IP whitelisting in your dashboard to restrict API key usage to known server IPs.
- Monitor the API key audit log for unexpected usage patterns or requests from unknown IPs.
- If a key is compromised, revoke it immediately from the dashboard. Revocation takes effect within 30 seconds globally.
SDKs & Libraries
Official client libraries for popular languages. Each SDK wraps the REST API with idiomatic helpers, automatic retries, type safety, and built-in request signing.
Node.js

```shell
npm install @prolayer/sdk
```

```typescript
import ProLayer from "@prolayer/sdk";

const pl = new ProLayer({
  apiKey: process.env.PROLAYER_API_KEY,
});

await pl.notifications.send({
  channel: "slack",
  to: "#incidents",
  message: "CPU usage above 90% on prod-server-3",
  metadata: { server: "prod-3", metric: "cpu", value: 92 },
});
```

Python

```shell
pip install prolayer
```

```python
import prolayer

client = prolayer.Client(
    api_key="pl_live_sk_..."
)

client.notifications.send(
    channel="telegram",
    to="team_devops",
    message="Deploy v2.4.1 completed on production 🚀",
)
```

Ruby

```shell
gem install prolayer
```

```ruby
require "prolayer"

client = ProLayer::Client.new(
  api_key: ENV["PROLAYER_API_KEY"]
)

client.notifications.send(
  channel: "push",
  to: "team_oncall",
  title: "Incident Escalation",
  body: "P1 incident unacknowledged for 10 min. Escalating to secondary on-call."
)
```

Go

```shell
go get github.com/prolayer/prolayer-go
```

```go
package main

import (
	"context"

	prolayer "github.com/prolayer/prolayer-go"
)

func main() {
	client := prolayer.NewClient("pl_live_sk_...")
	client.Notifications.Send(context.Background(),
		&prolayer.SendParams{
			Channel: "slack",
			To:      "#alerts",
			Message: "Server CPU > 90%",
		},
	)
}
```

PHP

```shell
composer require prolayer/prolayer-php
```

```php
<?php
use ProLayer\Client;

$client = new Client(
    getenv('PROLAYER_API_KEY')
);

$client->notifications->send([
    'channel' => 'in_app',
    'to' => 'team_engineering',
    'title' => 'CI/CD Pipeline Failed',
    'body' => 'Build #4821 failed on main branch. Check logs for details.',
]);
```

Prefer calling the API directly? Every endpoint is accessible via standard HTTP requests. Use any HTTP client in any language.
Base URL
https://api.prolayer.io/v1
Channels
ProLayer supports six alert channels through a single unified API. Each channel is optimized for internal team alerts and monitoring. All recipients must be approved, opted-in team members.
WhatsApp

Required fields:
- channel — "whatsapp"
- to — approved contact phone in E.164 format
- message — alert body or template reference

```
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "whatsapp",
  "to": "team_oncall",
  "message": "CRITICAL: Database connection pool exhausted on prod-db-3. Active connections: 500/500.",
  "severity": "critical",
  "buttons": [
    { "type": "url", "text": "Open Runbook", "url": "https://wiki.internal/runbooks/db-pool" }
  ]
}
```

Telegram

Required fields:
- channel — "telegram"
- to — team group chat ID or approved contact
- message — alert body
```
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "telegram",
  "to": "team_devops",
  "message": "🚀 *Deploy v2.4.1 completed*\nService: api-gateway\nEnvironment: production\nStatus: healthy",
  "parse_mode": "Markdown",
  "reply_markup": {
    "inline_keyboard": [
      [{ "text": "View Deployment", "url": "https://ci.internal/deploys/2841" }]
    ]
  }
}
```

Email

Required fields:
- channel — "email"
- to — approved team member email
- subject — alert subject line
- body — HTML or plain text
```
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "email",
  "to": "oncall@company.com",
  "from": "alerts@yourcompany.com",
  "subject": "Incident Report: API Latency Spike — 2025-03-15",
  "body": "<h1>Incident Summary</h1><p>P95 latency exceeded 2s for 12 minutes. Root cause: connection pool saturation on prod-db-3.</p>",
  "attachments": [
    { "filename": "incident-timeline.pdf", "content_base64": "JVBERi0xLjQ..." }
  ]
}
```

Push

Required fields:
- channel — "push"
- to — approved team member ID
- title — alert title
- body — alert body
```
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "push",
  "to": "team_oncall",
  "title": "P1 Incident: Payment Service Down",
  "body": "Payment processing failures detected. Error rate: 43%. Immediate action required.",
  "data": { "incident_id": "inc_9f3a", "severity": "critical" },
  "badge": 1,
  "sound": "alert_critical",
  "priority": "high"
}
```

Slack

Required fields:
- channel — "slack"
- to — Slack channel or approved team member ID
- message — text fallback
```
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "slack",
  "to": "#incidents",
  "message": "CPU alert on prod-server-3",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "🔴 *CPU Usage Alert*\n• Server: prod-server-3\n• Current: 94%\n• Threshold: 90%\n• Duration: 5 min"
      }
    },
    {
      "type": "actions",
      "elements": [
        { "type": "button", "text": { "type": "plain_text", "text": "Acknowledge" }, "url": "https://app.internal/alerts/ack/12345" },
        { "type": "button", "text": { "type": "plain_text", "text": "View Dashboard" }, "url": "https://grafana.internal/d/server-metrics" }
      ]
    }
  ]
}
```

In-App

Required fields:
- channel — "in_app"
- to — approved team member ID
- title — alert title
- body — alert body
```
POST https://api.prolayer.io/v1/notifications/send
Content-Type: application/json
Authorization: Bearer pl_live_sk_...

{
  "channel": "in_app",
  "to": "team_devops",
  "title": "Deployment Rollback Initiated",
  "body": "v2.4.1 rollback triggered on api-gateway due to elevated error rate. Rolling back to v2.4.0.",
  "action_url": "/dashboard/deployments/2841",
  "avatar": "https://cdn.prolayer.io/icons/rollback.png",
  "category": "deployment"
}
```

Message Templates
Templates let you define reusable alert content with dynamic variables. Create them once and reference them by ID when sending, passing variable values at send time. Templates support all channels and are ideal for standardized alerts like deployment reports, CPU threshold warnings, and incident escalations.
Templates use {{variable}} placeholders. Templates are validated on creation to ensure all variables are properly formatted.

```shell
curl -X POST https://api.prolayer.io/v1/templates \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "cpu_alert",
    "channel": "slack",
    "body": "🔴 CPU Alert on {{server}}\nCurrent: {{value}}% (threshold: {{threshold}}%)\nDuration: {{duration}}\nDashboard: {{dashboardUrl}}",
    "variables": ["server", "value", "threshold", "duration", "dashboardUrl"]
  }'
```

Response:

```json
{
  "id": "tpl_r4s5t6u7v8w9",
  "name": "cpu_alert",
  "channel": "slack",
  "body": "🔴 CPU Alert on {{server}}\nCurrent: {{value}}% (threshold: {{threshold}}%)\nDuration: {{duration}}\nDashboard: {{dashboardUrl}}",
  "variables": ["server", "value", "threshold", "duration", "dashboardUrl"],
  "created_at": "2025-03-15T10:00:00Z",
  "updated_at": "2025-03-15T10:00:00Z"
}
```

Send an alert from the template by referencing its ID and passing variable values:

```typescript
const alert = await client.notifications.send({
  channel: "slack",
  to: "#infrastructure",
  template_id: "tpl_r4s5t6u7v8w9",
  variables: {
    server: "prod-server-3",
    value: "94",
    threshold: "90",
    duration: "5 minutes",
    dashboardUrl: "https://grafana.internal/d/cpu-metrics",
  },
});
```

```shell
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "slack",
    "to": "#infrastructure",
    "template_id": "tpl_r4s5t6u7v8w9",
    "variables": {
      "server": "prod-server-3",
      "value": "94",
      "threshold": "90",
      "duration": "5 minutes",
      "dashboardUrl": "https://grafana.internal/d/cpu-metrics"
    }
  }'
```

Rendered output
"🔴 CPU Alert on prod-server-3 — Current: 94% (threshold: 90%) — Duration: 5 minutes — Dashboard: https://grafana.internal/d/cpu-metrics"
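ProLayer performs the rendering server-side, but the substitution itself is easy to reason about. A minimal sketch of {{variable}} substitution (our own illustration, not the service's implementation):

```typescript
// Substitute {{variable}} placeholders in a template body.
// Unknown placeholders are left intact so missing values are easy to spot.
function renderTemplate(body: string, variables: Record<string, string>): string {
  return body.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in variables ? variables[name] : match
  );
}

const rendered = renderTemplate(
  "CPU Alert on {{server}}: {{value}}% (threshold: {{threshold}}%)",
  { server: "prod-server-3", value: "94", threshold: "90" }
);
// "CPU Alert on prod-server-3: 94% (threshold: 90%)"
```

Leaving unknown placeholders intact (rather than substituting an empty string) makes a typo in a variable name visible in the delivered alert instead of silently dropping data.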
Webhooks
Receive real-time alert delivery and acknowledgment status updates by configuring a webhook endpoint. ProLayer will send HTTP POST requests to your URL whenever an alert status changes, enabling automated escalation workflows and audit trails.
Your endpoint must respond with a 2xx status within 30 seconds.

Register a webhook endpoint:

```shell
curl -X POST https://api.prolayer.io/v1/webhooks \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://yourapp.com/webhooks/prolayer",
    "events": [
      "alert.sent",
      "alert.delivered",
      "alert.acknowledged",
      "alert.failed"
    ],
    "secret": "whsec_your_signing_secret_here"
  }'
```

| Event | Description |
|---|---|
| alert.sent | Fired when an alert is accepted and dispatched to the channel provider. |
| alert.delivered | Fired when the channel provider confirms successful delivery to the approved team member. |
| alert.acknowledged | Fired when a team member acknowledges the alert via an action button or API call. |
| alert.failed | Fired when delivery fails permanently after all retry attempts are exhausted. |
| alert.read | Fired when the team member opens or views the alert (channels that support read receipts). |
Example event payload:

```json
{
  "id": "evt_m3n4o5p6q7r8",
  "type": "alert.delivered",
  "created_at": "2025-03-15T14:30:02Z",
  "data": {
    "notification_id": "ntf_a1b2c3d4e5f6",
    "channel": "slack",
    "to": "team_engineering",
    "status": "delivered",
    "severity": "warning",
    "acknowledged": false,
    "sent_at": "2025-03-15T14:30:00Z",
    "delivered_at": "2025-03-15T14:30:02Z",
    "metadata": {
      "server": "prod-server-3",
      "metric": "cpu",
      "value": 94
    }
  }
}
```

Every webhook request includes an X-ProLayer-Signature header containing an HMAC-SHA256 signature. Always verify this signature to confirm the request came from ProLayer.

```typescript
import crypto from "crypto";

function verifyWebhookSignature(
  payload: string,
  signature: string,
  secret: string
): boolean {
  const expected = crypto
    .createHmac("sha256", secret)
    .update(payload, "utf-8")
    .digest("hex");
  // Guard the length first: timingSafeEqual throws on unequal-length buffers.
  return (
    signature.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected))
  );
}

// In your webhook handler:
app.post("/webhooks/prolayer", (req, res) => {
  const signature = req.headers["x-prolayer-signature"] as string;
  const isValid = verifyWebhookSignature(
    JSON.stringify(req.body),
    signature,
    process.env.PROLAYER_WEBHOOK_SECRET!
  );
  if (!isValid) {
    return res.status(401).send("Invalid signature");
  }
  // Process the event...
  const { type, data } = req.body;
  console.log(`Event: ${type}, Notification: ${data.notification_id}`);
  res.status(200).send("OK");
});
```

If your endpoint returns a non-2xx status code or times out, ProLayer will retry delivery up to 3 times with exponential backoff:

- Retry 1: after 30 seconds
- Retry 2: after 5 minutes
- Retry 3: after 30 minutes
After all retries are exhausted, the event is moved to a dead-letter queue. You can replay failed events from the dashboard for up to 30 days.
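Because failed deliveries are retried, the same event can reach your handler more than once, so event processing should be idempotent. One common pattern (our own, not prescribed by ProLayer) is to deduplicate on the event id before performing side effects:

```typescript
// Deduplicate webhook events by id before side effects. An in-memory set is
// shown for illustration; production would use a shared store such as Redis
// with a TTL at least as long as the retry window.
interface WebhookEvent {
  id: string;
  type: string;
}

class EventDeduplicator {
  private seen = new Set<string>();

  // Returns true the first time an event id is seen, false on redeliveries.
  shouldProcess(event: WebhookEvent): boolean {
    if (this.seen.has(event.id)) return false;
    this.seen.add(event.id);
    return true;
  }
}

const dedupe = new EventDeduplicator();
dedupe.shouldProcess({ id: "evt_m3n4o5p6q7r8", type: "alert.delivered" }); // true
dedupe.shouldProcess({ id: "evt_m3n4o5p6q7r8", type: "alert.delivered" }); // false (redelivery)
```

Acknowledge the request with a 2xx as soon as the signature checks out, and do any slow processing asynchronously, so redeliveries aren't triggered by your own processing time.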
Error Handling
ProLayer uses conventional HTTP status codes to indicate the success or failure of an API request. Codes in the 2xx range indicate success, 4xx a client error, and 5xx a server error.
```json
{
  "error": {
    "type": "invalid_request_error",
    "code": "missing_required_field",
    "message": "The 'to' field is required when sending a notification.",
    "param": "to",
    "status": 400,
    "request_id": "req_a1b2c3d4e5f6"
  }
}
```

| Code | Meaning | Description | Retry? |
|---|---|---|---|
| 400 | Bad Request | The request body is malformed or missing required fields. | Do not retry. Fix the request payload. |
| 401 | Unauthorized | The API key is missing, expired, or invalid. | Do not retry. Check your API key. |
| 403 | Forbidden | The API key does not have permission for the requested resource or action. | Do not retry. Check key scopes and permissions. |
| 404 | Not Found | The requested resource (notification, template, etc.) does not exist. | Do not retry. Verify the resource ID. |
| 409 | Conflict | A resource with the same unique identifier already exists. | Do not retry. Use a different identifier or fetch the existing resource. |
| 422 | Unprocessable Entity | The request was well-formed but contains semantic errors (e.g. invalid phone number format). | Do not retry. Fix the validation errors listed in the response. |
| 429 | Too Many Requests | You have exceeded the rate limit for your plan. | Retry after the time specified in the Retry-After header. |
| 500 | Internal Server Error | An unexpected error occurred on our side. | Retry with exponential backoff. Contact support if persistent. |
| 503 | Service Unavailable | The API is temporarily unavailable due to maintenance or overload. | Retry with exponential backoff. Check status.prolayer.io. |
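The "Retry?" column above boils down to a small policy: retry only on 429 and 5xx. A sketch of that rule as a helper (the function name is ours):

```typescript
// Encode the retry policy from the status-code table: 4xx errors are caller
// bugs and must not be retried; 429 and 5xx are transient and retryable.
function isRetryable(status: number): boolean {
  return status === 429 || status === 500 || status === 503;
}
```

Centralizing this decision in one function keeps retry loops from accidentally hammering the API with requests that can never succeed, such as a 401 from a revoked key.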
```typescript
import ProLayer, { ProLayerError } from "@prolayer/sdk";

const client = new ProLayer({ apiKey: process.env.PROLAYER_API_KEY });

try {
  await client.notifications.send({
    channel: "slack",
    to: "team_engineering",
    message: "Deploy v2.4.1 completed on production",
  });
} catch (error) {
  if (error instanceof ProLayerError) {
    switch (error.status) {
      case 400:
        console.error("Bad request:", error.message);
        break;
      case 401:
        console.error("Invalid API key");
        break;
      case 429: {
        const retryAfter = error.headers["retry-after"];
        console.log(`Rate limited. Retry after ${retryAfter}s`);
        break;
      }
      case 500:
      case 503:
        console.error("Server error, retrying...");
        // Implement exponential backoff
        break;
    }
  }
}
```

Rate Limits
Rate limits protect the API from abuse and ensure fair usage across all teams. Limits vary by plan and are applied per API key.
| Plan | Requests | Burst | Daily |
|---|---|---|---|
| Free | 100 / min | 20 / sec | 1,000 / day |
| Starter | 500 / min | 50 / sec | 50,000 / day |
| Pro | 2,000 / min | 200 / sec | 500,000 / day |
| Enterprise | 10,000 / min | 1,000 / sec | Unlimited |
Rate limit state is returned in response headers:

```
HTTP/1.1 200 OK
X-RateLimit-Limit: 2000
X-RateLimit-Remaining: 1847
X-RateLimit-Reset: 1710510600
Retry-After: 42
```

- X-RateLimit-Limit: Maximum number of requests allowed in the current window.
- X-RateLimit-Remaining: Number of requests remaining in the current window.
- X-RateLimit-Reset: Unix timestamp (seconds) when the rate limit window resets.
- Retry-After: Seconds to wait before retrying (only present on 429 responses).

Best practices:

- Monitor X-RateLimit-Remaining and throttle requests proactively before hitting the limit.
- Implement exponential backoff with jitter when you receive a 429 response.
- Use batch endpoints for bulk sends instead of making individual requests in a tight loop.
- Cache responses where possible to reduce unnecessary API calls (e.g. template lookups).
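"Exponential backoff with jitter" can be sketched in a few lines. The helper below is our own illustration (base and cap values are arbitrary defaults, not ProLayer requirements); on a 429, prefer the server's Retry-After value and fall back to this for 5xx errors:

```typescript
// Exponential backoff with "full jitter": the window doubles per attempt up
// to a cap, and the actual delay is drawn uniformly from [0, window) so that
// many clients retrying at once don't synchronize into bursts.
function backoffDelayMs(
  attempt: number, // 0-based retry attempt
  baseMs = 1000,
  maxMs = 30000
): number {
  const windowMs = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * windowMs;
}
```

Drawing from the full window (rather than adding a small random offset to a fixed delay) is what breaks up retry stampedes when many workers hit the limit simultaneously.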
Pagination
List endpoints use cursor-based pagination for efficient, consistent traversal of large datasets. Cursor pagination avoids the offset drift issues common with page-number pagination.
Pass a limit parameter to control page size (default: 20, max: 100). The response includes a cursor field to fetch the next page, and a has_more boolean indicating if more results exist.

```shell
curl "https://api.prolayer.io/v1/notifications?limit=20" \
  -H "Authorization: Bearer pl_live_sk_..."
```

```json
{
  "data": [
    {
      "id": "ntf_a1b2c3d4e5f6",
      "channel": "slack",
      "to": "team_engineering",
      "status": "delivered",
      "created_at": "2025-03-15T14:30:00Z"
    },
    {
      "id": "ntf_g7h8i9j0k1l2",
      "channel": "email",
      "to": "oncall@company.com",
      "status": "sent",
      "created_at": "2025-03-15T14:29:55Z"
    }
    // ... 18 more items
  ],
  "cursor": "eyJpZCI6Im50Zl9nN2g4aTlqMGsxbDIiLCJjcmVhdGVkX2F0IjoiMjAyNS0wMy0xNVQxNDoyOTo1NVoifQ==",
  "has_more": true,
  "total": 1482
}
```

Fetch the next page by passing the cursor back:

```shell
curl "https://api.prolayer.io/v1/notifications?limit=20&cursor=eyJpZCI6Im50Zl..." \
  -H "Authorization: Bearer pl_live_sk_..."
```

Iterate through all pages with the SDK:

```typescript
async function fetchAllNotifications() {
  const allNotifications = [];
  let cursor: string | undefined;

  do {
    const page = await client.notifications.list({
      limit: 100,
      cursor,
    });
    allNotifications.push(...page.data);
    cursor = page.has_more ? page.cursor : undefined;
  } while (cursor);

  return allNotifications;
}
```

Idempotency
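Treat cursors as opaque tokens and always pass them back unchanged. For intuition only, a keyset cursor of the shape shown in the response above can be modeled as base64-encoded JSON pointing at the last item's sort keys; this sketch is our own illustration, and the real encoding is an internal detail that may change at any time:

```typescript
// Illustration only: a keyset cursor encoding the last item's sort keys.
// Never construct or parse real ProLayer cursors in production code.
interface CursorPayload {
  id: string;
  created_at: string;
}

function encodeCursor(p: CursorPayload): string {
  return Buffer.from(JSON.stringify(p)).toString("base64");
}

function decodeCursor(cursor: string): CursorPayload {
  return JSON.parse(Buffer.from(cursor, "base64").toString("utf-8"));
}

const cursor = encodeCursor({
  id: "ntf_g7h8i9j0k1l2",
  created_at: "2025-03-15T14:29:55Z",
});
const decoded = decodeCursor(cursor);
// decoded.id === "ntf_g7h8i9j0k1l2"
```

Because the cursor pins the position to a concrete item rather than a numeric offset, rows inserted or deleted between page fetches cannot shift or duplicate results, which is the offset-drift problem the section above refers to.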
Prevent duplicate alerts caused by network retries or application bugs by including an Idempotency-Key header with your requests. If we receive a second request with the same key, we return the original response without re-processing.
Pass a unique key in the Idempotency-Key header. Keys are scoped to your API key and expire after 24 hours.

```shell
curl -X POST https://api.prolayer.io/v1/notifications/send \
  -H "Authorization: Bearer pl_live_sk_..." \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: 550e8400-e29b-41d4-a716-446655440000" \
  -d '{
    "channel": "slack",
    "to": "team_engineering",
    "message": "Deploy v2.4.1 completed successfully on production",
    "severity": "info"
  }'
```

```typescript
import { v4 as uuidv4 } from "uuid";

const idempotencyKey = uuidv4();

const alert = await client.notifications.send(
  {
    channel: "slack",
    to: "team_engineering",
    message: "Deploy v2.4.1 completed successfully on production",
    severity: "info",
  },
  {
    idempotencyKey,
  }
);

// Sending the same request again with the same key
// returns the original response — no duplicate alert
const duplicate = await client.notifications.send(
  {
    channel: "slack",
    to: "team_engineering",
    message: "Deploy v2.4.1 completed successfully on production",
    severity: "info",
  },
  {
    idempotencyKey, // same key — safe to retry
  }
);

console.log(alert.id === duplicate.id); // true
```

Key Format: Any string up to 255 characters. We recommend UUID v4 for guaranteed uniqueness.

TTL: Keys expire after 24 hours. After expiry, the same key can be reused for a new request.

Scope: Keys are scoped to your API key. Different API keys can use the same idempotency key independently.
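Random UUIDs only protect you if the caller persists the key across retries. An alternative pattern (our own sketch, not a ProLayer requirement) derives the key from the request content, so independent retries of the same logical alert collide on the same key even after a process restart:

```typescript
import { createHash } from "node:crypto";

// Derive an idempotency key from the logical content of the alert.
// Identical payloads map to the same key, so a crashed-and-restarted job
// cannot double-send within the 24h key TTL.
function contentIdempotencyKey(payload: {
  channel: string;
  to: string;
  message: string;
}): string {
  const canonical = `${payload.channel}\n${payload.to}\n${payload.message}`;
  // SHA-256 hex is 64 characters, well under the 255-character key limit.
  return createHash("sha256").update(canonical).digest("hex");
}
```

Caveat: if you legitimately need to re-send identical text later (a recurring daily report, say), include a distinguishing field such as a date bucket or deploy id in the hashed content.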
Testing & Sandbox
The sandbox environment lets you test your alerting integration without sending real alerts to team members or incurring charges. Sandbox requests use pl_test_ API keys and behave identically to production, except alerts are simulated.
- Use your pl_test_ API key — the same endpoints and request format apply.
- No alerts are actually delivered. All channel providers are mocked.
- Webhook events are still fired, so you can test your webhook handler end-to-end.
- No credits are consumed. Sandbox usage does not count toward rate limits.
- Simulated delivery takes 1–3 seconds to mimic real-world latency.
| Recipient Value | Simulated Status | Use Case |
|---|---|---|
| +10000000000 | delivered | Test successful alert delivery flow |
| +10000000001 | failed | Test alert failure handling and escalation retries |
| +10000000002 | pending (delayed) | Test timeout and delayed alert delivery |
| bounce@test.prolayer.io | bounced | Test email alert bounce handling |
| read@test.prolayer.io | read | Test alert acknowledgment and read receipt processing |
```typescript
// Force an alert delivery failure in sandbox mode
const failed = await client.notifications.send({
  channel: "slack",
  to: "+10000000001", // magic value → simulates failure
  message: "CPU alert: this will fail in sandbox",
  severity: "warning",
});

console.log(failed.status); // "failed"
console.log(failed.error); // { code: "delivery_failed", message: "Simulated failure" }
```

```typescript
// Simulate a delayed alert delivery (takes ~10 seconds in sandbox)
const delayed = await client.notifications.send({
  channel: "telegram",
  to: "+10000000002", // magic value → simulates delay
  message: "Incident escalation: this will be delayed in sandbox",
  severity: "critical",
});

console.log(delayed.status); // "pending"
// Poll or use webhooks to track when it transitions to "delivered"
```

Migration Guide
Switching to ProLayer from another alerting provider? Follow these steps for a smooth transition with zero downtime.
1. Audit your current alerting integration. List all channels, alert templates, and webhook handlers in your existing setup. Map each to the equivalent ProLayer feature.
2. Set up ProLayer in parallel. Install the SDK and configure your API keys. Recreate your templates in ProLayer and point webhook URLs to new endpoints.
3. Test in sandbox mode. Use pl_test_ keys to verify all channels, templates, and webhook handlers work correctly before going live.
4. Migrate alert traffic gradually. Use a feature flag to route a percentage of alerts through ProLayer. Start at 5%, monitor delivery metrics, and ramp up to 100%.
5. Decommission the old provider. Once 100% of traffic is flowing through ProLayer and metrics are stable, remove the old provider's SDK, revoke their API keys, and clean up any legacy webhook endpoints.
```typescript
// Example: wrapping ProLayer behind an adapter for gradual migration
import ProLayer from "@prolayer/sdk";

interface AlertProvider {
  send(params: {
    channel: string;
    to: string;
    message: string;
    severity?: string;
  }): Promise<{ id: string; status: string }>;
}

class ProLayerProvider implements AlertProvider {
  private client: ProLayer;

  constructor(apiKey: string) {
    this.client = new ProLayer({ apiKey });
  }

  async send(params: { channel: string; to: string; message: string; severity?: string }) {
    const result = await this.client.notifications.send(params);
    return { id: result.id, status: result.status };
  }
}

// Use a feature flag to switch providers
// (featureFlags and legacyAlertProvider come from your own codebase)
const provider: AlertProvider = featureFlags.useProLayer
  ? new ProLayerProvider(process.env.PROLAYER_API_KEY!)
  : legacyAlertProvider;

await provider.send({
  channel: "slack",
  to: "team_engineering",
  message: "CPU alert: prod-server-3 at 94%",
  severity: "warning",
});
```

Start sending team alerts today
Create your free account and deliver your first team alert in under 5 minutes. No credit card required. Or explore the full API reference.
Questions? Reach us at support@prolayer.io or join our Discord community.