Increased Latency in NYM2 Datacenter
Incident Report for Xandr
Postmortem

Incident Summary
From 22:30 UTC on Saturday, January 15th, to 05:30 UTC on Sunday, January 16th, SSPs experienced increased latency in our New York datacenter collections.
Scope of Impact
During the incident window, SSP clients may have experienced elevated latency, surfacing as higher timeout rates.
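For illustration only, the sketch below shows one way a downstream integration might bound request latency and treat a timed-out auction as a no-bid rather than a hard failure; the endpoint URL, timeout budget, and function names are assumptions for this example and are not part of Xandr's API.

```python
# Minimal sketch (not Xandr's client library): bound the bid request with an
# explicit timeout and fall back to a no-bid when the exchange is slow.
# The endpoint and timeout values below are illustrative assumptions.
from typing import Optional
import requests

BID_ENDPOINT = "https://example.invalid/openrtb2/auction"  # hypothetical endpoint
TIMEOUT_SECONDS = 0.3  # e.g. a 300 ms tmax-style budget

def request_bid(bid_request: dict) -> Optional[dict]:
    """Send a bid request; return None (no-bid) if the request times out."""
    try:
        resp = requests.post(BID_ENDPOINT, json=bid_request, timeout=TIMEOUT_SECONDS)
        resp.raise_for_status()
        return resp.json()
    except requests.Timeout:
        # During the incident window this branch would fire more often;
        # callers should fall back to other demand rather than retrying blindly.
        return None
```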
Timeline (UTC)
2020-01-15 22:30: Incident Started: mobile traffic moved into the default datacenter collection.
2020-01-15 22:46: Incident Escalated.
2020-01-16 01:00: New load balancers were added to the default collection.
2020-01-16 04:30: Additional load balancers were added.
2020-01-16 04:30: Incident Resolved.
Cause Analysis
The incident was caused by backend maintenance that introduced connectivity issues, which in turn increased latency in the New York datacenter.
Next Steps
We have mitigated the issue that caused the load balancers to overload, and we are continuing to develop long-term solutions to further improve SSP services.
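As a rough illustration of the kind of long-term safeguard this could involve, the sketch below flags load balancers whose p99 latency exceeds a threshold, which could prompt adding capacity earlier; the metric names, threshold, and sample data are assumptions for this example, not details from the incident.

```python
# Illustrative sketch only: assumes latency samples (in ms) are already being
# collected per load balancer; thresholds and names are not from this report.
from statistics import quantiles

P99_THRESHOLD_MS = 250.0  # assumed alerting threshold

def p99(samples: list[float]) -> float:
    """99th percentile of a window of latency samples."""
    return quantiles(samples, n=100)[-1]

def needs_more_capacity(samples_by_lb: dict[str, list[float]]) -> list[str]:
    """Return the load balancers whose p99 latency exceeds the threshold."""
    return [lb for lb, samples in samples_by_lb.items()
            if len(samples) >= 100 and p99(samples) > P99_THRESHOLD_MS]

# Example: lb-2 would be flagged for additional capacity.
print(needs_more_capacity({
    "lb-1": [40.0 + i * 0.5 for i in range(200)],   # p99 well under threshold
    "lb-2": [120.0 + i * 1.5 for i in range(200)],  # p99 above threshold
}))
```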

Posted Jan 28, 2020 - 21:23 UTC

Resolved

The incident has been fully resolved. We apologize for the inconvenience this issue may have caused, and thank you for your continued support.

Posted Jan 16, 2020 - 05:54 UTC
Investigating

We are currently investigating the following issue:

  • Component(s): Ad Serving
  • Impact(s):
    • External Supply Auctions and Ad Serving are timing out
  • Severity: Major Outage
  • Datacenter(s): NYM2

We will provide an update as soon as more information is available. Thank you for your patience.

Posted Jan 16, 2020 - 01:06 UTC