mnubo status

February 2017

Sandbox degraded service
Service resumed.
Feb 22, 14:48-14:53 EST
mnubo Sandbox - authentication issue
An authentication issue was identified and resolved in our Sandbox environment. Time of the incident: 2:30 PM - 3 PM EST.
Feb 20, 15:11 EST
[Scheduled] mnubo SmartObjects - Platform planned maintenance
Maintenance is done
Feb 16, 00:29-01:53 EST
Service issue in sandbox
This incident has been resolved.
Feb 6, 11:32-15:45 EST
[Scheduled] mnubo SmartObjects restitution layer - data availability lag
The scheduled maintenance has been completed.
Feb 2, 00:10-01:52 EST

January 2017

50x returned by ingestion.
An infrastructure issue at our upstream provider disabled ingestion between 13:30 UTC and 13:49 UTC. However, no data loss is expected, as the platform returned the proper error code.
Jan 31, 08:44-12:52 EST
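Because ingestion answered with 50x codes during this window, clients that retry on server errors would not lose events: the 50x response tells the producer the event was not accepted and should be resent. Below is a minimal client-side retry sketch, assuming a hypothetical ingestion URL, token, and payload (placeholders only, not the actual mnubo API):

    import time
    import requests

    # Hypothetical endpoint and token -- placeholders, not the real mnubo ingestion API.
    INGESTION_URL = "https://example.invalid/events"
    AUTH_TOKEN = "YOUR_ACCESS_TOKEN"

    def send_event(event, max_retries=5, backoff_seconds=2.0):
        """Post one event, retrying with exponential backoff on HTTP 5xx responses."""
        for attempt in range(max_retries):
            response = requests.post(
                INGESTION_URL,
                json=event,
                headers={"Authorization": "Bearer " + AUTH_TOKEN},
                timeout=10,
            )
            if response.status_code < 500:
                # 2xx means the event was accepted; 4xx is a client error that
                # retrying will not fix, so surface it to the caller.
                response.raise_for_status()
                return response
            # 50x: the platform did not accept the event, so it is safe to resend.
            time.sleep(backoff_seconds * (2 ** attempt))
        raise RuntimeError("Ingestion still unavailable after retries")

    if __name__ == "__main__":
        send_event({"device_id": "device-1", "temperature": 21.5})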
mnubo SmartObjects Ingestion layer: degraded performance
On Friday, January 27 at 05:02 UTC, ingestion started experiencing problems; the situation was automatically resolved shortly after 05:13 UTC. No data loss should have occurred. Please note that applications requiring near real-time data should no longer be impaired.
Jan 27, 00:59 EST
Sandbox ingestion errors.
Everything is stable.
Jan 19, 13:47-14:58 EST
DNS propagation issue
DNS propagation is back on track.
Jan 13, 15:22-15:26 EST
Restitution issues in sandbox. HTTP 50x returned.
Everything is back to normal.
Jan 6, 16:03-17:06 EST

December 2016

[Scheduled] mnubo SmartObjects Ingestion layer: degraded performance (data availability lag)
The scheduled maintenance has been completed.
Dec 21, 22:00-22:30 EST
Response time slowdown.
This incident has been resolved.
Dec 13, 17:53 - Dec 14, 14:04 EST
mnubo SmartObjects: degraded performance
Incident is resolved. Our cloud provider experienced a short network partition that caused our coordination service to lose quorum. As a protective measure, applications started returning errors to prevent data loss. Once the partition cleared, everything recovered and service resumed as expected.
Dec 13, 09:46-11:13 EST

November 2016

Unusually long response times in restitution API.
We saw cases of delayed response times in the restitution API, caused by a micro-service returning unusually long response times. The issue was observed around 11:20 Eastern and resolved around 11:30 Eastern.
Nov 30, 11:32 EST
Cassandra cluster issue.
The issue has been resolved, and the diagnostics component that caused it has been disabled.
Nov 17, 15:39-16:46 EST
Indexing cluster issue
All our monitoring metrics are now back to normal.
Nov 15, 23:25-23:47 EST