Beginning around 2014-04-10 20:00 UTC (approximately 22 hours ago), we noticed elevated memory usage on our primary database server. While primary API services continue to operate normally, we have temporarily disabled recurring transaction processing. We are working diligently to pinpoint the problem and return memory usage to normal levels without restarting the database server. However, we have scheduled a downtime window from 2014-04-12 02:00-03:00 UTC (10pm-11pm EDT / 7pm-8pm PDT tonight) to reboot the primary database server if necessary. Once the database layer is stable, recurring processing will be re-enabled and delayed transactions will be queued for processing.
We expect any resulting downtime to be 30 seconds or less. However, if an issue requires failover to our backup database server, downtime could last as long as 10 minutes.
This is in no way related to the “Heartbleed” bug, by which CheddarGetter remains wholly unaffected, and is separate from the cutover to our new hosting platform, still scheduled for Monday morning. In fact, this is exactly the type of issue that our new high-availability system will help mitigate. After Monday, we will be able to rotate database servers in and out without noticeable downtime.