Power Issue at Colo4 Data Center in Dallas, Texas

  • August 11, 2011 by Tech #1
Dear Valued Clients,

First of all, we sincerely apologize for the issue that impacted your website from 12AM to 7AM Singapore Time today (11 Aug 2011).
The issue was caused by a power failure at our data center in Dallas, Texas.

Because the problem was at the data center itself, there was nothing we could do directly; all we could do was follow the thread at http://www.webhostingtalk.com/showthread.php?t=1072692 and ask for updates on their progress.

Even though Colo4 has redundancies and a failover plan, this unfortunate event was out of their control.

We thank our valued clients for bearing with us during the downtime.

Your website and email should be working fine as of now.
If you are still unable to connect or are still having issues, please open a support ticket at support@bennykusman.com or email sosys_86@yahoo.com.

We will keep you updated as we receive further news from our data center.

Here are the updates from the Colo4 data center (taken from colo4.com):

Updates

Current Update:

To close the loop for tonight, the migration from generator to utility power for the UPSs is complete. All equipment and connectivity are functioning properly.

If you experience any issues, please open a help ticket. We have extra staff on-site this evening and are walking the data center to ensure all equipment is online. Thank you.



Previous:

20:39

As of 6:30PM CDT all PDUs are online supplying power to our customers. The UPSs are currently in bypass mode while the batteries charge. During this time we are operating on generator power, as is the best practice. Once the UPS batteries are fully charged, we will migrate the PDUs to the UPSs and then put power back on the utility. This process will be short in duration and begin around 9:00PM CDT. We do not expect any issues or impact to customers tonight while these transitions happen. We also have extra staff that will remain on-site throughout the evening.

We have technicians working with our customers to help bring any equipment online that did not come back on with the power restore. There are also a few customers who tripped breakers while the equipment was powering up, and we are working to reset those devices. We appreciate your patience as we continue to bring this issue to closure. If you encounter any problems with your equipment or access, please open a help ticket so that we may respond in the fastest manner.

We will update customers with an official reason for outage (RFO) once we assess the reports that were generated today.


18:09

The power has been restored fully and all customers should be up. If you are a customer and have not come online yet, please open a help ticket for us to handle directly.

In addition, we have deployed a team member to walk the data center and look for any cabinets that have not powered up. For any that we find during this check, we will reach out to you and coordinate getting your equipment live.

We will provide customers with a more detailed update upon completion of our after-action review for this incident. Our first goal at this time is to ensure everyone is up safely and all connectivity is restored.

Thank you again for your patience.


17:38

Power has been restored to the distribution gear from the temporary ATS. HVAC units are now all online, and we will soon begin restoring power to the UPSs, then the PDUs, and then customer equipment.

We will update you as the other areas come online. Thank you again for your patience.


16:46

Our team and electricians are working diligently to get the temporary ATS installed, wired and tested to allow power to be restored. As the ATS involves high-voltage power, we are following the necessary steps to ensure the safety of our personnel and your equipment housed in our facility.

Based on current progress, the electricians expect to start powering on the equipment between 6:15 and 7:00pm Central. This is our best estimate at this time. We have tested thoroughly and don’t anticipate any issues in powering up, but there is always the potential for unforeseen issues that could affect the ETA, so we will keep you posted as we get progress reports. Our UPS vendor has checked every UPS, our HVAC vendor has checked every unit, and no issues were found. Our electrical contractor has also checked everything.

We realize how challenging and frustrating it has been not to have an ETA for you or your customers, but we wanted to ensure we shared accurate and realistic information. We are working as fast as possible to get our customers back online and to ensure it is done safely and accurately. We will provide an update again within the hour.

While the team is working on the fix, I’ve answered some of the questions or comments that have been raised:

1. ATSs are pieces of equipment and can fail as equipment sometimes does, which is why we do 2N power in the facility in case the worst case scenario happens.

2. There is no problem with the electrical grid in Dallas or the heat in Dallas that caused the issue.

3. Our website and one switch were connected to two PDUs, but ultimately the same service entrance. This was a mistake that has been corrected.

4. Bypassing an ATS is not a simple fix, like putting on jumper cables. It is detailed and hard work. Given the size and power of the ATS, the safety of our people and our contractors must remain the highest priority.

5. Our guys are working hard. While we all prepare for emergencies, it is still quite difficult when one is in effect. We could have done a better job keeping you informed. We know our customers are also stressed.

6. The ATS could be repaired, but we have already made the decision to order a replacement. This is certainly not the cheapest route to take, but it is the best solution for long-term stability.

7. While the solution we have implemented is technically a temporary fix, we are taking great care and wiring as if it were permanent.

8. Colo4 does have A/B power for our routing gear. We identified one switch that was connected to A only, which was a mistake. It was quickly corrected earlier today but did affect service for a few customers.

9. Some customers with A/B power had overloaded their circuits, which is a separate, per-customer issue rather than a network issue. (For example, if we offer A/B 20 amp feeds and a customer draws 12 amps on each, then if one feed trips, the other will not be able to handle the combined load.) A small sketch of this arithmetic follows the list below.
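For readers who want the arithmetic in point 9 spelled out, here is a minimal sketch (in Python; the function and variable names are ours, not Colo4's) of why an overloaded A/B pair cannot survive the loss of one feed: the surviving feed must carry the combined draw, and 12 A + 12 A = 24 A exceeds a 20 A feed rating.

    # Minimal sketch: can one feed of an A/B pair carry the combined load alone?
    # The 20 A rating and 12 A per-feed draws come from Colo4's example above;
    # the function and variable names are hypothetical, for illustration only.
    def survives_single_feed_loss(feed_rating_amps, load_a_amps, load_b_amps):
        combined_load = load_a_amps + load_b_amps
        return combined_load <= feed_rating_amps

    print(survives_single_feed_loss(20, 12, 12))  # False: 24 A > 20 A, the surviving feed trips too
    print(survives_single_feed_loss(20, 8, 8))    # True: 16 A <= 20 A, one feed can carry everything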

As you can imagine, this is the top priority for everyone in our facility. We will provide an update as quickly as possible.



14:53

Thank you for your patience as we work to address the ATS issue with our #2 service entrance. We apologize for the situation and are working as quickly as possible to restore service.

We have determined that the repairs for the ATS will take more time than anticipated, so we are putting into service a backup ATS that we have on-site as part of our emergency recovery plan. We are working with our power team to safely bring the replacement ATS into operation. We will update you as soon as we have an estimated time that the replacement ATS will be online.

Later, once we have repaired the main ATS, we will schedule an update window to transition from the temporary power solution. We will provide advance notice and timelines to minimize any disruption to your business.

Again, we apologize for the loss of connectivity and impact to your business. We are working diligently to get things back online for our customers. Please expect another update within the hour.


13:34

It has been determined that the ATS will need repairs that will take time to perform. Fortunately, Colo4 has another ATS on-site that can serve as a spare. Contractors are working on a solution right now that will allow us to safely bring that spare ATS into service while the repair is happening.

That plan is being developed now and we should have an update soon as to the time frame to restore temporary power. We will need to schedule another window when the temp ATS is brought offline and replaced by the repaired ATS.


13:05

There has been an issue affecting one of our six service entrances. The ATS (Automatic Transfer Switch) itself is malfunctioning and all vendors are on site. Unfortunately, the problem is with service entrance 2 in the 3000 Irving facility, so it is affecting many of the customers who have been here the longest.

The other entrance in 3000 is still up and working fine, as are the four entrances in 3004. Customers utilizing A/B power should have access to their secondary link. It does appear that some customers were affected by a switch failure in 3000. That has been addressed and should be up now.

This is not related to the PDU maintenance we had in 3004 last night. Separate building, service entrance, UPS, PDU, etc.

We will be updating customers as we get information from our vendors so that they know the estimated duration of the outage. Once this has been resolved, we will also distribute a detailed RFO to those affected.

Our electrical contractors, UPS maintenance team and generator contractor are all on-site and working to determine what the best course of action is to get this back up.


12:40

Colo4 is currently experiencing a power issue.