City Network Status

General Outage (Solved)

22:39 Technicians have now resolved all problems related to the outage, and it is confirmed as solved. There may still be some virtual machines that can't be reached. If so:

1: Try to access it via VNC

2: Reboot it

If you still have problems with your VM, please send us an e-mail with more details and we'll take a look at it as soon as possible.


22:17 We have discovered problems with One-Click Installation in the aftermath of the outage. Technicians are working on this issue.


Status changed from RED (Outage) to YELLOW (Reduced)

21:02 All VMs that went down during the outage are now running. Please keep in mind that you may still need to check your server: it might be waiting for a disk consistency check. If you are unable to reach your VM, log in to the City Cloud admin interface and check via VNC whether the server is waiting for input. If you still have issues with your VM, please contact us for manual handling.
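As an illustration of the situation described above: on a Linux guest, a VM interrupted by power loss may sit at the console waiting for someone to confirm a filesystem repair. This is a hedged sketch of what you might type at the VNC console, assuming your root filesystem is on /dev/vda1 (a hypothetical device name; check your own VM's layout first).

```shell
# Hypothetical recovery at the VNC console after an unclean shutdown.
# If the boot process has stopped at a filesystem-check prompt, run the
# check manually, answering "yes" to repair questions, then reboot.
# /dev/vda1 is an assumed root-partition device; substitute your own.
fsck -y /dev/vda1
reboot
```

If the check completes without errors, a plain reboot is usually enough; if it reports unrepairable problems, contact support instead of re-running it repeatedly.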

We have increased our support staffing, and tonight our phone line is open until 00:00.

18:58 To speed up the ongoing process of getting VMs back up, we have temporarily closed access to the City Cloud admin interface.

18:51 Affected VMs are starting in a steady stream, now much faster than before. If nothing unexpected occurs, we hope to have all VMs up and running very soon.

17:56 All machines are queued to be started. All should be up shortly.

17:25 We are now moving the last affected VMs off the faulty hardware. We have lost a number of hosts to the power surge (hardware damage).

16:51 All focus is now directed at the last few VMs that are not yet up. Everything else should be up and running: of the roughly 1,500 servers that were initially down, all are up except the last chassis of blades for VMs. We are attempting both to move the VMs to functioning hosts and to bring the chassis and its network back online. We have replacement hardware available for the chassis, but something between the chassis and our core network is not functioning properly.

16:42 As we had a large power spike, we are also reviewing hardware in general, since some of it has been damaged by the power surge. There seems to be a correlation between the network issues and the chassis.

16:30 Still working on the last 200 or so virtual machines that are not up due to dead blade chassis. Technicians are working on it and determining whether there is a related network issue as well.

16:19 Admin pages for shared hosting and City Cloud are now restored.

16:07 For those who had issues logging in to their email: it should all be solved now. Webmail is also fully functional.

15:52 We are still having issues with the chassis that contain a number of host blades for City Cloud. Therefore the last two hundred or so servers are still not up. We are working hard to determine the cause and will continue to post more information here as we have it.

At this point all shared sites should be up. Mail has been up, but with some issues for some users, as an old database was activated; it is being restored as this is written.

15:17 The shared environment is up. It can take a few more minutes for all sites to be activated; we are testing numerous sites at this time. City Cloud has a couple of blade-server chassis that seem damaged by the power spike. We are investigating whether we need to replace them completely with new ones from our lab.

14:56 Our shared environment is being worked on and should also be up shortly. A few DB machines need tending before it can be active again. Getting closer to having all virtual machines up in City Cloud.

14:48 80% of virtual machines in City Cloud are now restored. The admin interface is still down but will be up shortly.

14:32 Email is restored. Webmail will take a little longer, but you can access your email via a client now.

14:25 Power is back to normal and we are working hard to get all affected machines fully started, including services. This will take some time due to the sheer number of machines. Some services are up.

14:06 Power is back and we are starting to restore functionality to a very large number of physical servers. We believe as many as 1,500 physical servers may be affected by the outage.

13:48 We have been experiencing a general outage due to a power failure during maintenance of one of the UPS units.