Anonymously contributed:
In the past couple of years, one of LLNL's achievements was the build-out of an enterprise-class data center in B112 under the O&B PAD.
Fully redundant power, industry standard everything!
Only one side of it has been fully populated. When the time came to populate the other side, ULM decided to do it on the cheap by ordering that the power redundancy be stripped from the populated side to accommodate the unpopulated side.
This is to allow more tenants to move in at little cost.
The result is that we will have a grade C data center.
Imagine doing an A+ job and being told that a C+ would have been OK!
That is what happens when bureaucrats (instead of managers) make decisions.
Comments
industry standards of major co-location and app hosting providers.
As a center of excellence, LLNL will accept no less.
The only mistake was that the south side was sparsely populated, and that did not fool anyone in ULM!
It took resources to get the data center where it is today, and it will take resources to divert power from the south side to the north side!
Talk to your customer, document and verify their needs, and create solutions that meet those needs.
In private industry, if you execute well, you get to return to step one regularly.
I am glad upper management saw that.
Are you kidding me? Support orgs and management create greater job security and growth for their particular areas through more spending and greater inefficiency.
Those costs are then passed on to the remaining scientific staff as higher tax and overhead rates, for which they must seek additional funding. It's almost like a Ponzi scheme for those still trying to do science at the NNSA labs. Who will be the last fool left within the scientific ranks to help pay for this mess?
To those people, I say: don't forget you are still "overhead" and as such
you better find cheaper ways to do things.
If you were the Terascale facility, that would be another story!
I have seen a commercial data center that had redundant power, redundant cooling, and motor generators that could keep the center running through a full power outage. It was a beautiful setup. It was a phone company data center, and it wasn't there to keep the phones working; it was there to keep the billing system going.
That was probably the dream/goal of what was going to be put into place for B112. They fell far short of that goal.
For those who argue that such redundancy is not needed, remember that your ability to fill out your time card and GET PAID rests on the viability of that data center.
And for those who do have equipment housed in B112, chime in and tell the stories of how easy it is to get in there and work on your equipment, especially during off hours.
The DC has to improve its relations with tenants and make decisions that benefit them.
After all, the tenants are customers, and in the private sector, what happens when tenants are not happy?
First of all, leave the power configuration alone. You don't build data centers without power redundancy; it is one of the most important pieces of the foundation you build upon. You certainly don't change your power strategy after it has already been implemented: it's cost prohibitive, and it's a bad move. Besides, I would imagine too many things depend on that configuration by now to make that sort of change.
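To put a number on why redundancy is foundational, here is a minimal sketch. The 99.9% per-feed availability figure is an illustrative assumption, not a B112 measurement, and it assumes the two power paths fail independently.

    # Expected downtime with and without a redundant power feed.
    # The per-feed availability figure is an illustrative assumption.
    HOURS_PER_YEAR = 8760
    feed_availability = 0.999  # assumed availability of each independent power path

    single_feed_downtime = (1 - feed_availability) * HOURS_PER_YEAR

    # With two independent feeds, power is lost only when both fail at once.
    dual_feed_availability = 1 - (1 - feed_availability) ** 2
    dual_feed_downtime = (1 - dual_feed_availability) * HOURS_PER_YEAR

    print(f"single feed: {single_feed_downtime:.2f} hours of outage per year")
    print(f"dual feeds:  {dual_feed_downtime * 3600:.0f} seconds of outage per year")

Under those assumptions, a single feed costs you roughly 8.8 hours of outage a year; the redundant pair cuts that to about half a minute.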
Virtualize physical hardware wherever possible! Consolidate all systems into the fewest racks and rows you can, with respect to power, size, weight, and cooling capacity. Go dense, go lean, and fill out the south side. Demonstrate a well-thought-out solution with real ROI, and then propose building out the north side of the data center (a back-of-the-envelope consolidation estimate follows the list below).
One of the goals of any modern data center is to achieve the maximum amount of compute in the smallest possible space.
Results:
$ Eliminate costly hardware
$ Preserve datacenter real estate for systems that cannot be virtualized
$ Save on power cost and infrastructure
$ Save on cooling cost and infrastructure
$ Do more with less
$ Keep everyone employed
$ Let's get scientific about it too, so DOE can lead by example and set standards based on LLNL's success
$ ULM will be a victim of its own success, and DOE will praise them for it. :-)
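Here is the back-of-the-envelope consolidation estimate promised above. Every figure in it (server counts, wattages, the rack power budget, the 10:1 consolidation ratio) is an illustrative assumption, not a measurement from B112; only the watts-to-BTU/hr conversion is a physical constant.

    # Rack consolidation estimate: physical servers vs. virtualization hosts.
    # All inventory figures below are illustrative assumptions.
    RACK_UNITS_PER_RACK = 42     # standard full-height rack
    RACK_POWER_BUDGET_W = 5000   # assumed per-rack power budget
    BTU_PER_WATT_HR = 3.412      # 1 W of IT load produces ~3.412 BTU/hr of heat

    def racks_needed(count, units_each, watts_each):
        """Racks required, limited by whichever is tighter: space or power."""
        by_space = -(-count * units_each // RACK_UNITS_PER_RACK)   # ceiling division
        per_rack_by_power = RACK_POWER_BUDGET_W // watts_each
        by_power = -(-count // per_rack_by_power)                  # ceiling division
        return max(by_space, by_power)

    # Assumed current state: 200 lightly loaded 2U servers drawing 400 W each.
    # Assumed target: 10:1 consolidation onto 2U hosts drawing 800 W each.
    scenarios = {"before": (200, 2, 400), "after": (20, 2, 800)}

    for label, (count, units, watts) in scenarios.items():
        racks = racks_needed(count, units, watts)
        power_kw = count * watts / 1000
        cooling_btu = power_kw * 1000 * BTU_PER_WATT_HR
        print(f"{label}: {count} boxes in {racks} racks, "
              f"{power_kw:.0f} kW IT load, {cooling_btu:,.0f} BTU/hr to remove")

Under those assumed numbers, 200 boxes in 17 racks become 20 hosts in 4 racks, and the IT load drops from 80 kW to 16 kW. That is the kind of ROI story that would justify building out the north side.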
Lastly, there are too many people who are afraid to let go of their precious hardware. They cannot handle change and believe they will lose their jobs if they can't hold onto their closet data centers!!! We had better change, work together, and get it right... and real soon!!
Lord help you if you move your equipment into B112. Although you are a tenant, the way they run the place gives the impression you can't be trusted. They had cutbacks and are not staffed on swing or owl shifts. You can't get in during off hours without calling someone in (and incurring extra charges for doing so). I'd be concerned about my precious hardware if it were the prisoner of B112.
Your comment definitely holds merit; however, if we don't get it together and make some changes, then at some point we all lose out. Things cannot continue the way they are for much longer.
I too feel the trust issues, and I struggle for access to the DC. It just needs to CHANGE, as I mentioned. We all need to CHANGE!
From what, to what? Can you be more specific?
Show me the improved model and I'd consider the change.
Don't come with the "If we build it, they will come" mentality.
B112 just announced extended hours of coverage, from 7:00 am to 4:45 pm.
It was also just announced that all of the hardware signed a Memorandum of Understanding (MOU) stating that it will not fail except during the coverage hours. That should allay all fears for 24x7 systems.