JanWiersma.com

Goodbye SDL


After three very dynamic years, I'm leaving SDL today. It has been a great journey and I enjoyed every minute of it. Anyone who has followed SDL over the last 9 months has seen a lot of changes announced: the divestment of three business units, a new CEO, a new CTO,…

While I personally think these changes are good for the company and will bring focus and stability going forward, I also decided I wasn't going to be part of that future anymore.

With this in mind, I shifted my focus in the last few months to helping find a good home for the divested business units. It gave me the option to step away from my day-to-day responsibilities gradually, without disrupting them too much.

During the hand-over period you automatically get confronted with what you are going to leave behind. <cue music> Don't Know What You Got (Till It's Gone) </cue music> The saddest things to leave behind are actually my teams & peers.

Continue reading


Applying firefighter tactics to (IT) leadership

This week I will be celebrating my 15th year of active volunteer firefighter duty. As you naturally tend to do when celebrating milestones like these, I found myself reflecting on the past years and what I learned.
One thing that specifically stood out were the moments in my IT leadership career where I applied firefighting techniques and skills I picked up over the years.
Most of them revolve around problem solving and how to get the most out of teams. While there is an obvious link between firefighting and solving issues in a high-pressure or crisis situation, I learned that the same tactics also apply to any other challenge I was confronted with.
When firefighters arrive at the scene of a fire, they always follow the same protocol:
- Assess the situation
- Locate the fire
- Identify & control the flow path
- Extinguish the fire
- Reset & evaluate
In business, and especially at higher leadership levels, some problems may seem very daunting, creating anxiety and leaving you with the feeling of being overwhelmed. Firefighters are used to stepping into highly unknown situations with confidence, and a protocol like the one above helps you gain control of the situation, step by step.

Continue reading


The future of datacenter build & co-lo (or CIOs are getting out of the datacenter business – Part 2)

Last year my friend Tim Crawford wrote an excellent article on why CIOs should get out of the datacenter business. Tim focused on how today's big corporates are moving away from building, owning or renting datacenter facilities in favour of consuming IT at higher levels of the stack.


As he focused on the migration of today's leading big companies, it leaves the question: what about the future Fortune 500 companies?

Continue reading


Code of Conduct

As a 'code of conduct' seems to be needed nowadays for interactions between people, especially at tech conferences, I'm releasing my own 'code of conduct'.

The following applies when you interact with me, listen to my talks or see any of my rants on social media:

1. Respect & integrity. I will treat you with respect by default; please extend the same courtesy to me. I have strong views on certain issues that may be completely the opposite of your view.

2. Acknowledge my culture. I'm Dutch. I'm direct, blunt and we invented 'going Dutch'. I acknowledge the fact that you may have another cultural background and therefore a different view of the world around us.

3. If you don't like what I'm saying or how I'm acting, let me know. Or walk away. If you don't confront me and just complain behind my back, you take away my ability to learn. There are some good guidelines for delivering feedback to someone; you may want to read them someday if you want your feedback to resonate.

4. Confidentiality: by default I will keep any information you provide to me confidential. You can share anything I tell you with anyone, unless I specifically tell you that the information I'm sharing is confidential.

5. I'm even more blunt on social media and when delivering keynotes. Just unfollow me if you can't handle that. See the disclaimer: https://www.janwiersma.com/?page_id=160

 


Our Trust issue with Cloud Computing

Recently SDL (my employer) did a survey on customer 'trust' for the marketer.

Being in the IT space, I have been dealing a lot with 'trust' over the last few years. Being responsible for the Cloud services delivery of my company's SaaS & hosted products, we deal with clients evaluating and buying our services. My teams also evaluate & consume IaaS/PaaS/SaaS services in the market, on which we build our own services.

The 'trust' issue in consuming Cloud services is an interesting one. IaaS platforms like Amazon abstract complexity away from their users. They are easy to consume. The same goes for SaaS services like Box.com or Gmail; the user has no clue what happens behind the scenes. Most business users don't care about the abstraction of that complexity. It just works…

It's the IT people that seem to have the biggest issue with gracefully losing control and surrendering data, applications, etc., to someone else. Control is an emotional issue we are often unprepared to deal with. It leaves us with the feeling that 'they can't take care of it as well as I can…' IT people especially know how complex IT can be, and how hard it can be to deliver the guarantees the business is looking for. For many years we have tried to manage the rising complexity of IT within the business with tools and processes, never completely able to satisfy the business, as we were either too expensive or not hitting our SLAs.

Continue reading


German companies ask for Internet border patrol.

In the last year, multiple companies have started serving German customers out of Germany-based datacenter locations.

There seems to be a particularly strong sentiment around security & privacy with German companies after the Edward Snowden leaks. The kneejerk reaction is to mandate that servers should sit within German borders, as if that would take any security & privacy concern away. Cloud providers are now starting to follow this customer demand.

Interestingly, this reaction is more sentiment-driven, as there is no legal ground to request this. Especially as more and more German companies are putting this in place as a default policy, regardless of the type of data (privacy sensitive or not…).

Looking at the Federal Data Protection Act (Bundesdatenschutzgesetz, "BDSG"), it states that certain transfers of data (like personal data) outside of the EU need to be reported and approved, and that data controllers must take appropriate technical and organizational measures against unauthorized or unlawful processing and against accidental loss or destruction of, or damage to, personal data. Nothing says servers need to be in Germany.

Looking at other EU countries, Germany seems to be the only country where organizations show this behavior. The only one next in line could be Switzerland.

Continue reading


Google's BMS got hacked. Is your datacenter BMS next?

<English cross post with my DCP blog>

A recent US Congressional survey stated that power companies are targeted by cyber attacks 10,000 times per month.

After the 2010 discovery of the Stuxnet virus the North American Electric Reliability Corporation (NERC) established both mandatory standards and voluntary measures to protect against such cyber attacks, but most utility providers haven’t implemented NERC’s voluntary recommendations.

Stuxnet hit the (IT) newspaper front pages around September 2010, when Symantec announced the discovery. It represented one of the most advanced and sophisticated viruses ever found: one that targeted specific PLC devices in nuclear facilities in Iran:

Stuxnet is a threat that was primarily written to target an industrial control system or set of similar systems. Industrial control systems are used in gas pipelines and power plants. Its final goal is to reprogram industrial control systems (ICS) by modifying code on programmable logic controllers (PLCs) to make them work in a manner the attacker intended and to hide those changes from the operator of the equipment.

DatacenterKnowledge picked up on it in 2011, asking 'Is your datacenter ready for Stuxnet?'

After this article, the datacenter industry didn't seem to worry much about the subject. Most of us deemed the chance of being hacked by a highly sophisticated virus, attacking our specific PLCs or facility controls, very low.

Recently, security company Cylance published the results of a successful hack attempt on a BMS system located at a Google office building. This successful attempt shows a far greater threat to our datacenter control systems.

 

The road towards TCP/IP

In the last few years, the world of BMS & SCADA systems has changed radically. The old (legacy) systems consisted of vendor-specific protocols, specific hardware and separate networks. Modern-day SCADA networks consist of normal PCs and servers that communicate through IT standard protocols like IP, and share networks with normal IT services.

IT standards have also invaded facility equipment: the modern-day UPS or CRAC unit comes with an onboard web server by default and is able to send warnings using another IT standard: SNMP.
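
To illustrate how 'IT standard' this has become: the sketch below polls such a unit over SNMP from Python. It is a minimal sketch, assuming the device speaks SNMPv2c with the default 'public' community and exposes the standard UPS-MIB (RFC 1628); the 192.0.2.x address is a documentation placeholder and the third-party pysnmp library (classic 4.x synchronous API) is my choice, not something any particular vendor ships.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Scalars from the standard UPS-MIB (RFC 1628); real gear may only expose a vendor MIB.
OIDS = [
    "1.3.6.1.2.1.33.1.2.1.0",  # upsBatteryStatus
    "1.3.6.1.2.1.33.1.2.3.0",  # upsEstimatedMinutesRemaining
]

def poll_ups(host="192.0.2.10", community="public"):
    """Read a couple of UPS health values over SNMPv2c (illustrative sketch)."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),          # SNMPv2c
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            *[ObjectType(ObjectIdentity(oid)) for oid in OIDS],
        )
    )
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return {vb[0].prettyPrint(): vb[1].prettyPrint() for vb in var_binds}

if __name__ == "__main__":
    print(poll_ups())
```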

The move towards IT standards and TCP/IP networks has provided us with many advantages:

  • Convenience: you are now able to manage your facility systems with your iPad or just a web browser. You can even enable remote access over the Internet for your maintenance provider. Just connect the system to your Internet service provider, network or Wi-Fi and you are all set. You don't even need to get the IT guys involved…
  • Optimize: you are now able to do cross-system data collection so you can monitor and optimize your systems. Preferably in an integrated way so you can have a birds-eye view of the status of your complete datacenter and automate the interaction between systems.

Many of us end users have pushed the facility equipment vendors towards this IT-enabled world, and this has blurred the boundary between IT networks and BMS/SCADA networks.

In the past, the complexity of protocols like BACnet and Modbus, which tie everything together, scared most hackers away. We all relied on 'security through obscurity', but modern SCADA networks no longer provide this (false) sense of security.
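
That obscurity is thinner than it looks. A Modbus/TCP read request is just a handful of bytes on port 502, and the sketch below builds one with nothing but the Python standard library. The target address, register range and unit ID are made-up placeholders; real devices differ in their register maps, and the error handling is deliberately minimal.

```python
import socket
import struct

def read_holding_registers(host, start=0, count=4, unit_id=1, port=502):
    """Send a raw Modbus/TCP 'read holding registers' (function 0x03) request.

    No vendor library and no protocol secrets: an MBAP header plus a 5-byte PDU.
    Sketch only; it assumes the reply arrives in one piece.
    """
    # MBAP header: transaction id, protocol id (0), length, unit id
    # PDU: function code 0x03, start address, register count
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit_id, 0x03, start, count)

    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(request)
        header = sock.recv(7)                      # response MBAP header
        _, _, length, _ = struct.unpack(">HHHB", header)
        body = sock.recv(length - 1)               # function code, byte count, registers

    if body[0] != 0x03:
        raise RuntimeError(f"Modbus exception code: {body[1]:#x}")
    byte_count = body[1]
    return list(struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count]))

# Example (placeholder address): print(read_holding_registers("192.0.2.50"))
```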

Moving towards modern SCADA.

The transition towards modern SCADA networks and systems is approached in many different ways. Some vendors implemented embedded Linux systems on facility equipment. Others consolidated and connected legacy systems & networks through standard Windows or Linux servers acting as gateways.

This transition has not been easy for most BMS and SCADA vendors. A quick round among my datacenter peers provides the following stories:

  • BMS vendors installing old OS versions (Windows/Linux) because the BMS application doesn't support the updated ones.
  • BMS vendors advising against OS updates (security, bug fixes or end-of-support) because they will break the BMS application.
  • BMS vendors unable to provide details on which ports to enable on firewalls: 'just open all ports and it will work'.
  • Facility equipment vendors without software update policies.
  • Facility equipment vendors without bug fix deployment mechanisms; having to update dozens of facility systems manually.

And these stories all apply to modern-day, currently used, BMS & SCADA systems.

Vulnerability patching.

Older versions of the SNMP protocol have had several known vulnerabilities that affected almost every platform that supported them, including Windows/Linux/Unix/VMS.

It's not uncommon to find these old SNMP implementations still operational in facility equipment. With the lack of software update policies that also cover the underlying (embedded) OS, new security vulnerabilities will be neglected by most vendors as well.

The OS implementations from most BMS vendors also aren't hardened against cyber attacks: default ports are left open and default accounts are still enabled.

This is all great news for most hackers. It's much easier for them to attack a standard OS like a Windows or Linux server: there are lots of tools available to make the hacker's life easier, and he doesn't have to learn complex protocols like Modbus or BACnet. This is by far the best attack surface in modern-day facility system environments.
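
A quick self-assessment doesn't require any hacker tooling either. The sketch below simply checks from the IT side which well-known management ports answer on a facility host; the address is a placeholder and the port list is only illustrative.

```python
import socket

# Ports commonly found open on IT-enabled facility gear (illustrative list).
PORTS = {
    22: "SSH", 23: "Telnet", 80: "HTTP", 161: "SNMP (UDP!)",
    443: "HTTPS", 502: "Modbus/TCP", 47808: "BACnet/IP (UDP!)",
}

def exposed_tcp_ports(host, ports=PORTS, timeout=1.0):
    """Return the TCP ports on `host` that accept a connection.

    Note: SNMP and BACnet/IP normally run over UDP, so this simple TCP
    connect test will not see them; they are listed only as a reminder.
    """
    open_ports = []
    for port, name in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, name))
        except OSError:
            pass
    return open_ports

# Example (placeholder address): print(exposed_tcp_ports("192.0.2.60"))
```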

The introduction of DCIM software will move us even further from legacy SCADA towards an integrated & IT-enabled datacenter facility world. You will definitely want to have your 'birds-eye DCIM view' of your datacenter anywhere you go, so it will need to be accessible and connected. All DCIM solutions run on mainstream OSs, and most of them come with IT industry standard databases. Those configurations provide another excellent attack surface if not managed properly.

ISO 27001

Some might say: ‘I’m fully covered because I got an ISO 27001 certificate’.

The scope of an ISO 27001 audit and certificate is set by the organization pursuing the certification. For most datacenter facilities the scope is limited to physical security (like access control, CCTV) and its processes and procedures. IT systems and IT security measures are excluded because those are part of the IT domain, not facilities. So don't assume that BMS and SCADA systems are included in most ISO 27001 certified datacenter installations.

Natural evolution

Most of the security and management issues are a normal part of the transition into a larger-scale, connected IT world for facility systems.

The IT industry saw the same lack of awareness around security, patching, management and hardening of systems 10-15 years ago. The move from a central mainframe world to decentralized servers and networks, combined with the introduction of the Internet, forced IT administrators to focus on managing the security of their systems.

In the past I have heard facility departments complain that the IT guys should involve them more, because IT didn't understand power and cooling. With the introduction of a more software-enabled datacenter, the facility guys now need to do the same and get IT more involved; they have dealt with all of this before…

Examples of what to do:

  • Separate your systems and divide the network. Your facility system should not share its network with other (office) IT services. The separate networks can be connected using firewalls or other gateways to enable information exchange.
  • Assess your real needs: not everything needs to be connected to the Internet. If facility systems can’t be hardened by the vendor or your own IT department, then don’t connect them to the Internet. Use firewalls and Intrusion Detection Systems (IDS) to secure your system if you do connect them to the Internet.
  • Involve your IT security staff. Have facilities and IT work together on implementing and maintaining your BMS/SCADA/DCIM systems.
  • Create awareness by urging your facility equipment vendor or DCIM vendor to provide a software update & security policy.
  • Include the facility-systems in the ISO 27001 scope for policies and certification.
  • Make arrangements with your BMS and/or DCIM vendor about the management of the underlying OS. Preferably this is handled by your internal IT guys, who should already know everything about patching and hardening IT systems. If the vendor provides you with an appliance, then the vendor needs to manage the patching and hardening of that system.

If you would like to talk about the future of securing datacenter BMS/SCADA/DCIM systems, then join me at Observe Hack Make (OHM) 2013. OHM is a five-day outdoor international camping festival for hackers and makers, and those with an inquisitive mind. It starts July 31st 2013.

Note:
There are really good whitepapers on IDS systems (and firewalls) for securing the Modbus and BACnet protocols, if you do need to connect those networks to the Internet. Example: Snort IDS for SCADA (pdf), or books about SCADA & security on Amazon.

Source:
A large part of this blog is based on a Dutch article on BMS/SCADA security from January 2012 by Jan Wiersma & Jeroen Aijtink (CISSP). The Dutch IT Security Association (PViB) nominated this article for 'best security article of 2012'.


Where is the open datacenter facility API?

<English cross post with my DCP blog>

For some time the DatacenterPulse top 10 has featured an item called 'Converged Infrastructure Intelligence'. The 2012 presentation mentioned:

Treat the DC infrastructure as an IT system;
– Converge in the infrastructure instrumentation and control systems
– Connect it into the IT systems for ultimate control
– Standardize connections and protocols to connect components

With datacenter infrastructure becoming a more complex system, and with the need for better efficiency across the whole datacenter stack, the need arises to integrate the layers of the stack and make them 'talk' to each other.

This is shown in the DCP Stack framework with the need for 'integrated control systems', going up from the (facility) real-estate layer to the (IT) platform layer.

So if we have the ‘integrated control systems’, what would we be able to do?

We could:

  • Influence behavior (you can't control what you don't know): application developers can be given insight into their power usage when they write code, for example. This is one of the steps needed towards more energy-efficient application programming. It will also provide more insight into the complete energy flow, with more detailed measurements.
  • Design for lower-TIER datacenters: when failure is imminent, IT systems can be triggered to move workloads to other datacenter locations, driven by signals from the facility equipment to the IT systems.
  • Design close-control cooling systems that trigger on real CPU and memory temperature instead of room-level temperature sensors. This could eliminate hot spots and focus the cooling energy consumption on the spots where it is really needed. It could even make the cooling system aware of an oncoming throttle-up of IT systems.
  • Optimize datacenters for the smart grid. The rise of sustainable power sources like wind and solar energy increases the need for flexibility in energy consumption. Some may think this only matters when you introduce onsite sustainable power generation, but the energy market will also be affected by the general availability of sustainable power sources. In the end, the ability to be flexible will lead to lower energy prices. Real supply and demand management in the datacenter requires integrated information and control across the facility and IT layers of the stack.

The gap between IT and facility does not only exist between IT and facility staff, but also between their information systems. Closing the gap between people and systems will make the datacenter more efficient and more reliable, and it opens up a whole new world of possibilities.

This all leads to something that has been on my wish list for a long, long time: the datacenter facility API (Application Programming Interface).

I'm aware that we have BMS systems supporting open protocols like BACnet, LonWorks and Modbus, and that is great. But they are not 'IT ready'. I know some BMS systems support integration using XML and SOAP, but that is not based on a generic 'open standard framework' for datacenter facilities.

So what does this API need to be?

First, it needs to be an 'open standard' framework: publicly available, with no rights restrictions on the usage of the API framework.

This will avoid vendor lock-in. History has shown us, especially in the area of SCADA and BMS systems, that our vendors come up with many great new proprietary technologies. While I understand that the development of new technology takes time and a great deal of money, locking me into your specific system is not acceptable anymore.

A vendor proprietary system in the co-lo and wholesale facility will lead to the lock-in of co-lo customers. This is great for the co-lo datacenter owner, but not for its customer. Datacenter owners, operators and users need to be able to move between facilities and systems.

Every vendor that uses the API framework needs to use the same routines, data structures and object classes. Standardized. And yes, I used the word 'standardized'. So it's a framework we all need to agree upon.

These two sentences are the big difference between what is already available and what we actually need. It should not matter whether you place your IT systems in your own datacenter or with co-lo provider X, Y or Z: the API will provide the same information structure and layout anywhere…

(While it would be good to have the BMS market disrupted by open source development, having an open standard does not mean all the surrounding software needs to be open source. Open standard does not equal open source and vice versa.)

It needs to be IT-ready. An IT application developer needs to be able to talk to the API just like he would talk to any other IT application API, so no strange facility protocols. Talk IP. Talk SOAP, or better: REST. Talk something that is easy to understand and implement for the modern-day application developer.

All this openness and ease of use may be scary for vendors and even end users, because many SCADA and BMS systems are famous for relying on 'security through obscurity'. All the facility-specific protocols are notoriously hard to understand and program against. So if you don't want to lose this false sense of security as a vendor: give us a 'read-only' API. I would be very happy with just that first step…

So what information should this API be able to feed?

Most information would be nice to have in near real time:

  • Temperature at rack level
  • Temperature outside of the building
  • kWh at rack level, though other energy-related data would be nice too
  • Warnings / alarms at rack and facility level
  • kWh price (this can be pulled from the energy market, but that doesn't include the full datacenter kWh price, e.g. with a PUE markup)

(all if and where applicable and available)
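
To make this concrete: a hypothetical read-only REST resource for that rack-level feed could look something like the sketch below. Everything in it is illustrative; the endpoint path, the field names and the use of Flask are my assumptions, not an existing standard.

```python
from datetime import datetime, timezone
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/racks/<rack_id>/telemetry", methods=["GET"])
def rack_telemetry(rack_id):
    """Hypothetical read-only feed; field names are illustrative only."""
    # In a real implementation these values would come from the BMS/DCIM backend.
    return jsonify({
        "rack_id": rack_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inlet_temperature_c": 23.4,       # temperature at rack level
        "outside_temperature_c": 11.8,     # temperature outside the building
        "energy_kwh_last_hour": 4.2,       # rack-level energy
        "alarms": [],                      # active warnings/alarms
        "kwh_price_eur": 0.19,             # market price incl. a PUE markup
    })

if __name__ == "__main__":
    app.run(port=8080)
```

Whether the real thing ends up looking anything like this is exactly the standardization discussion above; the point is that any IT developer can consume it with a plain HTTP GET.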

The information owner would need features like access control for rack-level information exchange, and the ability to tweak the real-time features; we don't want to create unmanageable information streams in terms of security, volume and frequency.

So what do you think the API should look like? What information exchange should it provide? And more importantly: who should lead the effort to create the framework? Or… do you believe the physical datacenter API framework is already here?

More:

Good API design by Google: http://www.youtube.com/watch?v=heh4OeB9A-c&feature=gv


Labels, metrics and pigeonholes for 'green IT'

OK, let's be honest… most people love sticking labels on things and putting them into boxes. It keeps things organized and makes sure we can compare things with each other. The same goes for the IT sector and the datacenter industry.

In the past few months I was confronted several times with 'abuse' of metrics <pue blog> or with the creation of new metrics and labels. Sometimes it was improper use of an existing method, but increasingly it was an attempt to introduce new labels focused mainly on 'green' and the entire IT service delivery chain.

Within DatacenterPulse (DCP) we have captured the various layers and components of the datacenter in a discussion diagram called the DatacenterPulse Stack. Besides showing the structure and interdependencies of the layers, it also addresses metrics and labels.

That is what the part "Input metrics –> Layer metrics per business sector" refers to: the different metrics and labels that are available at the various layers.

Examples of these are:

  • PUE, at the Physical & Real Estate layer, which captures energy efficiency in the facility part of the datacenter (see the formulas below).
  • WUE, at the Real Estate layer, which captures efficient water usage.
  • SPECpower, at the Platform layer, which captures energy efficiency for servers.
  • Etc…
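
For reference, these are the commonly used definitions of the first two as I read The Green Grid whitepapers (so treat the exact wording as my paraphrase):

```latex
\mathrm{PUE} = \frac{\text{Total facility energy}}{\text{IT equipment energy}}
\qquad
\mathrm{WUE} = \frac{\text{Annual site water usage (L)}}{\text{IT equipment energy (kWh)}}
```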

Various organizations have also been trying for some time to introduce a 'useful work' metric. It should show the overhead in energy usage versus the amount of energy consumed for 'the work that really matters'.

This is hard to solve, however, since 'useful work' can mean something completely different for one company than for another. The output of IT cannot always be weighed in the same way.

It also brings the problem of capturing all the layers of the Stack in one label or calculation/metric. Given the complexity of all these layers and their variables (qualitative / quantitative), the question is whether that is feasible at all.

A recent discussion I attended was about a 'green label for cloud computing'. It would make it easy to compare suppliers, and companies could show that they are getting 'greener' by moving to cloud computing. It would be an F to A++ label, like the one currently used for washing machines and refrigerators.

I do understand the desire for this, but let's pull this wish apart:

  1. Let's start with the definition: what is 'green', exactly? Often what is really meant is energy efficient. But for 'green' you have to take all elements of consumption into account. That includes water usage and other resources. Emissions should actually be included as well, from total CO2 emissions to server hardware waste.
  2. How do I know whether I am getting 'greener'? This means that, within the whole context of the question, I need to know where I stand today and I need to be able to compare this with someone else's IT ecosystem.
  3. And what is the definition of cloud computing? Entire books and blogs have been written about this, and the demarcation is still very flexible. For the sake of argument, let's say it is Software as a Service (SaaS). That means we are trying to summarize the whole DCP Stack in a single label on the topic of 'green'. We would have to weigh every kind of cooling, power distribution, server type, operating system, application framework and language, and capture them all in one label…

A label on these two big hypes (green & cloud), made up of such complex components… is begging to be abused by its own industry. Just like we did with PUE within the datacenter industry.

This brings us to the point of creation, acceptance and standardization of metrics and labels. My Green Grid colleague Andre Rouyer always uses a nice picture for this:

[figure: path from industry alliances via standardization bodies to government regulation]

It starts with industry alliances such as The Green Grid, ICT~Office and DatacenterPulse. These are often the breeding ground for new labels and metrics. Once there is enough market acceptance, they are elaborated by standardization organizations such as NEN, CENELEC and ISO. This elaboration leads to a measurable and auditable norm for the label or metric, with clear definitions and assessment criteria. After that, these norms are adopted into (local) regulation by the various governments and can be enforced. This whole process takes years.

With PUE we have seen how this process can go wrong: conceived by The Green Grid and worked out over 2 to 3 years. It currently sits at ISO level, where it will be developed into an international standard over the coming 2 to 3 years. In the meantime, however, governments have already picked up PUE to enforce against, skipping an important step.

In addition, the use of metrics & labels for government regulation should not lead to blocking innovation, as happened with the adoption of ASHRAE 90.1, where the published label excluded every other form of innovative cooling. If such a label then becomes a legal requirement through government adoption, it effectively defeats its own purpose.

So you have to think carefully about the consequences of introducing metrics and labels:

  1. Is it feasible at all; am I not trying to capture a system that is too complex?
  2. Are the definitions of the components I am trying to capture clear enough?
  3. What manipulation does the label allow? (gaming the system)
  4. If it gets adopted by government, what effects will this have on your sector/industry?

The process after that is, if anything, even more important: let the market try out and test the label/metric –> collect and process lots of feedback –> sharpen and refine it further. If it turns out it wasn't such a good idea after all, don't be afraid to let go of it. Only once the label has properly matured and been worked out is it ready for the step towards standardization.

Asking for a label is easy, but as the Americans say: 'Be careful what you wish for'.


Will someone please turn off the lights for EUE??!?

In 2007, The Green Grid launched the idea of PUE. It wasn't perfect and it wasn't fully worked out. In 2009, the DOE came up with EUE. By now there are several descriptions and variants of it:

1. “EUE is calculated by dividing the total source energy by total UPS energy. Some factors will not be a part of the calculation, such as heating and cooling degree days, data center type (traditional, hosting, Internet, etc) and UPS utilization.”

2. “EUE (energy usage efficiency) is very similar to PUE (power usage efficiency) with two notable exceptions. PUE covers only electrical usage where EUE covers all energy coming in to the data center. PUE also only covers electrical power from the entrance to the facility, EUE covers the energy from the source.”

3. “EUE is the same as PUE but calculated in one full year”

4. “EUE is the same as PUE but calculated in kWh”

5. "EUE indicates an energy performance measure: the ratio between the energy usage of supporting services (such as cooling) and the IT-related energy consumption."

This last one came from the Netherlands, where people felt that PUE's teething problems also had to be addressed. ICT~Office, together with TNO, came up with the idea for EUE in 2009. It was often argued that PUE does not have to be calculated over one full year, or that it doesn't take into account energy (or heat) delivered back from the datacenter. The argument that the Netherlands needed its own specific metric for datacenter energy efficiency was also raised frequently.

Now, I'm quite nationalistic: I stand in the pub dressed completely in orange during the European Championship, I sympathize with our royal family and I eat herring… but I have never understood that EUE. If it really had to be something Dutch, then at least make it a proper translation – Energie Gebruiks Efficiëntie (EGE).

Recently, EUE has been popping up again all over the Netherlands. I hear ICT~Office talking about it, and AgentschapNL.

So during an EMEA meeting with international colleagues from The Green Grid, I got some pitying looks when the term came up. Especially since the EPA & DOE have meanwhile dropped EUE and support PUE. The European Commission has also adopted PUE for the Datacenter Code of Conduct.

Many of PUE's shortcomings have meanwhile been addressed as well. Whitepaper #22 covers several of them:

  • Since power distribution losses and cooling equipment power consumption will always take positive values, DCiE can never be greater than 100%, nor can PUE be less than 1.0
  • The PUE and DCiE metrics can be computed using either power (kilowatt) or energy (kilowatt-hour) measurements.
  • The Green Grid discourages comparisons of different datacenters based on reported PUE/DCiE results.  Location, architecture, vintage, size and many other factors all play a role in a data center’s final results
  • The Green Grid’s PUE /DCiE Detailed Analysis provides instructions for several options, differentiated by expected accuracy, for collecting power consumption data and calculating PUE and DCiE values.
  • Without some indication as to the time over which particular results were calculated or the frequency with which individual data points were collected, comparison of results are difficult.
  • pPUE = Total Energy within a boundary divided by the IT Equipment Energy within that boundary. (See this presentation and whitepaper #42)

So: PUE cannot be below 1, PUE may be calculated in kW or kWh, and PUE is not intended to compare datacenters with each other. In addition, context needs to be provided when publishing a PUE. This context consists of the location of the measurement and the measurement period:

The location of measurement:

[figure: PUE measurement locations]

The moment of measurement:

The subscript is created by appending a character denoting the averaging period and a character denoting the data collection frequency onto the reported metric.
• Averaging period:
o 'Y' denotes a measurement averaged over a year
o 'M' denotes a measurement averaged over a month
o 'W' denotes a measurement averaged over a week
o 'D' denotes a measurement averaged over a day

• Frequency:
o 'M' denotes a measurement taken monthly
o 'W' denotes a measurement taken weekly
o 'D' denotes a measurement taken daily
o 'C' denotes a measurement taken continuously (at least hourly)
o '–' denotes a single measurement (averaging period not used)

This gives the following examples:

[figure: examples of PUE values reported with these subscripts]

The above gives the reader of a PUE number better context around the published figures. The discussion about a PUE 'not covering one full year' is thereby covered as well. For the measurement method and technical details, a book called "Real-Time Energy Consumption Measurements in Data Centers" was written together with ASHRAE.

Where suppliers of datacenter modules, cooling or power components want to present their efficiency, they can use the Partial PUE (pPUE) for that part. This makes it immediately clear that it concerns part of a datacenter and not the whole installation.

A tool for PUE calculation has also been available on the Green Grid website for some time.

If energy is delivered back as a (by)product of the datacenter, this can be made visible through ERE: A Metric for Measuring the Benefit of Reuse Energy from a Data Center. With that, PUE still never drops below 1.
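
Written out, as I read the Green Grid whitepapers (my paraphrase, so check the originals for the formal definitions):

```latex
\mathrm{pPUE} = \frac{\text{Total energy within a boundary}}{\text{IT equipment energy within that boundary}}
\qquad
\mathrm{ERE} = \frac{\text{Total facility energy} - \text{Reused energy}}{\text{IT equipment energy}}
```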

Goodbye EUE

Now that PUE has been chosen internationally and has been worked out this far, it would be nice if the Dutch organizations got on board. Some established standards such as BREEAM-NL, and standards under development such as NPR 5313, have also adopted PUE. This prevents a lot of noise in the market, for datacenter customers, datacenter owners and the government alike. Confusion over an unclear EUE should be avoided, so that we don't have to look at each other blankly over RFPs or government rules. As Neelie Kroes says:

Once we have a transparent way to measure, we can start in earnest to audit, report, and exchange best practice in the ICT sector.

Let's keep that transparency unambiguous and international… and say goodbye to EUE. *click*
