JanWiersma.com

Google's BMS got hacked. Is your datacenter BMS next?

<English cross post with my DCP blog>

A recent US Congressional survey stated that power companies are targeted by cyber attacks 10,000 times per month.

After the 2010 discovery of the Stuxnet virus, the North American Electric Reliability Corporation (NERC) established both mandatory standards and voluntary measures to protect against such cyber attacks, but most utility providers haven't implemented NERC's voluntary recommendations.

Stuxnet hit the (IT) newspaper front pages around September 2010, when Symantec announced its discovery. It represented one of the most advanced and sophisticated viruses ever found, one that targeted specific PLC devices in nuclear facilities in Iran:

Stuxnet is a threat that was primarily written to target an industrial control system or set of similar systems. Industrial control systems are used in gas pipelines and power plants. Its final goal is to reprogram industrial control systems (ICS) by modifying code on programmable logic controllers (PLCs) to make them work in a manner the attacker intended and to hide those changes from the operator of the equipment.

DatacenterKnowledge picked up on it in 2011, asking 'Is your datacenter ready for Stuxnet?'

After this article the datacenter industry didn't seem to worry much about the subject. Most of us deemed the chance of being hacked by a highly sophisticated virus, attacking our specific PLCs or facility controls, very low.

Recently, security company Cylance published the results of a successful hack of a BMS located at a Google office building. This successful attempt reveals a far greater threat to our datacenter control systems.

 

The road towards TCP/IP

Over the last few years, the world of BMS and SCADA systems has changed radically. The old (legacy) systems consisted of vendor-specific protocols, dedicated hardware and separate networks. Modern SCADA networks consist of standard PCs and servers that communicate through IT-standard protocols like IP, and share networks with regular IT services.

IT standards have also invaded facility equipment: the modern UPS or CRAC unit comes equipped with an onboard webserver by default, capable of sending warnings using another IT standard: SNMP.
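As an illustration of how ordinary this has become: the sketch below polls a UPS for its battery status over SNMP. It is a minimal example assuming the synchronous pysnmp (v4) high-level API; the hostname and community string are placeholders, and the OID is upsBatteryStatus from the standard UPS-MIB (RFC 1628).

```python
# Minimal sketch: read a UPS battery status over SNMP with pysnmp (v4 hlapi).
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public'),                        # default read community
    UdpTransportTarget(('ups.example.local', 161)), # placeholder hostname
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.33.1.2.1.0')),  # upsBatteryStatus
))

if error_indication or error_status:
    print('SNMP query failed:', error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(name.prettyPrint(), '=', value.prettyPrint())
```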

The move towards IT standards and TCP/IP networks has provided us with many advantages:

  • Convenience: you are now able to manage your facility systems with your iPad or just a web browser. You can even enable remote access over the Internet for your maintenance provider. Just connect the system to your Internet service provider, network or Wi-Fi and you are all set. You don't even need to get the IT guys involved…
  • Optimization: you are now able to do cross-system data collection so you can monitor and optimize your systems. Preferably in an integrated way, so you can have a birds-eye view of the status of your complete datacenter and automate the interaction between systems.

Many of us end-users have pushed the facility equipment vendors towards this IT-enabled world, and this has blurred the boundary between IT networks and BMS/SCADA networks.

In the past, the complexity of protocols like BACnet and Modbus, which tie everything together, scared most hackers away. We all relied on 'security through obscurity', but modern SCADA networks no longer provide this (false) sense of security.

Moving towards modern SCADA.

The transition towards modern SCADA networks and systems is approached in many different ways. Some vendors implemented embedded Linux systems on facility equipment. Others consolidated and connected legacy systems and networks through standard Windows or Linux servers acting as gateways.

This transition has not been easy for most BMS and SCADA vendors. A quick round among my datacenter peers yields the following stories:

  • BMS vendors installing old OS versions (Windows/Linux) because the BMS application doesn't support the updated ones.
  • BMS vendors advising against OS updates (security, bug fix or end-of-support) because they will break the BMS application.
  • BMS vendors unable to provide details on which ports to enable on firewalls: 'just open all ports and it will work'.
  • Facility equipment vendors without software update policies.
  • Facility equipment vendors without bug-fix deployment mechanisms, having to update dozens of facility systems manually.

And these stories all apply to modern, currently used BMS and SCADA systems.

Vulnerability patching.

Older versions of the SNMP protocol have known several vulnerabilities that affected almost every platform shipping an SNMP implementation, including Windows, Linux, Unix and VMS.

It's not uncommon to find these old SNMP implementations still operational in facility equipment. Given the lack of software update policies that also cover the underlying (embedded) OS, new security vulnerabilities will be neglected by most vendors as well.

The OS implementations from most BMS vendors also aren't hardened against cyber attacks: default ports are left open and default accounts are still enabled.

This is all great news for most hackers. It's much easier for them to attack a standard OS like a Windows or Linux server: there are lots of tools available to make a hacker's life easier, and he doesn't have to learn complex protocols like Modbus or BACnet. This is by far the best attack surface in modern facility system environments.
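A quick way to see this attack surface for yourself is a simple port check against your own BMS server. Below is a minimal sketch using only the Python standard library; the hostname and the list of services are assumptions, and note that UDP-based services such as SNMP (161) and BACnet/IP (47808) need a different probe than a TCP connect.

```python
# Minimal sketch: flag TCP ports on a BMS host that are reachable from the
# office network. Hostname and port list are assumptions; UDP services
# (SNMP on 161, BACnet/IP on 47808) are not covered by a TCP connect test.
import socket

BMS_HOST = 'bms.example.local'   # placeholder for your BMS gateway
TCP_PORTS = {21: 'FTP', 23: 'Telnet', 80: 'HTTP', 445: 'SMB',
             502: 'Modbus/TCP', 3389: 'RDP'}

for port, service in sorted(TCP_PORTS.items()):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        if sock.connect_ex((BMS_HOST, port)) == 0:
            print(f'{port:>5} ({service}) is OPEN - verify this is intended')
```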

The introduction of DCIM software will move us even further from legacy SCADA towards an integrated, IT-enabled datacenter facility world. You will definitely want to have your 'birds-eye DCIM view' of your datacenter anywhere you go, so it will need to be accessible and connected. All DCIM solutions run on mainstream OSs, and most of them come with IT-industry-standard databases. Those configurations provide another excellent attack surface if not managed properly.

ISO 27001

Some might say: ‘I’m fully covered because I got an ISO 27001 certificate’.

The scope of an ISO 27001 audit and certificate is set by the organization pursuing the certification. For most datacenter facilities the scope is limited to physical security (like access control and CCTV) and its processes and procedures. IT systems and IT security measures are excluded because those are part of the IT domain, not facilities. So don't assume that BMS and SCADA systems are included in most ISO 27001 certified datacenter installations.

Natural evolution

Most of the security and management issues are a normal part of the transition into a larger-scale, connected IT world for facility systems.

The same lack of awareness of security, patching, management and hardening of systems was seen in the IT industry 10-15 years ago. The move from a central mainframe world to decentralized servers and networks, combined with the introduction of the Internet, forced IT administrators to focus on managing the security of their systems.

In the past I have heard facility departments complain that the IT guys should involve them more because IT didn't understand power and cooling. With the introduction of a more software-enabled datacenter, the facility guys now need to do the same and get IT more involved; IT has dealt with all of this before…

Examples of what to do:

  • Separate your systems and divide the network. Your facility systems should not share their network with other (office) IT services. The separate networks can be connected through firewalls or other gateways to enable information exchange.
  • Assess your real needs: not everything needs to be connected to the Internet. If facility systems can't be hardened by the vendor or your own IT department, don't connect them to the Internet. If you do connect them, use firewalls and Intrusion Detection Systems (IDS) to secure them.
  • Involve your IT security staff. Have facilities and IT work together on implementing and maintaining your BMS/SCADA/DCIM systems.
  • Create awareness by urging your facility equipment vendor or DCIM vendor to provide a software update & security policy.
  • Include the facility systems in the ISO 27001 scope for policies and certification.
  • Make arrangements with your BMS and/or DCIM vendor about management of the underlying OS. Preferably this is handled by your internal IT guys, who should already know everything about patching and hardening IT systems. If the vendor provides you with an appliance, then the vendor needs to manage the patching and hardening of that system.

If you would like to talk about the future of securing datacenter BMS/SCADA/DCIM systems, then join me at Observe Hack Make (OHM) 2013. OHM is a five-day outdoor international camping festival for hackers and makers, and those with an inquisitive mind. It starts July 31st 2013.

Note:
There are really good whitepapers on IDS systems (and firewalls) for securing the Modbus and BACnet protocols, if you do need to connect those networks to the Internet. Examples: Snort IDS for SCADA (pdf), or books about SCADA & security at Amazon.

Source:
A large part of this blog is based on a Dutch article on BMS/SCADA security (January 2012) by Jan Wiersma & Jeroen Aijtink (CISSP). The Dutch IT Security Association (PvIB) nominated this article for 'best security article of 2012'.


DCIM–It's not about the tool; it's about the implementation

 

<English cross post with my DCP blog>


So you just finished your extensive purchase process and now the DCIM DVD is on your desk.

Guess what; the real work just started…

The DCIM solution you bought is just a tool; implementing it will require change in your organization. Some of the changes will be small, for example no longer having to manually put data in an Excel file because the DCIM tool automates it. Some of the changes will be bigger, like defining and implementing new processes and procedures in the organization. A good implementation will impact the way everyone in your datacenter organization works. The positive outcome of that impact is largely determined by the way you handle the implementation phase.

These are some of the most important aspects you should consider during the implementation period:

Implementation team.

The implementation team should consist of at least:

  • A project leader from the DCIM vendor (or partner).
  • An internal project leader.
  • DCIM experts from the DCIM vendor.
  • Internal senior users.

(Some can be combined roles)

Some of the DCIM vendors will offer to guide the implementation process themselves; others use third-party partners.

During your purchase process it's important to have the DCIM vendor explain in detail how they will handle the full implementation process. Who will be responsible for which part? What do they expect from your organization? How much time do they expect from your team? Do they have any reference projects (same size & complexity)?

The DCIM vendor (or its implementation partner) will make implementation decisions during the project that will influence the way you work. These decisions will give you either great ease of working with the tool or day-to-day frustration. It's important that they understand your business and way of working. Having no datacenter experience at all will not benefit your implementation process, so make sure they supply you with people who know datacenter basics and understand your (technical) language.

The internal senior users should be people who understand the basic datacenter components (from a technical perspective) and really know the current internal processes. Ideal candidates are senior technicians, your quality manager, back-office sales people (if you're a co-lo) and site/operations managers.

The internal senior users also play an important role in the internal change process. They should be enthusiastic early adopters who really want to lead the change and believe in the solution.

Training.

After you have kicked off your implementation team, you should schedule training for your senior users and early adopters first. Have them trained by the DCIM vendor. This can be done on dummy (fictive) data. This way your senior users can start thinking about how the DCIM software should be used within your own organization. Include some Q&A and 'play' time at the end of the training. Having a sandbox installation of the DCIM software available for your senior users after the training also helps them get more familiar with the tool and test some of their process ideas.

After you have loaded your actual data and made your process decisions surrounding the DCIM tool, you can start training all your users.

Some of the DCIM vendors will tell you that training is not needed because their tool is so user-friendly. The software may be user-friendly, but your users still need to be trained on the specific usage of the tool within your own organization.

Have the DCIM vendor's trainer team up with your senior users for the actual training. This way you can make the training specific to your implementation and have the senior users at hand to answer any organization-specific questions.

The training of general users is an important part of the change and process implementation in your organization.

Take any feedback during the general training seriously. Provide the users with a sandbox installation of the software so they can try things without breaking your production installation and data. This will give you broad support for the new way of working.

Data import and migration.

Based on the advice in the first article, you will already have identified your current data sources.

During the implementation process the current data will need to be imported into the DCIM data structure, or integrated.

Before you import, you will need to assess your data: are all the Excel files, Visio diagrams and AutoCAD drawings accurate? Garbage in means garbage out in the DCIM tool.

Intelligent import procedures can help clean your current data; connecting data sources and cross-referencing them will show you the mismatches. For example: adding (DCIM) intelligence when importing multiple Excel sheets with fiber cables, and then generating a report of fiber ports that have more than one cable connected to them (which would be impossible in real life).
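A minimal sketch of such a cross-reference check, assuming the pandas library and made-up column names ('cable_id', 'a_port', 'b_port') in the imported sheets:

```python
# Sketch: find fiber ports that appear on more than one cable, which is
# physically impossible and points at bad source data. Column names are
# assumptions about the Excel sheets being imported.
import pandas as pd

# sheet_name=None loads every sheet in the workbook as a DataFrame.
sheets = pd.read_excel('fiber_cables.xlsx', sheet_name=None)
cables = pd.concat(sheets.values(), ignore_index=True)

# Every cable occupies one port on each end; stack both ends in one column.
ports = pd.concat([cables['a_port'], cables['b_port']], ignore_index=True)

conflicts = ports.value_counts()
print('Ports with more than one cable connected:')
print(conflicts[conflicts > 1])
```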

Your DCIM vendor or its partner should be able to facilitate the import. Make sure you cover this in the procurement negotiations: what kind of data formats can they import? Should you supply the data in a specific format?

This also brings us back to the basic datacenter knowledge of the DCIM vendor/partner. I have seen people import Excel lists of fiber cables and connect them to PDUs… The DCIM vendor/partner should provide you with a worry-free import experience.

Create phases in the data import and have your (already trained) senior users perform acceptance tests. They know your datacenter layout and can check for inconsistencies.

Prepare to be flexible during the import; not everything can be modeled the way you want it in the software.

For example, when I bought my first DCIM tool in 2006 it couldn't model blade servers yet, and we needed a workaround. Make sure the workarounds are known and supported by the DCIM vendor; you don't want to recreate all your workaround assets when the software update that supports the correct models finally arrives. The DCIM vendor should be able to migrate this for you.

Integration.

The first article drilled down into the importance of integration. Make sure your DCIM vendor can accommodate your integration wishes.

Integration can be very complex, and can mess up your data (or worse) if not done correctly. Test extensively, on non-production data, before you go live with integration connections.

The integration part of the implementation process is very suitable for a phased approach. You don’t need all the integrations on day one of the implementation.

Involve IT information architects if you have them within your company, and make sure the external vendors of the affected systems are connected to the project.

Roadmap and influence.

Ask for a software development roadmap and make sure your wishes are included before you buy. The roadmap should show you when new features will be available in the next major release of your DCIM tool.

The DCIM vendor should also provide you with a release cycle showing the scheduled minor releases with bug fixes. When you receive a new release it should include release notes mentioning the specific bugs that are fixed and the new features included in that release. Ask the DCIM vendor for an example of the roadmap and release notes.

During the purchase process you may have certain feature requests that the vendor cannot fulfill yet. Especially new technology, like the blade server example I used earlier, will take some time to appear in a DCIM software release. This is not a big problem as long as the DCIM vendor is able to model it within a reasonable time.

One way to handle missing features is to make sure they are on the software development roadmap, and to make the delivery schedule part of your purchase agreement.

After you have signed the purchase order, your influence on the roadmap will become smaller. They will tell you it doesn't… but it does… Urge your DCIM vendor to start a user group. This group should be fully facilitated by the vendor and provide valuable input for the roadmap and the future of the DCIM tool. A strong user group can be of great value to the DCIM vendor AND its customers.

Got any questions on real-world DCIM? Please post them on the Datacenter Pulse network: http://www.linkedin.com/groups?gid=841187 (members only)


Before you jump into the DCIM hype…


<English cross post with my DCP blog>

You're ready to enter the great world of DCIM software and jump right into the hype?

Do you actually know what you need from a DCIM solution? What are your functional requirements?

So before you jump in, let's take a step back and look at DataCenter Information Management from the 40,000-foot level: the datacenter facility information architecture.

Let's start with 'data':

Data is all around us in the datacenter environment. It's on the post-it notes on your desk, in the dozen Excel files you maintain to report and collect measurements, and in the collection of electrical and architectural drawings sitting in your desk drawer.

A modern datacenter is filled with sensors connected to control systems. Some of the equipment is connected to central SCADA or BMS systems; some handles all process control locally at the equipment. HVAC, electrical distribution and conversion systems, access control and CCTV: they all generate data streams. With the growth of datacenters in square meters and megawatts, the amount of data grows too.

The introduction of PUE and the focus on energy efficiency have shown us the importance of data, and especially of data analysis. For most of us this has introduced even more data points, but it has enabled better analysis of our datacenter's performance. So: more data has enabled more efficiency and a better return on investment. Some of us could even say they have entered the BigData era with datacenter facility data.

DCIM can play a role in the analysis of all this data, but it's important to know where your data is first. Where is the current data stored? What are the data streams within your datacenter? What data is actually available, and what data actually matters to your operation? It's a false assumption that all the data needs to be pulled into a DCIM solution; that depends on your processes and your information requirements.

Process

Every datacenter has its collection of structured activities or tasks that produce a specific service or product for an internal or external customer. These are the primary processes, focused on the services your datacenter needs to provide. Examples are operations processes like Work Orders or Capacity Management.

These primary processes are assisted by supporting processes that make the core (primary) processes work and optimize them. Examples are Quality, Accounting or Recruitment processes.

Identifying the primary and supporting processes in your datacenter enables you to optimize them, by executing them in a consistent way every time and checking the output.

If you run an ISO 9001 certified shop, you will definitely know what I'm talking about.

To run the processes we need information. Information is used in our processes to make decisions. The needed information can be collected and supplied by an individual or an (IT) system.

When data is collected it's not yet information; applying knowledge turns data into information. IT systems can assist us in creating information from data, using built-in or collected knowledge.

Identifying your datacenter processes also enables you to get a grip on the information that is needed to move the processes forward. Is this information available? What is the quality of the information and the process output? How much time does it take to make it available? Can this be optimized?

DCIM solutions can assist you in creating information from data, and provide information and process optimization. Most DCIM solutions depend on built-in knowledge of how datacenters work and operate to facilitate this and optimize processes.

DCIM is only one of the applications used to support and optimize our datacenter processes. To support the full stack of processes we need a whole range of applications and tools, anything from Planning to Asset Management to Customer Relationship Management (CRM) to SCADA/BMS tools.

Most of us already have some type of SCADA or BMS system running in our datacenter to control and monitor our facility infrastructure. This SCADA or BMS system will handle typical facility protocols like Modbus, BACnet or LonWorks. The programming logic used in most SCADA/BMS systems is not something found in typical DCIM solutions.

With the growing number of sensors and their data, the SCADA/BMS system must be able to handle hundreds of measurements per minute. It must store and analyze the provided data, and be able to react to it by controlling things like remote valves and pumps. This functionality is also typically not found in DCIM solutions. (So SCADA/BMS does not equal (!=) DCIM; a sketch of this kind of control logic follows below.)
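To illustrate the kind of control logic that lives in the SCADA/BMS layer, and typically not in DCIM, here is a deliberately simplified sketch. Everything in it is made up: real systems run this kind of loop in PLCs with hard safety interlocks, and the sensor and pump functions are placeholders for fieldbus I/O.

```python
# Deliberately simplified sketch of SCADA-style control: switch a pump on
# hysteresis around a level setpoint. All names and values are illustrative.
import time

LEVEL_HIGH = 80.0   # % - start the pump above this level
LEVEL_LOW = 40.0    # % - stop the pump below this level

def read_level_sensor() -> float:
    """Placeholder for a fieldbus read (e.g. a Modbus register)."""
    raise NotImplementedError

def set_pump(running: bool) -> None:
    """Placeholder for a fieldbus write to the pump contactor."""
    raise NotImplementedError

pump_running = False
while True:
    level = read_level_sensor()
    if level > LEVEL_HIGH and not pump_running:
        set_pump(True)
        pump_running = True
    elif level < LEVEL_LOW and pump_running:
        set_pump(False)
        pump_running = False
    time.sleep(1)  # one loop of many; together they handle hundreds of measurements per minute
```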

Anyone running a production datacenter will already have a collection of applications supporting their datacenter processes. You may have a ticketing system, a CRM application, MS Office applications, etc. Sometimes DCIM is perceived as the only tool you need to manage your datacenter, but it will definitely not replace all your current tools and applications.

Model

Now that you have identified your data, processes and current applications, it's time to focus on what you need DCIM for anyway: defining your functional requirements.

One way of assisting you in this definition is creating your own datacenter facility information model.

IT architects are trained in creating information models, so if you have any walking around, ask them to assist you.

An example of such a model is the one the 451 Group created for their DCIM analysis, featured in the DCK Guide to Data Center Infrastructure Management (DCIM). (The model doesn't cover the full scope for every organization, but it helps me explain what I mean in this blog…)

[Figure: the 451 Group DCIM model]

The model displays the functionality fields that would typically exist when operating a datacenter.

You can use a model like this to identify what functionality you currently don’t have (from a process and application perspective) and what can be optimized.

It also enables you to plot your current tools on the model and identify gaps and overlap. In this example I have plotted one of my SCADA/BMS systems on the (slightly modified) model:

[Figure: SCADA/BMS system plotted on the model]

I have also plotted the DCIM need for that project:

[Figure: DCIM needs plotted on the model]

Using models like this will give you a sense of what you actually expect from a DCIM solution and assist in creating your functional requirements for DCIM tool selection (RFP/RFI).

Integration is key

Modern IT information management consists of collections of applications and datastores, connected for optimal information exchange. IT information and business architects have tried the 'one application to rule them all' approach before, and failed. Because creating information islands doesn't work either, we need to enable applications and information stores to talk to each other.

You may have some customer information about the usage of datacenter racks in a CRM system like Salesforce. You may already have some asset information about your CRACs in an asset management system, or maybe in a procurement system. This is all interesting and relevant information for your 'datacenter view of the world'. Connecting all the systems and datastores directly could get really ugly, time-consuming and error-prone:

[Figure: point-to-point connections between applications and datastores]

IT architects already struggled with this some time ago when integrating general business applications. That struggle spawned things like Service-Oriented Architecture (SOA), the Enterprise Service Bus (ESB) and the Application Programming Interface (API): all fancy words (and IT loves its three-letter acronyms) for IT architectural models that enable applications to talk to each other.

[Figure: applications connected through a service bus]

The DCIM solution you select needs to be able to integrate into your current world of IT applications and datastores.

When looking at integration, you need to decide which information is authoritative and how the information will flow. Example: you may have an asset management system containing unique asset names and numbers for your large datacenter assets like pumps, CRACs and PDUs. You would want this information to be pushed out to the DCIM solution, but changes to the asset names should only be possible in the asset management system. Your asset management system would then be considered authoritative for those information fields, and information will only be pushed from the asset system to DCIM and not vice versa (the flow). A minimal sketch of this one-way flow follows below.
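The sketch below illustrates such a one-way, authoritative flow; the endpoints, field names and the assumption that both systems expose a JSON/REST interface are all made up for illustration.

```python
# Sketch: push asset names from the authoritative asset management system
# into DCIM. Information flows asset system -> DCIM only, never back.
# Endpoints and field names are hypothetical.
import requests

ASSET_API = 'https://assets.example.local/api/assets'   # authoritative source
DCIM_API = 'https://dcim.example.local/api/assets'      # receiving DCIM system

assets = requests.get(ASSET_API, timeout=10).json()

for asset in assets:
    # The asset system owns 'asset_number' and 'name'; DCIM should treat
    # these fields as read-only and reject local edits to them.
    payload = {
        'asset_number': asset['asset_number'],
        'name': asset['name'],
        'type': asset['type'],   # e.g. pump, CRAC, PDU
    }
    requests.put(f"{DCIM_API}/{asset['asset_number']}",
                 json=payload, timeout=10).raise_for_status()
```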

Integration also means you don't have to pull all the data from every available data source into your DCIM solution. Select only the information and data that really add value to your DCIM usage. Also be aware that integration is not the only way to aggregate data: reporting tools (sometimes part of the DCIM solution) can collect data from multiple datasources and combine them in one nice report, without the need to duplicate information by pulling a copy into the DCIM database.

The 451 Group model does an excellent job of displaying this need for integration, by showing the 'integration and reporting' layer across all layers.

Using your own information model you can also plot integration and data sources.

[Figure: integration and data sources plotted on the model]

Integration within the full datacenter stack (from facilities to IT) is also key for the future of datacenter efficiency, as I mentioned in my "Where is the open datacenter facility API?" blog.

So, to summarize:

  • Look at what data you currently have, where it is stored and how that data flows across your infrastructure.
  • Look at the information and functionality you need by analyzing your datacenter processes. Identify information gaps and translate them into functional requirements.
  • Look at the current tools and applications: which applications to replace with DCIM and which to integrate with DCIM. What are the integration requirements and which information source is authoritative?
  • Create your own datacenter facility information model. Position all your current applications on the model. (If you have in-house IT (information) architects, have them assist you…)

Preparing your DCIM tool selection this way will save you from headaches and disappointment after the implementation.

In my next blog we will jump to the implementation phase of DCIM.

 

More resources:

Full credits for the DCIM model used in this blog go to the 451 Group. Taken from the excellent DCK Guide to Data Center Infrastructure Management (DCIM) at http://www.datacenterknowledge.com/archives/2012/05/22/guide-data-center-infrastructure-management-dcim/

Disclosure: between 2006 and 2012 I selected, bought and implemented three different DCIM solutions for the companies I worked for. At the time I was also part of either the beta-pilot group for those vendors or their Customer Advisory Board. That doesn't make me a DCIM expert, but it generated some insight into what is sold and what actually works and gets used.


Where is the open datacenter facility API?

<English cross post with my DCP blog>

For some time the Datacenter Pulse top 10 has featured an item called 'Converged Infrastructure Intelligence'. The 2012 presentation mentioned:

Treat the DC infrastructure as an IT system;
– Converge in the infrastructure instrumentation and control systems
– Connect it into the IT systems for ultimate control
Standardize connections and protocols to connect components

With datacenter infrastructure becoming a more complex system and the need for better efficiency within the whole datacenter stack, the need arises to integrate layers of the stack and make them ‘talk’ to each other.

This is shown in the DCP Stack framework by the need for 'integrated control systems', going up from the (facility) real-estate layer to the (IT) platform layer.

So if we had these 'integrated control systems', what would we be able to do?

We could:

  • Influence behavior (you can't control what you don't know): application developers can be given insight into their power usage when they write code, for example. This is one of the steps needed towards more energy-efficient application programming. It will also provide more insight into the complete energy flow and more detailed measurements.
  • Design for lower-TIER datacenters: when failure is imminent, IT systems can be triggered by signals from the facility equipment to move workloads to other datacenter locations.
  • Design close-control cooling systems that trigger on real CPU and memory temperature instead of room-level temperature sensors. This could eliminate hot spots and focus the cooling energy consumption on the spots where it is really needed. It could even make the cooling system aware of an oncoming throttle-up of IT systems. (A small sketch of this idea follows after this list.)
  • Optimize datacenters for the smart grid. The increase in sustainable power sources like wind and solar energy raises the need for more flexibility in energy consumption. Some may think this is only the case when you introduce onsite sustainable power generation, but the energy market will also be affected by the general availability of sustainable power sources. In the end, the ability to be flexible will lead to lower energy prices. Real supply-and-demand management in the datacenter requires integrated information and control across the facility and IT layers of the stack.
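The close-control cooling idea in the list above can be made concrete with a small sketch. This is a hedged illustration, not a real control implementation: it assumes a Linux host where the psutil library exposes a 'coretemp' sensor, and the mapping from CPU temperature to a CRAC supply-air setpoint is entirely made up.

```python
# Sketch: derive a cooling setpoint from real CPU temperature (illustrative).
import psutil

def hottest_cpu_temp() -> float:
    """Highest reported CPU core temperature in degrees Celsius (Linux/psutil)."""
    temps = psutil.sensors_temperatures().get('coretemp', [])
    return max((t.current for t in temps), default=0.0)

def crac_setpoint_for(cpu_temp: float) -> float:
    """Made-up mapping: ramp the supply-air setpoint down as CPUs heat up."""
    if cpu_temp > 85:
        return 18.0   # aggressive cooling, throttle-up expected
    if cpu_temp > 70:
        return 21.0
    return 24.0       # relaxed setpoint saves cooling energy

cpu = hottest_cpu_temp()
print(f'CPU {cpu:.1f} C -> supply-air setpoint {crac_setpoint_for(cpu):.1f} C')
```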

The gap between IT and facilities does not only exist between IT and facility staff, but also between their information systems. Closing the gap between people and systems will make the datacenter more efficient and more reliable, and it opens up a whole new world of possibilities.

This all leads to something that has been on my wish list for a long, long time: the datacenter facility API (Application Programming Interface).

I'm aware that we have BMS systems supporting open protocols like BACnet, LonWorks and Modbus, and that is great. But they are not 'IT ready'. I know some BMS systems support integration using XML and SOAP, but that is not based on a generic 'open standard framework' for datacenter facilities.

So what does this API need to be?

First, it needs to be an 'open standard' framework: publicly available, with no rights restrictions on the usage of the API framework.

This will avoid vendor lock-in. History has shown us, especially in the area of SCADA and BMS systems, that our vendors come up with many great new proprietary technologies. While I understand that the development of new technology takes time and a great deal of money, locking me into your specific system is not acceptable anymore.

A vendor-proprietary system in a co-lo or wholesale facility will lead to lock-in of the co-lo customers. This is great for the co-lo datacenter owner, but not for its customers. Datacenter owners, operators and users need to be able to move between facilities and systems.

Every vendor that uses the API framework needs to use the same routines, data structures and object classes. Standardized. And yes, I used the word 'standardized'. So it's a framework we all need to agree upon.

These two requirements are the big difference between what is already available and what we actually need. It should not matter whether you place your IT systems in your own datacenter or with co-lo provider X, Y or Z: the API will provide the same information structure and layout anywhere…

(While it would be good to have the BMS market disrupted by open source development, having an open standard does not mean all the surrounding software needs to be open source. Open standard does not equal open source and vice versa.)

It needs to be IT-ready. An IT application developer needs to be able to talk to the API just like he would to any other IT application API, with no strange facility protocols. Talk IP. Talk SOAP, or better: REST. Talk something that is easy to understand and implement for the modern application developer.

All this openness and ease of use may be scary for vendors, and even for end users, because many SCADA and BMS systems are famous for relying on 'security through obscurity'. All the facility-specific protocols are notoriously hard to understand and program against. So if you don't want to lose this false sense of security as a vendor: give us a 'read-only' API. I would be very happy with just this first step…

So what information should this API be able to feed?

Most information would be nice to have in near real time:

  • Temperature at rack level
  • Temperature outside the building
  • kWh at rack level, though other energy-related measurements would be nice too
  • Warnings/alarms at rack and facility level
  • kWh price (this can be pulled from the energy market, but that doesn't include the full datacenter kWh price, including for example a PUE markup)

(all if and where applicable and available)

The information owner would need features like access control for rack-level information exchange, and the ability to tweak the real-time features; we don't want to create unmanageable information streams, in terms of security, volume or frequency. A sketch of what a read-only endpoint could look like follows below.
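To make the wish more concrete, here is a minimal sketch of one read-only endpoint of such a facility API. It uses Flask, and every path, field name and value is made up; the kWh price calculation illustrates the PUE markup mentioned in the list above.

```python
# Sketch of a read-only facility API endpoint (all paths, names and values
# are illustrative). Plain JSON over HTTP(S): no facility protocol knowledge
# is needed on the consuming side.
from flask import Flask, jsonify

app = Flask(__name__)

MARKET_KWH_PRICE = 0.12   # assumed market price per kWh
PUE = 1.4                 # assumed facility PUE

@app.route('/api/v1/racks/<rack_id>')
def rack_status(rack_id: str):
    # In a real implementation these values would come from the BMS/SCADA layer.
    return jsonify({
        'rack': rack_id,
        'inlet_temp_c': 22.5,
        'power_kw': 4.2,
        'alarms': [],
        # full datacenter kWh price: market price scaled by PUE as a markup
        'kwh_price': round(MARKET_KWH_PRICE * PUE, 4),
    })

if __name__ == '__main__':
    app.run(port=8080)
```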

So what do you think the API should look like? What information exchange should it provide? And more importantly: who should lead the effort to create the framework? Or… do you believe the physical datacenter API framework is already here?

More:

Good API design, by Google: http://www.youtube.com/watch?v=heh4OeB9A-c&feature=gv


Datacenter & SCADA security

Last year the Platform voor Informatie Beveiliging (PvIB) published the article on SCADA security by Jeroen and me.

Here is a copy of the complete article.

Last week I received the news that it had been nominated for article of the year 2012:

It's always nice to receive such appreciation, but it made me realize that there are still many myths around the (in)security of SCADA and BMS systems within datacenters.

In the coming period I will pay some attention to this on my blog.


SCADA networks & security… continued

In recent weeks there has been quite some noise around SCADA networks and their security:

Sports hall hacked through wide-open SCADA system:

An exposed login gave Internet users unrestricted access to the control of physical systems such as air supply, temperature and alarms in a sports hall in Spijkenisse.

And:

Dutch hacker reveals open SCADA systems:

A Dutch hacker published a list of SCADA systems that are openly exposed to the Internet.

And:

Pumping stations in Zeeland hackable through SCADA leak:

The SCADA systems of the sewage pumps and pumping stations of the municipality of Veere turn out to be poorly secured. Hackers can manipulate the systems from home. The municipality is shocked.

And even on Nu.nl:

D66 and VVD demand clarification about hackers:

AMSTERDAM – D66 and VVD want minister Ivo Opstelten (Security) to clarify the fact that it is child's play to operate locks, pumping stations, sewage pumps and bridges over the Internet.

And EenVandaag devoted a broadcast to it:

Locks, pumping stations and bridges poorly secured:

It turns out to be child's play to remotely operate locks, pumping stations, sewage pumps and even bridges in the Netherlands over the Internet. EenVandaag features a report that shows, for example, how poorly the sewage pumps and pumping stations of the municipality of Veere are secured. With a few simple actions they can be operated from a home computer.

The article Jeroen and I wrote earlier about datacenters, SCADA and security has meanwhile been picked up by DatacenterWorks: http://datacenterworks.nl/uploads/DCW%209-2011.pdf . The trade journal 'Vakblad informatiebeveiliging' also picked it up for the members of the PvIB: http://www.pvib.nl/tijdschrift

The Nationaal Cyber Security Centrum (NCSC) has meanwhile put a factsheet online describing the security risks governments are exposed to. "Practice shows that many organizations are unaware of which of their systems are directly reachable from the Internet. Part of that unawareness stems from systems being built and/or managed by third parties, with whom no or insufficient agreements about security requirements have been made," the NCSC warns.

 

 


A virus in your emergency power generator…

Article by Jeroen Aijtink, CISSP, and Jan Wiersma, Int. Director EMEA DatacenterPulse.

In the summer of 2010 it became known that a very advanced worm had been found whose goal was to sabotage Iranian ultracentrifuges. By modifying program code in Siemens PLCs, the attackers could influence the motors of the centrifuges. The virus was given the name Stuxnet and was the first virus that specifically targeted industrial control systems.

Until recently, the idea that SCADA and industrial control systems were vulnerable to a cyber attack was merely theory to many. Based on the complexity of these systems and their unusual communication protocols, vendors and users considered themselves safe against such attacks. The reasoning was that industrial control systems such as PLCs are simply so different from normal IT systems that they are not an interesting target for traditional hackers. The differences that supposedly made industrial control systems an uninteresting target were:

  • Industrial control systems contain components such as PLCs, motor controllers and intelligent circuit breakers, knowledge of which lies far beyond that of most hackers.
  • Industrial control system components are deployed in widely branched networks with hundreds of parts, whose complexity is usually understood only by experienced engineers.
  • Industrial control system communication protocols such as Modbus and BACnet are unknown in the world of most hackers. Data packets in these environments cannot be interpreted without specific knowledge of this kind of system.
  • Without detailed knowledge of the specific system architecture, much of the system data means little to a hacker.
  • Industrial control systems contain no financial or personal data, which is normally the target of the traditional hacker.

Security specialists call a security strategy based on the system complexity and unique system architectures described above 'security through obscurity'. All of this has resulted in most SCADA networks being protected by little more than a password for the operating staff.

In recent years, however, the world of industrial control systems and SCADA has changed considerably. (Older) legacy industrial control systems consisted of special hardware, proprietary communication protocols and separate communication networks. Modern industrial control systems consist of standard PCs and servers that communicate via standard IT protocols such as IP, and share their network environment with other IT end-user networks. This change has delivered several benefits, such as reduced hardware costs and increased flexibility and usability of these industrial control systems. It gave the vendors of industrial control systems the opportunity to develop their systems on standard Windows or Unix platforms. The systems also gained the ability to easily share data and reports with other IT and network systems. As a result, however, we have seen the boundary between IT (office automation) environments and facility industrial control systems blur over the past few years. The advantages also bring along the disadvantages of the traditional IT environment: the increased risk of cyber crime.

In 2008 the American NIST wrote about this in their Guide to Industrial Control System Security: "Widely available, low-cost Internet Protocol (IP) devices are now replacing proprietary solutions, which increases the possibility of cyber security vulnerabilities and incidents."

That the increased risk of cyber crime forms a real threat was demonstrated by DOE engineers of the Idaho National Engineering Lab (USA) in the Aurora Project (2007). Together with hackers from the Department of Homeland Security (DHS), they launched a cyber attack with the goal of destroying a large diesel generator. A few minutes after the hackers gained access to the SCADA system, they had the generator under their control. A video shown in 2009 on the American CBS '60 Minutes' showed the 27-ton generator being started, beginning to shake violently, and being completely engulfed in smoke after a few seconds. The generator did not survive the cyber attack.

Video: http://www.youtube.com/watch?v=fJyWngDco3g

The Aurora Project thereby demonstrated that it was possible for hackers to inflict physical damage on a generator through network access. The hackers exploited vulnerabilities that are present in most industrial control systems in use today.

Securing a modern SCADA network

The fact that vendors have based modern SCADA systems on standard IT components and protocols does have the advantage that a lot of knowledge about securing such environments already exists.

The basis for a well-secured environment starts with determining the threats and risks that exist for the organization and its infrastructure. With facility systems combined into the IT environment, the statement 'but we have a firewall' is no longer sufficient. Think, for example, of the wireless network, visitors' laptops, or an account manager who regularly accesses the Internet via wireless networks in hotels or restaurants. Also pay attention to matters not directly related to ICT: what (public) information about the organization is available, what communication flows exist with maintenance suppliers, and how do I verify the maintenance engineer? The question 'what am I securing the organization against?' is an important one to answer. After all, every organization has different adversaries.

Good security measures are tuned to the threats and adversaries of an organization. Make sure security measures are usable and support the organization's processes. Besides technology, training employees in security is very important.

To secure a corporate network, creating different zones is the starting point. Each zone has its own function, and the transition between zones passes through a firewall. Communication with a less trusted zone, such as the Internet, passes through a proxy server in a demilitarized zone. The proxy server separates the traffic flows and inspects the content for viruses and malware. Also keep the datacenter for office automation and the facility systems separated from each other, in zones of their own.

Thanks to the different zones, the facility systems are separated from the IT environment, and communication flows towards them can be inspected with intrusion detection systems. A second advantage of introducing zones is a performance improvement, because communication flows do not interfere with each other.

Besides security within the network, servers, operating systems, applications and databases must be well secured and provided with the latest (security) patches and antivirus software. Server security is important because the firewall only checks whether communication is allowed, while antivirus software protects the servers against known viruses. Patches for the operating system ensure that security holes are closed. After installation, software must be maintained in an ongoing process.

More and more facility systems come with remote support from the vendor. Think carefully about what information may leave the organization. It can also be a good idea to give the vendor access only when there actually is a problem. Resolving faults on site may seem old-fashioned, but it provides the most control, especially when the engineer is accompanied by someone who can substantively assess what he is doing.

The above may seem inconsistent with the Jericho vision on information security of recent years. Jericho assumes networks without 'firewall moats', where security is applied where it is needed: on the servers and workstations. This would make the traditional firewall redundant: once security is integrally arranged on all systems, the firewall can be switched off. Many organizations still have a long way to go before that moment arrives.



Datacenter Management Tools overview

Data center infrastructure management (DCIM) tools have taken off over the past two to three years. The focus of these tools is the integration between IT systems and the physical datacenter environment. Roughly speaking, they address the following two areas: Asset Management and real-time monitoring.

Asset Management is about registering and tracking the components available in the datacenter: think of racks, cabling, cooling, power supply, etc. It covers the complete life cycle of the equipment, from purchase to disposal. In general, Asset Management follows the common rules and methods of ITIL; whole books have been written about this. Between all these components, relationships can be recorded: power chains can be recorded from the main distribution board down to the server, and network cabling from system to system. By combining all this data you can also set up capacity management: how much do I have, how much am I using, how much do I still need, and when do I hit the limits? This applies to rack space (rack units, or U), available cooling and/or power (Watts), the number of available fiber/copper ports, and available power outlets. A minimal sketch of such a capacity calculation follows below.
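The sketch below illustrates the capacity question with made-up numbers: given what a rack has and what it already uses, does a new device still fit?

```python
# Sketch of DCIM-style capacity management with made-up numbers:
# how much do I have, how much am I using, and what still fits?
from dataclasses import dataclass

@dataclass
class RackCapacity:
    units_total: int        # rack height in U
    units_used: int
    power_kw_total: float   # available power (with matching cooling)
    power_kw_used: float

    def fits(self, units: int, power_kw: float) -> bool:
        """Check whether a new device with this footprint can still be placed."""
        return (self.units_used + units <= self.units_total and
                self.power_kw_used + power_kw <= self.power_kw_total)

rack = RackCapacity(units_total=42, units_used=30,
                    power_kw_total=6.0, power_kw_used=4.8)
print('2U / 0.8 kW server fits:', rack.fits(2, 0.8))   # True
print('4U / 1.5 kW server fits:', rack.fits(4, 1.5))   # False: power limit hit
```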

Where these DCIM systems often started out as a static database, there has recently been a move towards integrating real-time data from temperature sensors and, for example, the main distribution board itself. This improves the insight into the behavior of the environment, and the capacity figures become more realistic because the gap between theory and practice is closed.

On the Asset Management side, real-time data is supplied by barcode or RFID systems that know at all times where specific components are located in the datacenter.

In November 2010, IDC published a decent report on the DCIM market, giving an overview of the vendors and tools in this market. Report here.
