Applying firefighter tactics to (IT) leadership

This week I will be celebrating my 15th year of active volunteer firefighter duty. As you naturally tend to do when celebrating milestones like these, I found myself reflecting on the past years and what I have learned.
One thing that specifically stood out were the moments in my IT leadership career where I applied firefighter techniques and skills I picked up over the years.
Most of them revolve around problem solving and how to get the most out of teams. While there is an obvious link between firefighters and solving issues in a high-pressure or crisis situation, I learned the same tactics also apply to any challenge I was confronted with.
When firefighters arrive at the scene of a fire, they always follow the same protocol:
- Assess the situation
- Locate the fire
- Identify & control the flow path
- Extinguish the fire
- Reset & evaluate
In business, and especially at higher leadership levels, some problems may seem very daunting, creating anxiety and leaving you feeling overwhelmed. Firefighters are used to stepping into highly unknown situations with confidence, and a protocol like the one above helps to gain control of the situation step by step.

Continue reading

Seven years of Cloud experience in ten Tweets.

With AWS celebrating 10 years after the launch of Amazon S3 in March 2006, and Twitter also celebrating 10 years, I wanted to revisit my ‘cloud rules’ published on Twitter and on my Dutch blog in 2011. The original was written after 2 years of working on an enterprise IT implementation of whatever was perceived as ‘cloud’ in 2009, building a Gov version of Nebula (the OpenStack predecessor) and starting to use AWS & Google Apps in enterprise IT environments.

As the original rules were published on Twitter with its 140-character limit, they lacked some nuance and context, so I converted them into a blog post. The original 7 rules from 2011, with context:

Even though the debate on a definition of ‘what is cloud’ has died down a bit, it still surfaces now and then. Given the maturity of the solutions, the current state of the market and the speed of change in the ‘cloud’ market, I still stick to my opinion from 5+ years ago: a generic definition of cloud is not currently relevant.

The most commonly used definition seems to be the one NIST (pdf) published in 2011, which provides a very broad scope. Looking at IT market developments over the last few years and the potential of what is still to come, we are continually refining these definitions.
As the ‘cloud’ products and services pushed to the market today will be common IT practice in a few years, we will slowly see the ‘cloud’ name being dropped as a result.

There is still a valid argument to have a common definition of ‘cloud’ and the delivery models within companies, to avoid miscommunication between IT and the business. The actual content of that internal definition can be whatever you want it to be, as long as there is a common understanding.

The definition debate in the general IT market will continue until the current hype phase has passed. As soon as we enter Gartner’s “Trough of Disillusionment” all marketing departments will want to move away from the ‘cloud’ term, and replace it with whatever the new hype is. We can already see this happening with the emergence of ‘DevOps’, ‘BigData’, ‘Internet of Things (IoT)’.

Just remember there is only one truth when it comes to ‘cloud’: “There is no cloud. It’s just someone else’s computer.”

Continue reading

The new normal – Cloud & developer enablement.

According to AWS CTO Werner Vogels, “Cloud is now the new normal.”

Where the first-day keynote at AWS’s ReInvent 2015 conference was all about enabling companies to migrate their current services to the cloud, the second-day keynote by Vogels was all about the ‘new normal’: developer enablement.

With new services like AWS Snowball, AWS Database Migration Service and AWS Schema Conversion Tool, AWS tries to smooth the migration path from old on-premises infrastructure & application deployments to its Infrastructure as a Service offering (EC2, RDS, VPC, S3, ..).

While these new services help companies move to a consumption model for compute, storage and networking, it is still very infrastructure-focused. Design decisions around (virtual) network layout, load balancers, and the build & management of the operating systems (Windows/Linux) are still the customer’s responsibility.
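To make that concrete, here is a minimal, hypothetical sketch (mine, not from the keynote) of what provisioning a single instance on the IaaS layer still asks of the customer, using Python and boto3. The region, CIDR ranges and AMI id are made-up placeholders:

```python
import boto3

# Hypothetical example: even "just an instance" on IaaS still forces the
# customer to decide on network layout, firewalling and the OS image.
ec2 = boto3.client("ec2", region_name="eu-west-1")  # region: assumption

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")                      # network layout
subnet = ec2.create_subnet(VpcId=vpc["Vpc"]["VpcId"],
                           CidrBlock="10.0.1.0/24")
sg = ec2.create_security_group(GroupName="web",
                               Description="allow http",
                               VpcId=vpc["Vpc"]["VpcId"])          # flow path / firewalling

ec2.run_instances(
    ImageId="ami-12345678",   # placeholder AMI: which OS to build & patch is still on you
    InstanceType="t2.micro",
    MinCount=1, MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
    SecurityGroupIds=[sg["GroupId"]],
)
```

Every one of those parameters is a design decision the cloud provider does not take off your hands at this level of the stack.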

Still needing to deal with all these elements holds developers back from moving fast as they go from idea to the launch of a new service, and it slows down the creation of real value for the company.

In the real ‘new normal’ world, the developer is enabled to deploy a new service by building & releasing something fast, without needing to worry about the infrastructure behind it. By stitching externally managed capabilities/services together in a smart way, the developer can move even faster.

Where in the past a developer would try to speed up releases through code reuse with, for example, software libraries, the availability of developer-ready services like a fully managed message queuing service (AWS SQS) or a push messaging service (AWS SNS) has enabled developers to move even faster without worrying about the manageability of the solution.
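As a rough illustration (my sketch, not AWS documentation), this is what consuming such a managed building block looks like from the developer’s side; it assumes AWS credentials are configured and that a queue named "orders" already exists:

```python
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")  # region: assumption
queue_url = sqs.get_queue_url(QueueName="orders")["QueueUrl"]

# Producer: hand the message to the managed queue and move on.
# Broker uptime, clustering and patching are AWS's problem, not the developer's.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer: a worker elsewhere picks up work whenever it is ready.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```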

Continue reading

The future of datacenter build & co-lo (or CIOs are getting out of the datacenter business – Part 2)

Last year my friend Tim Crawford wrote an excellent article on why CIOs should get out of the datacenter business. Tim focused on how today’s big corporates are moving away from building, owning or renting datacenter facilities in favour of consuming IT at higher levels of the stack.


As he focused on the migration of leading big companies, it leaves the question: what about the future Fortune 500 companies?

Continue reading

Code of Conduct

As a ‘code of conduct’ seems to be needed nowadays for interactions between people, especially at tech conferences, I am releasing my own ‘code of conduct’.

The following applies when you interact with me, listen to my talks or see any of my rants on social media:

1. Respect & integrity. I will treat you with respect by default, please extend the same courtesy to me. I have a strong views on certain issues that maybe completely the opposite of your view.

2. Acknowledge my culture. I’m Dutch. I’m direct, blunt and we founded ‘going Dutch‘. I acknowledge the fact that you may have another cultural background and therefore a different view of the world around us.

3. If you don’t like what I’m saying or how I’m acting, let me know. Or walk away. If you don’t confront me and just complain behind my back, you take away my ability to learn. There are some good guidelines for delivering feedback to someone. You may want to read them someday, if you want your feedback to resonate.

4. Confidentiality: by default I will keep any information you provide to me confidential. You can share anything I tell you with anyone, unless I specifically tell you the information I’m sharing is confidential.

5. I’m even more blunt on social media and while delivering keynotes. Just unfollow me if you can’t handle that. See the disclaimer: http://janwiersma.nl/?page_id=160


The problem with the Docker hype

Remember when the Cloud hype kicked off and we all looked mesmerised at the Cloud Unicorn companies (like Netflix) that got great benefit from Cloud usage? We all wanted that so badly. We wanted to get out of the pain of high maintenance cost and the lack of agility. Amazon, Google and Microsoft Azure all seemed to provide that, just by the click of a button.

In 2012 I did a short whitepaper on what it takes to move an application to the cloud, based on my painful experience with some Enterprise IT moves to the cloud. I stated, “The idea that one can just move applications without change is flawed.” There was not enough benefit in moving a monolithic, 10-year-old application to the cloud. That type of move may deliver small cost savings, but it is really just a hosting exercise. It could even be dangerous to move that way, because the application may not be suitable for the cloud provider’s reference architecture, which could for example lead to availability and performance issues. The unicorn benefits could only be gained if you changed your way of working and thinking.

The same goes for the Docker hype now:

I agree with all the potential that Docker unlocks: portability & abstraction. It is a game changer, and some even say ‘Docker changes everything’.

Hearing people talk about Docker at large conferences (like AWS ReInvent) feels like the first phase of the Cloud hype all over again. They state ‘Just docker-ize your app’ and all will be great. Surreal conversations with people that try to put anything and everything in a container: ‘Yes, just put that big monolithic app in a container.’

People seem to forget that Docker is an enabler for architecture elements like portability and micro-services (which lead to scalability).
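To give a feel for the kind of component that actually benefits from being containerized, here is a deliberately tiny, hypothetical service in plain Python: one endpoint, no shared state, trivially replaceable and easy to scale horizontally. A monolith stuffed into a container gains none of these properties.

```python
# Hypothetical single-purpose microservice: it answers one question and nothing else.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The price is a dummy value; a real service would look it up somewhere.
        body = json.dumps({"product": self.path.strip("/"), "price": 9.99})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Listen on all interfaces so a container's port mapping works.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```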

I highly recommend reading James Lewis & Martin Fowler’s article on microservices first: http://martinfowler.com/articles/microservices.html


Because the problem with the Docker hype currently? It makes it about the tool. And the tool alone will not fix your problem.

Other things to consider around Docker: Docker Misconceptions

Our Trust issue with Cloud Computing

Recently SDL (my employer) did a survey on customer ‘trust’ for the marketer.

Being in the IT space, I have dealt a lot with ‘trust’ over the last few years. Being responsible for the Cloud services delivery of my company’s SaaS & hosted products, we deal with clients evaluating and buying our services. My teams also evaluate & consume IaaS/PaaS/SaaS services in the market, on which we build our services.

The ‘trust’ issue in consuming Cloud services is an interesting one. IaaS platforms like Amazon abstract complexity away from their users. They are easy to consume. The same goes for SaaS services like Box.com or Gmail; the user has no clue what happens behind the scenes. Most business users don’t care about the abstraction of that complexity. It just works…

It’s the IT people that seem to have the biggest issue with gracefully losing control and surrendering data, applications, etc. to someone else. Control is an emotional issue we are often unprepared to deal with. It leaves us with the feeling ‘they can’t take care of it as well as I can…’ IT people especially know how complex IT can be, and how hard it can be to deliver the guarantees that the business is looking for. For many years we have tried to manage the rising complexity of IT within the business with tools and processes, never completely able to satisfy the business, as we were either too expensive or not hitting our SLAs.

Continue reading

German companies ask for Internet border patrol.

In the last year, multiple companies started serving German customers out of Germany-based datacenter locations.

There seems to be a particularly strong sentiment around security & privacy among German companies after the Edward Snowden leaks. The kneejerk reaction is to mandate that servers should sit within German borders, as if that would take any security & privacy concern away. Cloud providers are now starting to follow this customer demand.

Interestingly, this reaction is more sentiment-driven, as there is no legal ground to request this. Especially as more and more German companies are putting this in place as a default policy, regardless of the type of data (privacy-sensitive or not…).

Looking at the Federal Data Protection Act (Bundesdatenschutzgesetz, “BDSG”), it states that certain transfers of data (like personal data) outside of the EU need to be reported and approved, and that data controllers must take appropriate technical and organizational measures against unauthorized or unlawful processing and against accidental loss or destruction of, or damage to, personal data. Nothing says servers need to be in Germany.

Looking at other EU countries, Germany seems to be the only country where organizations express such behavior. The next in line could be Switzerland.

Continue reading

My move to SDL…

It’s the weekend before the holiday season and, just like last year, I find myself at a US airport making my way home… just in time for Christmas.

Sitting in the airport lobby, listening to Christmas songs, I can’t help but reflect on the past year.

A lot has happened and things have changed a lot. I left OCOM (LeaseWeb/EvoSwitch/Dataxenter) in September this year, after 2 years. Something that some of my peers in the market didn’t expect, but it was long overdue. For too long I couldn’t identify with the way the company was run and its strategy. Nothing good or bad here… just a big difference of opinion on vision and execution.

Over the last 2 months I have been able to talk to lots of different organizations in an effort to see what my next career step should be. I also needed some time to recuperate from my little USA adventure with LeaseWeb & EvoSwitch. It was a great project to participate in, but all the travel took a big toll on me and my personal life.

I also learned how passion for your work can be killed and what it takes to spark it again, and how people are motivated by the ‘Why’ in their jobs.

I had some good conversations with DCP board members Mark and Tim about my frustration with the lack of progress in the datacenter industry. It felt like I had been doing the same datacenter and cloud talks for the last 3 years, and things still weren’t moving. I wanted to have the opportunity to really make a difference.

Continue reading