
Seven years of Cloud experience in ten Tweets.

With AWS celebrating 10 years since the launch of Amazon S3 in March 2006, and Twitter also celebrating its 10th anniversary, I wanted to revisit my ‘cloud rules’ published on Twitter and on my Dutch blog in 2011. The original was written after 2 years of working on an enterprise IT implementation of whatever was perceived as ‘cloud’ in 2009, building a government version of Nebula (the OpenStack predecessor) and starting to use AWS & Google Apps in enterprise IT environments.

As the original rules were published on Twitter with its 140-character limit, they lacked some nuance and context, so I converted them into a blog post. Here are the original 7 rules from 2011, with context:

Even though the debate on a definition of ‘what is cloud’ has died down a bit, it still surfaces now and then. Given the maturity of the solutions, the current state of the market and the speed of change in ‘cloud’, I still stick to my opinion from 5+ years ago: a generic definition of cloud is not currently relevant.

The most commonly used definition seems to be the one NIST published in 2011 (pdf), which provides a very broad scope. Looking at IT market developments over the last few years and the potential of what is still to come, we are continually refining these definitions.
As the ‘cloud’ products and services pushed to the market today become common IT practice in a few years, we will slowly see the ‘cloud’ label being dropped as a result.

There is still a valid argument for having a common definition of ‘cloud’ and its delivery models within a company, to avoid miscommunication between IT and the business. The actual content of that internal definition can be whatever you want it to be, as long as there is a common understanding.

The definition debate in the general IT market will continue until the current hype phase has passed. As soon as we enter Gartner’s “Trough of Disillusionment”, all marketing departments will want to move away from the ‘cloud’ term and replace it with whatever the new hype is. We can already see this happening with the emergence of ‘DevOps’, ‘BigData’ and the ‘Internet of Things (IoT)’.

Just remember there is only one truth when it comes to ‘cloud’: “There is no cloud. It’s just someone else’s computer.”

Part of the FUD around cloud wants you to believe that lock-in is unique to cloud deployments, but as with any technology choice, you just need to be aware of the elements that block portability. Whether you deploy a VMware + NetApp based architecture within your own datacenter or deploy on AWS or Azure, your technology choice locks you into that specific reference architecture.
As long as you make sure these are conscious decisions, where the unique platform requirements are known and documented, you can define an exit strategy.

The exit strategy should also include ways to extract your data from the service when needed. Again, this is nothing new to the IT business: hosting & outsourcing have been part of our IT ecosystem for many years now, with external parties handling our data.

Most cloud application delivery is done over the internet, where end-user bandwidth may vary massively, from fiber-to-the-home down to slower mobile connections.
Given modern application architectures, frameworks and (CDN) delivery techniques, the current state of technology can easily help you overcome these obstacles… as long as you explicitly design for it.
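
As a hedged sketch of what ‘explicitly designing for it’ can look like (assuming a hypothetical Flask app serving static assets behind a CDN; the route and folder names are illustrative), long cache lifetimes let the CDN serve those assets from edge locations close to the user, softening the impact of slow connections:

```python
# Minimal sketch: serve static assets with aggressive cache headers so a CDN
# can cache them at the edge. The "assets" folder and route are assumptions.
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/assets/<path:filename>")
def asset(filename):
    response = send_from_directory("assets", filename)
    # Cache aggressively at the CDN and browser; use hashed filenames so a new
    # deploy automatically busts the cache.
    response.headers["Cache-Control"] = "public, max-age=31536000, immutable"
    return response

if __name__ == "__main__":
    app.run()
```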

Things will fail all the time: software will have bugs, hardware will fail and people will make mistakes. In the past, IT would focus on avoiding service failure by pushing as much redundancy as (financially) possible into its designs, resulting in very complex and hard-to-manage services. Running reliable IT services at large scale is notoriously hard, forcing an IT design rethink in the last few years. Risk acceptance, combined with a focus on the ability to quickly restore the service, has proven to be a better IT design approach. Combining this different design approach with the right level of monitoring and automation seems to provide the best results.

Many IaaS providers force their customers to follow these new design patterns, as the platforms they provide offer limited availability guarantees for single instances.
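
As a hedged illustration of this design-for-failure pattern (not any provider’s prescribed method), the sketch below retries a transient operation with exponential backoff and jitter instead of assuming a single instance or network call is always available; the operation passed in is a hypothetical placeholder:

```python
# Minimal sketch: retry a transient operation with exponential backoff + jitter.
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Run `operation`, retrying on failure with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:  # in real code, catch only transient error types
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Hypothetical usage: call_with_retries(lambda: fetch_order_status(order_id))
```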

The Reactive Manifesto provides some vital elements for understanding the new design approach: http://www.reactivemanifesto.org/

A very common mistake in IT is to apply traditional design philosophy to new cloud platforms like AWS or Google Cloud. Traditionally designed IT applications can’t just be ‘thrown into the cloud’ and be expected to work without any issues; this will result in expensive and/or unreliable IT services. Running your service on any IaaS platform provides all kinds of benefits, but only if you comply with the reference architectures given by the cloud providers.
(see also #4 & #5)

‘The cloud’ provides any company the ability to push out functionality that is highly standardized (commodity IT), allowing you to focus on services that provide a real competitive advantage to the business. Running a spreadsheet program or a word processor will not give you a business advantage, as your competitor has already bought and deployed the same capability. The most important question for CIOs nowadays is: which services should you focus on building & maintaining to gain that advantage?

So much has been written on the topic of shadow-IT and the changing role of the CIO. I recommend this piece by my friend Tim Crawford.

In 2014 I added 2 rules to my list:

Moving to cloud computing means gracefully losing control and surrendering data, applications, etc. to someone else. I have written extensively about this in ‘Our Trust issue with Cloud Computing’.

Given the international developments in data privacy regulation, it is important to identify your data flows across cloud vendors and around the world. Knowing your data, the applicable law and the cloud service provider’s reference architecture allows you to make risk-based decisions on where to host your data. This shouldn’t be about preventing cloud service use, but about forcing conscious decisions.
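
As a purely illustrative sketch of ‘forcing conscious decisions’ (the data classifications and regions below are invented examples, not legal guidance), a deployment step could check a simple residency policy before data is hosted in a given region:

```python
# Minimal sketch: a residency policy check before hosting data in a region.
ALLOWED_REGIONS = {
    "customer_pii": {"eu-west-1", "eu-central-1"},  # example: must stay in the EU
    "public_marketing": {"*"},                      # example: no residency constraint
}

def may_host(data_class: str, provider_region: str) -> bool:
    allowed = ALLOWED_REGIONS.get(data_class, set())
    return "*" in allowed or provider_region in allowed

# Example: a deployment script blocks putting PII in a non-approved region.
assert may_host("customer_pii", "eu-west-1")
assert not may_host("customer_pii", "us-east-1")
```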

The benefits of cloud consumption cannot be had by forcing your own design rules on the vendor. You can’t ask AWS to buy HP servers and NetApp storage because your internal policy demands it. You can’t ask a SaaS provider like Salesforce to add some firewalls and IDSs to its infrastructure design because your internal IT security team feels it’s a better design that way. Cloud services are delivered as-is, allowing them to scale while balancing cost.
Demands like these show that the IT organization doesn’t understand cloud service procurement, or might be better served by an outsourcing contract.

And after changing organizations for cloud adoption over the last 5 years, I added number 10:

Changing an IT department’s mindset and culture isn’t easy, but it is a needed change if one wants to be successful in deploying a cloud strategy. It requires change in the way Finance handles the IT budget, how the IT auditor evaluates IT processes, and how Development & Operations work together, down to retraining IT staff as certain roles in the organization become obsolete.
Like any change process that includes culture, it is a multi-year program requiring strong leadership. Most failing cloud strategies are a result of underestimating this, as technology is no longer the issue; it’s all about people…
