2016-10-25

Pre-OpenStack Summit Post

I am on my way to the summit – cutting it fine, as I will only be arriving after the keynotes have started on Day 1. That is part of my life as a religious Orthodox Jew.

Just a few hours ago, I finished the festival of Sukkot, a festival where we ‘leave’ our homes for 8 days and move to a temporary house. Well, we do not really leave the house – but we eat all our meals in the Sukkah for the whole festival.

It symbolizes the fact that we have faith in G-d and remember that life was not always so easy and that we left slavery in Egypt many years ago to become a free nation.

Going back to the title of the post.

Barcelona Summit

The main reason I am going to this summit is that I am co-presenting a session on the work
Shamail Tahir and I have been leading with the Active User Contributor (AUC) working group over the past few months.

If you are interested in more details about the work – I will not rehash the post that has already been published - Recognizing active user contributors to OpenStack.

Please take a minute or three to read it over.

Great!! You are back.

I think that we are at a time where OpenStack is about to change, and for the better. Traditionally, Operators and Users have been neglected and left out of many of the decisions made in the projects, something which I have vocally opposed many a time (you can find most of my posts here), especially this one - We Are All OpenStack! Are We Really????.

A number of things have happened recently – and will continue to evolve over the next few months – which will enable Users and Operators to actually have a voice (at least that is my true hope and belief).

Proper representation, real recognition, and hopefully some more influence in directions and perhaps also priorities.

If you are going to be at the summit, make sure you join Shamail and me as we present the work that was done over the past few months.

I hope that one day we will be able to look back on this period and understand that we are all now one community.

Looking forward to seeing you there!

2016-10-16

VMware on AWS - My Thoughts

The world shook a few days ago, with the announcement of a partnership between VMware and AWS.

Screenshot at Oct 15 21-46-03

A number of posts have been released by bloggers and analysts about what this actually means, but I would like to highlight three of them, along with insights from the joint announcement and my thoughts on the matter as a whole.

So first, some history. VMware has always perceived AWS as a competitor. It is no secret that VMware over the years (https://gigaom.com/2013/03/01/vmware-stick-with-us-because-amazon-will-kill-us-all/) has warned its customers about going to AWS. Some of the claims included (but were not limited to):

  • Why moving to AWS is a bad thing.
  • One-way ticket – vendor lock-in
  • Enterprise workloads are not suitable
  • The list goes on.

VMware even has (or had) specific playbooks and marketing decks that were used to explain to their customers why vCloud (or vCloud Air) was a much better choice than AWS.

So first, the facts.

The Announcement

This was the slide that was presented on the live broadcast.

Screenshot at Oct 15 21-48-35

  1. An integrated hybrid cloud service.

    To me that means that I can move my workloads from my on-premises Private Cloud to a Public Cloud Provider – and in this case it would be AWS.

    One thing that should be made clear off the bat.

    The offering that was announced is not a hybrid solution. What VMware is offering is the option to place your workloads either in your on-premises Private Cloud or in another Private Cloud which is located in a 3rd party datacenter – in this case, AWS. VMware are trying to position and sell this as something that it is not.

    You are not making use of AWS as a cloud provider – all you are doing is using them as a 3rd party bare metal provider.
  2. Jointly Architected Solution

    Yes, there was some work needed on both sides. But honestly – I think the work was almost completely on the part of VMware, and hardly any investment on the part of AWS. Let me explain why.

    If you look at what AWS offers today, they are not adding, removing, or changing anything at all in their infrastructure to accommodate this offer. Perhaps they are ensuring that there will be enough capacity to run the solution – but nothing besides that has to change.

    It could be that I am wrong – but this is how I envision the product working.

    Once a new Cloud is ordered (see the screenshot from the Demo below), VMware will go and request the appropriate bare metal hosts from AWS – by invoking an API – install ESXi on the hosts and then provision all the additional building blocks on top of it (a rough sketch of this flow appears right after this list). I assume that a lot of the work to bring up this infrastructure will use the work and the expertise of the people in Project Zombie, which is already being used (I hope) for building up vCloud Air.

    Screenshot at Oct 15 21-18-38

    What does AWS have to do to allow this solution to work? Nada, it already exists – anyone can request a bare metal server.

    On second thought, I may be exaggerating – they probably did and will provide an AMI (Amazon Machine Image) with ESXi pre-installed – to make things move a bit faster.

    (An interesting tidbit of info that comes out of this relationship is that the AWS proprietary hardware will now be certified to run ESXi.)

    So if we look again – what effort was needed on AWS’s part? I assume the AMI image (perhaps some other background stuff which we do not know about). From the VMware perspective – writing the code to call the correct AWS APIs for the bare metal nodes, the code to provision all the nuts and bolts on the servers, and of course let us not forget the required work that went into certifying the proprietary AWS hardware. Whether you are aware of it or not, AWS builds its own hardware in direct partnership with hardware manufacturers; they don’t run brand names. At the scale they are operating, it does not make sense.

    (So just as a side bit of information – unless this is a specialized version of ESXi – and it might be – I assume that the changes will make it into the default code base at some point, because maintaining a specific branch just for AWS does not make sense. In the future you will probably be able to deploy your own bare metal node from AWS and install ESXi on it – all on your own – without the service from VMware. It will not be as polished as the service provided by VMware – but it should be possible. Once they have opened this Pandora’s box – it will be hard to close it.)
  3. Primary Public Cloud Solution – delivered, sold and supported by VMware

    So as I said above – this is not really a public cloud solution – this is a Private Cloud running on AWS, who are the suppliers of the bare metal servers. In this way AWS is no different from any other vendor who provides the same service.

    They are not offering a public cloud solution running on top of AWS. They are not offering the option for you to deploy a Cloud solution that you will be able to sell to your customers. It is a vSphere, VSAN and NSX environment that you use to your heart’s desire. Your applications can use all of AWS’s other services – but you can already do that today – without running VMware on AWS.

    But it is not a Public Cloud.
  4. Run any application across vSphere based Private, Public and Hybrid Clouds 

    Finally something I can agree with!!
  5. AWS will be VMware’s primary Public cloud infrastructure partner, and VMware will be AWS’s primary private cloud partner

    So this one is interesting.

    Firstly, if I were a VMware vCloud partner – I would be very, very worried, because there is no one that can compete with the scale of AWS, and a statement like this one – saying that we have one main partner – would put a damper on other partners. For example, the partnership announced just over two months ago with IBM (https://www-03.ibm.com/press/us/en/pressrelease/50420.wss).

    The other side of the equation – AWS using VMware as a private cloud partner – is not something that I would take too seriously. AWS has always believed, and still believes, that everything should run in the Public cloud.

    What does AWS have to gain from this? Absolutely everything!!!

    VMware are trying to position this offer as a way to start using AWS resources – not with any kind of integration, mind you – because from the information currently made available, there is no special integration done with VMware to make use of the AWS resources. Yes, the announcement said they will be able to use AWS services – but it was also stated that the benefit here will be the proximity to AWS resources in each location – nothing more. That is a huge plus, but then again, this is something that anyone can do today, running from within AWS or from within their own datacenter. It is just a matter of proximity, not any technological advantage.

    In addition – when VMware users start learning about the plethora of available services that they can use while in AWS, they will see that it might start to make sense to use them natively – without running on VMware first.

    Screenshot at Oct 15 22-13-10

    The above screenshot was from Jeff Barr’s blog post, which I think points towards the future direction and why AWS were ready to accept this with open arms.

    They view the workloads that will run in VMware on AWS as old-fashioned, historical applications (and there will always be a need for such applications). And when you, the User, are ready to grow up and move forward to the future – we are ready to help and will accept you with open arms; you are already halfway there.

    The same applies when you are interested in scaling your application – then we have the capacity and the geographical spread to help you out. Just know that running it on VMware might not be your best bet.

    I am sure they are betting on the fact that once they have the workload in an AWS datacenter – the path for migrating off of VMware into AWS native will not be so scary.
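
To make my guess in point 2 above a little more concrete, here is a minimal sketch (in Python) of the kind of orchestration I imagine sitting behind the “order a new Cloud” button. Everything in it is an assumption – the bare metal request, the ESXi AMI, the function and class names – none of it reflects VMware’s or AWS’s actual implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BareMetalHost:
    host_id: str
    image: str = "none"

@dataclass
class Sddc:
    hosts: List[BareMetalHost] = field(default_factory=list)
    services: List[str] = field(default_factory=list)

def request_bare_metal_hosts(count: int, region: str) -> List[BareMetalHost]:
    # Stand-in for whatever bare metal capacity API VMware would call on the AWS side.
    return [BareMetalHost(host_id=f"{region}-host-{i}") for i in range(count)]

def provision_vmware_cloud(num_hosts: int = 4, region: str = "us-east-1") -> Sddc:
    # 1. Request dedicated bare metal capacity from AWS.
    hosts = request_bare_metal_hosts(num_hosts, region)
    # 2. Lay ESXi down on every host - perhaps from a pre-built ESXi AMI.
    for host in hosts:
        host.image = "esxi-ami"
    # 3. Provision the building blocks on top of the imaged hosts.
    sddc = Sddc(hosts=hosts)
    for service in ("vCenter", "VSAN", "NSX"):
        sddc.services.append(service)
    return sddc

if __name__ == "__main__":
    cloud = provision_vmware_cloud()
    print(f"{len(cloud.hosts)} hosts imaged with {cloud.hosts[0].image}, "
          f"services: {', '.join(cloud.services)}")

However it is actually implemented, the interesting part is that every one of these steps can be driven by an API call rather than by someone racking and stacking servers.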

Elastic DRS

Screenshot at Oct 15 21-37-23

Technologically speaking, this is being made out to be a much bigger feature than it really is. For a shop that is heavily into automation – how hard is it to actually scale up your cluster? Honestly – I think the biggest thing would be to order the actual hardware, rack it and stack it. The rest of it is really trivial; installing ESXi, applying Host Profiles and adding the host to a cluster is all something that we know how to do, and if we really did some work, we could automate it up the wazoo.

This is now a very simple thing for VMware to do in this solution – all they need to do is make an API call to provision a new bare metal server and do all the magic behind the scenes. No more rack and stack, no more waiting for hardware to arrive – no more doing things the old way. They have a theoretically infinite supply of hardware to run VMware workloads – so this makes perfect sense.
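
For what it is worth, here is a tiny Python sketch of what such a scale-out loop could look like, using the same imaginary bare metal API as the sketch above. The threshold, the utilization numbers and the provisioning step are all illustrative assumptions, not anything VMware has published.

CPU_SCALE_OUT_THRESHOLD = 0.80  # scale out when the cluster is 80% busy

def cluster_cpu_utilization(host_cpu):
    # Average CPU utilization across the hosts we are tracking (0.0 - 1.0).
    return sum(host_cpu) / len(host_cpu)

def maybe_scale_out(host_cpu):
    # Add one more (simulated) host when utilization crosses the threshold.
    if cluster_cpu_utilization(host_cpu) >= CPU_SCALE_OUT_THRESHOLD:
        # In the real service this is where the API call would go - order a new
        # bare metal server, image it with ESXi and join it to the cluster.
        host_cpu.append(0.0)
        print(f"Scaled out: cluster now has {len(host_cpu)} hosts")
    return host_cpu

if __name__ == "__main__":
    maybe_scale_out([0.85, 0.90, 0.78, 0.88])

The hard part was never the logic – it was always the lead time on physical hardware, and that is exactly what this offering removes.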

The Demo – was just that – a Demo

The demo showed the provisioning of an environment in AWS.

Screenshot at Oct 15 21-28-32

There is no way in hell, ever, that a 4-host cluster with VSAN and NSX could be provisioned in less than 3 minutes. The installation of all that software – including vCenter (especially vCenter) – is not that fast.

I would suppose that it would take an hour or two to actually provision all the pieces to get it up and running.

Screenshot at Oct 15 21-30-46

(So just for the purposes of bike-shedding: do you start paying per hour from the time that you click go – or from the time that the resources are fully available?)

Cost

Last but not least – the matter of how much this is going to cost.

There is no denying that this is going to open up a huge amount of options to people who previously were limited in scale because of power, space, hardware limitations, etc.

But this is not going to be cheap. And let us all be honest – the number of people that are going to use this on a per-hour basis is going to be minimal. The majority of customers will use this on a regular basis – trying to maximize the resources they can use for the lowest cost possible.

Doing the math on the amount of CPU and RAM available in the cluster size offered in the demo – it seems that the Dedicated Host type will probably be an M4.

The monthly cost of an M4 Dedicated Host (without even putting any VMware software on it) is $1,451.90; multiplied by 4 servers, that means we are looking at ~$6,000 per month for hardware alone.

I assume that VMware are going to want to factor into this the costs of licensing and operations – remember – this is a fully managed service that you are getting (which also brings up a question – how much control do you have over the underlying infrastructure, I assume none).

The current cost for a vCloud host (Dedicated Cloud) is just short of $6,000 per month – without support. I can envision VMware managing to keep the costs more or less the same as they have up until now – but it will definitely be a good idea to run a proper cost analysis to see if this is something that will be good for you and your organization.
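
Here is the back-of-the-envelope arithmetic in Python, using only the numbers quoted above; licensing, support and the management service are deliberately left out because we simply do not know those figures yet.

M4_DEDICATED_HOST_MONTHLY = 1451.90   # USD per host, hardware only
HOSTS_IN_CLUSTER = 4
VCLOUD_DEDICATED_MONTHLY = 6000.00    # roughly, without support

aws_hardware_monthly = M4_DEDICATED_HOST_MONTHLY * HOSTS_IN_CLUSTER
print(f"AWS hardware alone: ~${aws_hardware_monthly:,.2f} per month")
print(f"vCloud Dedicated Cloud: ~${VCLOUD_DEDICATED_MONTHLY:,.2f} per month")

That works out to $5,807.60 per month for the raw hardware – in the same ballpark as today’s vCloud Dedicated Cloud pricing, before any VMware licensing or operations costs are added on top.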

Summary

AWS will be the ones to benefit the most from this in the long run.

VMware are making this out to be a much bigger deal than it really is.

As always – please feel free to leave your thoughts and comments below.

2016-10-14

Saying Thank You!

So if I am already on a roll with saying thank you, I wanted to share with you all a post I wrote on LinkedIn a month ago (https://www.linkedin.com/pulse/saying-thank-you-maish-saidel-keesing)

Reposting it here in its entirety.


Thank you (Image source: http://www.flickr.com/photos/27282406@N03/4134661728/)

Two small words, but they make so much of a difference.

We take things for granted - every single day.

  • Life
  • Breathing
  • Our kids
  • Our family
  • Electricity
  • Email

All of these are things that we interact with every day, and only when they do not work, things go wrong, or are no longer there - do we wonder.

How can this be? Something so basic, so simple - it should be there.

It should work.

But nothing is there forever.

When was the last time you said thank you to your e-mail Admin – for making sure that you can check your email on your phone, at home, when you walk in the door to work every morning – anywhere and everywhere?

Did you say thank you to the utility company that makes sure your gas, water and electricity are working each and every day?

There are so many people that work their ass off to make this happen, come rain or shine, snow, hail or thunderstorm. Every day, every hour of the day.

I would like to share with you a short letter I wrote to a food company that supplies Kosher food, on a flight I was on last week.

Thank You

I received an answer back a week later (and am publishing with their permission)

Thank you

I cannot describe what a great feeling it gave me to receive this answer back.

Say thank you.

Say thank you to the doorman that opens the door for you.

Say thank you to those who clean your office every day.

Say thank you to those who serve you a drink on the plane.

Say thank you.

You never know how much those two small words will make a difference - for you and for all those around you.


Have a great weekend everyone!

2016-10-13

A Small Note to Thank two New Sponsors

Writing a blog is mostly fun, but it comes with a cost – not only of time and effort but also some cold hard cash (or actually, everything is paid for today with a credit card and it is all just numbers on a spreadsheet..) – be it for webhosting, domain registration – well, you all know the drill.

So it is time to thank two new Sponsors of this blog.

The first is SolarWinds – who have many products for the IT / Network / DB / VMware Admin, and some of them are even free.

The second sponsor that I would like to welcome is:

Nakivo – a vendor that has a backup solution for your workloads running in VMware or in AWS.

For any other potential parties who would like to sponsor my blog, please feel free to sign up here.

2016-10-09

I Don’t Want to Talk to my Cloud Provider

A short while back I participated in an internal event. A number of priority customers of our internal cloud service were invited for a feedback session, to voice their thoughts, listen to roadmap sessions and just to get to know each other.

There was one comment made there by one of the participants that has been on my mind since then, and it was something along the lines of:

"I have been using AWS longer than I have been using our internal cloud service – that is more than 5 years. Ever since I have started, I have only contacted AWS support once – when I managed to somehow corrupt my two-factor authentication token. I have never received a marketing call, never been invited to such a feedback forum. There has never been a need.

On the other hand, on our internal services, I am flagged as a priority customer, have weekly, bi-monthly, monthly feedback sessions, capacity planning sessions and escalation paths.”

This got me thinking, and I completely agree.

A service that I can say I use and rely on is something that just plain and simply works.

Let me give you an example. How would you feel if you had to have a direct line into your electricity company to escalate when your lights did not come on when you flicked the switch? If you had to meet with them to discuss how many appliances you were planning to use this month, or to tell them that you were planning on baking cakes for a party this week and would therefore need more electrical resources – so the electricity company could be ready for your surge in power consumption? Is that a company that you would say you can trust?

Probably not.

You would like the lights to go on every time you flick the switch, and not have to worry about leaving the air conditioner on the whole day because it is really hot that day. All I would need to know is that I will pay more that day or month – because I used more of the service you offered.

When we build a service – we should be doing it in such a way that the customer can rely upon it and use it without needing a direct line into the support team to ask questions, because that team has already provided the proper documentation to explain exactly what they need to know in order to use the service. You have provided additional channels and ways for them to support themselves. There will always be some who know more, and others who know less. There will even be times when your customers know a lot more and do things with your infrastructure that you would never have dreamed of doing on your own. Let them help others as well.

Provide them with the tools to share their knowledge with others, be it a Wiki that they can update or a chat room where they can also assist other customers. It will benefit your other customers and will benefit you in the long run, because you will have less load on your support team answering these questions.

Self-service portals for as much as possible – exposing as many of your services as APIs so that your end customers can consume them, without having to ask you to do it for them (see the small sketch below). All of these, and many more, are what make a successful service – both a public one, and even more so one in-house.
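
To make that last point concrete, here is a tiny sketch of the idea using only the Python standard library. The /volumes endpoint, the payload and the in-memory “catalog” are all hypothetical – the point is simply that the customer gets the resource through a documented API call instead of a ticket or a meeting.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CATALOG = []  # in-memory stand-in for whatever really provisions the resource

class SelfServiceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/volumes":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        volume = {"id": len(CATALOG) + 1, "size_gb": request.get("size_gb", 10)}
        CATALOG.append(volume)  # the real service would kick off automation here
        body = json.dumps(volume).encode()
        self.send_response(201)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. curl -X POST -d '{"size_gb": 50}' http://localhost:8080/volumes
    HTTPServer(("localhost", 8080), SelfServiceHandler).serve_forever()

No phone call, no weekly meeting – just an endpoint the customer can call whenever they need it.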

Put in as much automation as possible – because scaling people is so much harder than scaling a process.

Not everything can be infinite – and not everyone can build a service at scale like AWS – but this should be your goal – even if it is not possible at day-1.

I will leave you with the wish that my colleague gave to our internal cloud service team.

"I hope we come to a time where we do not have to contact you – or have frequent meetings to discuss this service. We just consume the service.

When we get to that point – then you know that you are providing a service that just plain and simply works, and you have done a really good job."

Some food for thought.

As always, please feel free to leave your thoughts, and comments below.

2016-09-20

Losing the Will to Share

I assume that it has been apparent that I have not been active on my blog. Not for a while at least.

The last time I actually wrote something here was just under 6 months ago.

I do enjoy information, I love consuming information – but about 6 months ago I lost my will to write here on my own blog. It is a shame – because I thoroughly enjoy this as my own place in the world, my place where I could vent, where I could provide insight, a place I could call my own.

It could very well be that this was due to the loss of a parent (well, actually both – because dementia and Alzheimer’s do not leave very much of my only living parent, who raised me most of my life) – but I dropped off the radar. I considerably reduced my social media involvement as well.

Everyone is entitled to their reasons, I have my own. I have contributed elsewhere – just not under my own name.

I do appreciate your patience – and now I am finally back in a place where I feel I am ready and will start to share my feelings, thoughts, tips and what-nots.

I do want to point out one thing. This used to be a very VMware centric blog.

Used to – I do not think that it has been that way for more than a year.

I hardly use VMware products anymore – definitely not on a day-to-day basis.

So – there will be some changes, in content – in focus, in presentation.

It does feel good to be back – so sit back and enjoy the ride.

2016-05-20

Sandboxed Malware Testing with Veertu

A while back I blogged about native virtualization on a Mac, and today I am pleased to host Clarence Chio with a guest post about a very interesting use case for using Veertu.

Clarence is a Security Research Engineer at Shape Security, working on the system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the “Data Mining for Cyber Security” meetup group in the SF Bay Area.

And without further ado…

Malware testing is a common task in a security professional’s workflow. For the uninitiated, malware testing involves examining the behavior and capabilities of malicious software by executing it in a controlled but realistic environment.

Why do malware testing?

On one level, you will be able to observe the malware’s actions close up, and understand how a piece of malware can impact (or has impacted) your systems. On a deeper level, you can dive into the actual code and use disassembly tools to perform binary analysis. Doing this can help you understand exactly what the malware is doing, and in some cases even find ways to neutralize the threat.

Windows malware often latches itself onto the system by making changes to the Windows registry and/or filesystem. Malware that makes network calls often also “phones home” to a Command and Control (C&C) host to exfiltrate information or receive further instruction on what to do after infection. By performing malware testing, what the malicious applications do will no longer be a mystery. You have a chance to peek through the fences and understand how attackers think.

How to do malware testing?

Malware testing is typically done in Virtual Machine (VM) lab environments. In this post, I will walk you through a series of steps that you can follow to set up your own Windows 7 malware testing lab on your Mac OS X machine using Veertu. The benefits of using Veertu over other legacy OS X virtualization software mainly come from Veertu’s implementation of a native hypervisor using Apple’s newly released Hypervisor.framework, resulting in a lightweight yet well-encapsulated solution for virtualization on Macs. In my comparisons, VMs performed significantly better on Veertu compared with alternatives, with a smaller host memory footprint and lower CPU utilization. This allows you to use system resources more efficiently and ensure that the malware runs in an environment that is as similar to bare metal as possible.

In addition, Veertu’s “VM read-only” mode allows you to make a base VM to which you can make ephemeral changes. Any effects and stored system state that the malware leaves on the read-only VM will be completely and safely removed simply by restarting the VM. This is analogous to many legacy virtualization products’ “snapshot” functionality, but comes without the hassle of having to manually snapshot system state, manage snapshots, and potentially accidentally retain some malicious state stored on the machine after reusing the VM.

Getting started with Veertu malware analysis

First of all, you need a paid version of the Veertu OS X native virtualization software, as well as a valid Microsoft Windows ISO. You can get the Veertu software from the Mac App Store. The free version of Veertu allows you to run a good selection of Linux VMs included in the Veertu Cloud library, but since most malware still targets Windows machines, having a Windows malware analysis lab would be nice. After going through the steps of configuring the Windows VM in Veertu, we will have a clean VM that we can start to configure for malware testing.

In this walkthrough, we will be using Windows 7 Home Premium to analyze the Petya Ransomware that has very recently (~ March 2016) become one of the most popular pieces of ransomware in the wild.

The next thing that you will want to do after launching your Windows VM is to install Veertu Guest Add-ons on your VM. With the Veertu guest VM window selected, click on the “Commands” drop down menu on your Mac menu bar, and select “Install Guest Add-ons”.

image


This will mount the Veertu Guest Add-ons installation disk as a removable disk in your VM. Simply run the installation script by following through the standard Windows software installation procedure. You will need to restart the OS on the VM for the installation to take effect.

install-guest-add-ons


Next, we will set up a shared directory between the Host OS (your Mac) and the Guest OS (your VM). This allows you to conveniently share files between host and guest systems. In order to do this, open the Veertu VM management window and select the dropdown menu for the VM that we just installed. Click the “Edit VM…” option, and go to the “Advanced” tab. You will be able to configure your shared folder there.

image
 

Malware execution script

For your convenience, here is a simple Windows batch file (.bat) that does 3 simple things:

copy "Z:\malware.bin" "C:\Users\John\Desktop\"
ren "C:\Users\John\Desktop\malware.bin" malware.exe
start /d "C:\Users\John\Desktop\" malware.exe

  1. Copy the malware named “malware.bin” from the shared folder to the Windows Desktop (the Windows user is John in this example).
  2. Rename “malware.bin” to “malware.exe” for conventional Windows execution.
  3. Execute “malware.exe”.

For this environment, we want this script to execute automatically upon boot. To do this, simply create a text file named “run-malware.bat” on your Mac and place it in the Veertu shared directory. By default, the shared directory is mounted as a shared network drive, “Z:\”.

We want this Windows script (called a batch file) to execute automatically on VM boot. To do this, we first move the script from the shared directory to the VM file system.

copy-run-malware-to-fs

To make this script run automatically on VM system boot, we go to the Start menu and open the “Startup” folder, then create a shortcut to the script (we copied it to the desktop earlier) and drag the shortcut into the “Startup” folder.

This Windows batch script now runs on system boot.

run-batch-script-on-boot

Enabling Veertu’s Read-Only mode

We are now ready to enable Veertu’s Read-Only mode by going to the same “Advanced” tab in the VM settings window, (as we did when we were configuring the shared folder earlier) and checking the “VM Read-only” checkbox.

vm-read-only
After doing this, your VM will be in a read-only state. Any further changes you make to system state will be ephemeral.

Watching the magic happen

Now, you have a malware analysis lab. Let’s demonstrate the efficacy of this lab environment by running malware on it. First, shutdown the VM and prepare your payload.

If you don’t yet have a Windows malware binary at hand, you can find one from reputable online sources such as Malwr.com. Downloading arbitrary binaries from the Internet is as dangerous as it sounds. “Fake” malware analysis sites have been known to exist for the sole purpose of distributing malware. Make sure that your source is reputable. Be extra careful in doing this, and make sure that you do not execute any of the binaries that you download on your host operating system. Most malware binaries that you download from these sites come with the “.bin” extension, as an entry-level measure to prevent you from executing it by accidentally clicking it.

Rename the malware to “malware.bin”, and place it in the Veertu VM shared directory. Then, start the VM. The script should start once the system boots.

ransomware-run

It works! The Petya Ransomware executes, and the system immediately reboots to a fake DOS screen stating that it is “Repairing file system on C:”. What the malware is actually doing is encrypting your “C:”. Once the encryption is done, it promptly informs you that you have fallen victim to the Petya Ransomware, and you need to pay some Bitcoin for the decryption code. We chose this particular ransomware for the demonstration because it is particularly tricky to get rid of. Even though it is possible to decrypt your hard drive without paying the ransom, it requires the physical removal of your hard drive.

Here comes the magic. Simply power-off the VM, remove the malware binary from the shared directory, and restart the VM. No more trace of the malware, and the VM’s hard disk image is no longer encrypted. As you might imagine, this shaves many cycles off the malware testing workflow, and gets rid of any VM snapshot management intricacies. You have a malware testing environment that you can use over and over again without having to reprovision or reinstall any tools on the guest OS.

Performing further analysis

Of course, just observing malware execution behavior by running the executable can hardly be called malware “analysis”. Browse the huge open-source list of Awesome Malware Analysis tools and more in-depth tutorials for how you can get started with observing malware behavior and perform binary analysis to dive into actual malware code.

Once you have identified a set of malware analysis tools that you can use to perform deeper malware introspection, simply turn off Veertu’s VM read-only mode to install these tools, then turn read-only mode back on, and continue analyzing malware.

Fin

“To know your Enemy, you must become your Enemy.”
- Sun Tzu’s “The Art of War”

Malware testing allows you to gain a deep understanding of the threats to your system, and understand trends in malicious software. And if you are a Mac user, then with Veertu you can create a highly optimized and streamlined malware analysis lab environment that you can use to dissect malware behavior.

2016-03-24

We Are All OpenStack! Are We Really????

Background

OpenStack has a vast community, globally distributed, spanning multiple time zones and all sorts and kinds. The community has a culture - a charter, a way of doing things. It is something that you have to learn to get used to - because the lay of the land is not always as straightforward as you would expect, not what you are accustomed to, and sometimes it might seem downright weird. But as the saying goes, "When in Rome, do as the Romans".

A thread on the OpenStack mailing list this past week (actually two of them) caught my eye and attention - so much so that I feel the need to write this post.

Ever since I started getting involved with OpenStack, I have noticed that there is a very obvious divide between the Developers (those who write and contribute code to OpenStack) and the User community (this is split into two parts - those who actually consume OpenStack - the end users, and those who deploy and maintain OpenStack - the operators). But before we get to the thread - let me explain the current situation in OpenStack for those who might not be acquainted with the way things work.

Anyone who wants to can sign up as an OpenStack Foundation member. All you need to do is sign up on the site, fill in a form, accept an agreement - and you are a full-fledged OpenStack Foundation member.

The 'benefits' that come with this are not many - and the obligations are not that demanding either.

2.2 Individual Members.
(a) Individual Members must be natural persons. Individual Members may be any natural person who has an interest in the purpose of the Foundation and may be employed by Platinum Members or Gold Members.
(b) The application, admission, withdrawal and termination of persons as Individual Members are set forth in the membership policy attached as Appendix 1 (“Individual Member Policy”).

As a foundation member - you can run as a candidate in the Individual Member Director elections, and of course you can vote in those elections. You are invited to participate in the User surveys that are published on a regular basis. And of course you can consider yourself a member of the OpenStack community. You are also eligible to run as a candidate for the Technical Committee elections (but you are not allowed to vote – see below).

Next come the Active Technical Contributors (ATC). These are the people that actually write the code (well at least that was the intention - I will explain later). The criteria for becoming an ATC are as follows:

(i) An Individual Member is an ATC who has had a contribution approved for inclusion in any of the official OpenStack projects during one of the two prior release cycles of the Core OpenStack Project (“Approved Contribution”). Such Individual Member shall remain an ATC for three hundred and sixty five days after the date of acceptance of such Approved Contribution.
(ii) An Individual Member who has made only other technical contributions to the OpenStack Core Project (such as bug triagers and technical documentation writers) can apply to the chair of the Technical Committee to become an ATC. The final approval of such application shall be approved by a vote of the Technical Committee. The term shall be for three hundred and sixty five days after the date of approval of the application by the Technical Committee.

This comes with obligations and benefits as well. The obligations are that you are expected to contribute code according to the guidelines and rules of the projects, code quality must be upheld, etc. etc.

The benefits of being an ATC are as follows.

  • You get a free pass to the OpenStack summit ($600 value).
  • You are eligible to vote in the bi-annual Technical Committee Elections.
  • You are also eligible to vote for the Project Team Lead Elections - which occur every six months - providing you contributed to that specific project over the last cycle.
  • You are considered an integral member of the OpenStack community. Up until the last summit - you were also granted access to the OpenStack Design Summit - something that was open only to ATCs (but this has changed since the last summit - or two summits ago - I cannot remember).

There are two other groups who are partially recognized by the OpenStack community. First, there are people who are contributors to the community - they submit bugs and review patches, but do not actually submit code to the projects. I assume that the idea is that once they start getting into the code - they will actually contribute code back into the projects in the long run and become ATCs.

And then there are the end users, and the Operators. They are the people who are trying to deploy, upgrade and maintain what we all call OpenStack.

  • They are the ones who are trying to get things working in OpenStack.
  • They are the ones who have to provide creative ways to supply solutions to their clients - because OpenStack does not do what it was supposed to - does not do it well enough, does it really badly - or has not got there yet.
    Two clear examples (for me) would be - bad telemetry for all instances (which is now being re-written) or Load balancing that is not enterprise ready.
  • They participate in Ops meetups (see the outcome from the last one in Manchester a month ago)
  • They contribute code - to the Ops repositories - which include scripts, tools, and much more.
  • They participate in working groups - such as the Enterprise Working Group, Operator Tags, Product marketing - there are many. The technical (a.k.a. Developer) community sees these participants as part of the user community. They do not see these contributions as part of the OpenStack product - they are deemed Downstream rather than Upstream, and not part of the OpenStack products.

 

The Problem

The thread I am talking about - was titled "Recognizing Ops contributions". It has been voiced many a time (and not only by me - even though I have been very vocal about it) that Operators should be recognized as part of OpenStack. I have actually run for the Technical Committee elections on the basis of this stand - but I was not at all successful.

OpenStack works on the 4 Opens (and this is an official OpenStack resolution!)

I would like to bring your attention to the following points in that resolution

The community controls the design process. You can help make this software meet your needs.

The technical governance of the project is a community meritocracy with contributors electing technical leads and members of the Technical Committee.

The obvious question though is - what is considered a contributor?

If the four opens are to be upheld - then I would expect that all contributors are equal.

Today that is not the case.

There have been many times when I personally, and many other operators as well, have asked for things to change in OpenStack, have asked to be included in the community, have asked to provide feedback, and have been constantly told that everything should be done the way that the developers have been doing it. Sometimes with success, sometimes with less, and I think that OpenStack has started on the right track - but still there is a whole lot of resistance to allowing Operators a full and equal say in what happens in the development process.

When I see comments like this (from Thierry Carrez - a long-time TC member), I am reminded of what has been happening for many years already.

Yeah, we can't really overload ATC because this is defined and used in governance. I'd rather call all of us "contributors".
<..snip..>
Upstream contributors are represented by the Technical Committee and vote for it. Downstream contributors are represented by the User Committee and (imho) should vote for it.

Yes, Thierry goes on to say that all of the contributors should be equal - but keep the two separate: only people who contribute Upstream (which I understand as those who write code) should be part of the TC, and all the others - part of the User Committee.

What is this User Committee? Looking on the OpenStack site there is a clear definition of its mission.

As the number of production OpenStack deployments increase and more ecosystem partners add support for OpenStack clouds, it becomes increasingly important that the communities building services around OpenStack guide and influence the product evolution. The OpenStack User Committee mission is to:

  • Consolidate user requirements and present these to the management board and technical committee.
  • Provide guidance for the development teams where user feedback is requested.
  • Track OpenStack deployments and usage, helping to share user stories and experiences.
  • Work with the user groups worldwide to keep the OpenStack community vibrant and informed.

To me it seems that the User Committee is a mantelpiece body which has absolutely no influence on what happens in OpenStack and was formed so that we can say, "Hey, we have a group of people that represent our users. They can't really do anything - but at least we can say they have representation."

And how do I know that the User Committee has no teeth? I looked at the bylaws (4.14).

4.14 User Committee. The User Committee shall be an advisory committee to the Board of Directors and shall be comprised of at least three (3) Individual Members, one (1) appointed by the Technical Committee, one (1) appointed by the Board of Directors, and one (1) appointed by the appointees of the Technical Committee and Board of Directors. The User Committee shall organize its meetings and may, on approval of the Board of Directors, create elected seats to be filled by a vote of the Individual Members. On request of the User Committee, the Board of Directors shall invite the User Committee to attend each regular quarterly meeting and shall allocate at least one (1) hour during the regular quarterly meeting following the annual election to hear the report and recommendations of the User Committee. On request of the User Committee, the Technical Committee shall invite the User Committee to attend a regular meeting and shall allocate at least one (1) hour during such meeting up to four times each calendar year to hear the report and recommendations of the User Committee.

The UC (User Committee) has to request to be invited to attend the BOD (Board of Directors) meetings. The UC has to request to attend the TC (Technical Committee) regular meetings and will be given at most 4 hours a year to give its reports and recommendations.

There is not a single word about what the UC actually can do besides participate in meetings and pass on recommendations. Do these recommendations have to be accepted? Can they change anything? That is at the discretion of the TC and the Board.

What does the TC do - or what can they do? First the TC Charter:

The Technical Committee (“TC”) is tasked with providing the technical leadership for OpenStack as a whole (all official projects, as defined below). It enforces OpenStack ideals (Openness, Transparency, Commonality, Integration, Quality...), decides on issues affecting multiple projects, forms an ultimate appeals board for technical decisions, and generally has technical oversight over all of OpenStack.

Again back to the bylaws (4.13)

4.13 Technical Committee.
(a) The Technical Committee shall be selected as provided in the Technical Committee Member Policy in Appendix 4.
(b) (i) The Technical Committee shall have the authority to manage the OpenStack Project, including the authority to determine the scope of the OpenStack Technical Committee Approved Release subject to the procedures set forth below. No changes to the OpenStack Technical Committee Approved Release which deletes all or part of the then current Trademark Designated OpenStack Software. shall be approved by the Technical Committee without approval as provided in the Coordination Procedures. After such approval, the Secretary shall post such description to the Foundation’s website.

Do you notice the difference in tone? This is the committee that decides what happens within the OpenStack projects, who is added, who is not, who is removed, what directions are taken, etc. etc.

Sorry for jumping around, but back to the thread.

Right, this brings up the other important point I meant to make. The purpose of the "ATC" designation is to figure out who gets to vote for the Technical Committee, as a form of self-governance. That's all, but it's very important (in my opinion, far, far, far more important than some look-at-me status on a conference badge or a hand-out on free admission to an event). Granting votes for the upstream technical governing body to people who aren't involved directly in upstream technology decisions makes little sense, or at least causes it to cease being self-governance (as much as letting all of OpenStack's software developers decide who should run the User Committee would make it no longer well represent downstream users).

Again - I read this as - let the developers write their code and keep Operators far away from having anything to do with how OpenStack projects are managed. Because they have no business here.

The Solution?

Which led me to write this answer. I would like to re-iterate what I said on that thread.

Operator contributions to OpenStack are no less important and no less equal than those of anyone writing code, translating UIs or writing documentation.
Saying that someone who contributes to OpenStack - but does so without writing code - is not entitled to any technical say in what directions OpenStack should pursue or how OpenStack should be governed is, IMHO, a weird (to put it nicely) perception of equality.

So I see two options.

Ops Contributors are considered Active Technical Contributors - just the same as anyone writing code or fixing a spelling mistake in documentation (and yes, submitting a patch to correct a typo in a document does give you ATC status). Their contributions are just as important to the success of the community as anyone else's.
or
Give Ops contributors a different status (whatever the name may be) - and change the governance rules to allow people with this status a voting right in the Technical Committee elections. They have as much right as any other contributor to cast their opinion on how the TC should govern and what direction it should choose.

Alienating Operators (and yes, it is a harsh word - but that is the feeling that many Operators - me included - have at the moment) from having a say in how OpenStack should run, what release cycles should be, and what the other side of the fence is experiencing each and every day due to problems in OpenStack's past and potential trouble in the future - reminds me of a time in the not-so-distant past when not all men and women were equal.

Where some were allowed to vote, and others not - they were told that others could decide for them - because those others knew what was best.

OpenStack's 13th release - Mitaka - is coming up very soon. That means that OpenStack will soon be 6 years old.
I think it is time for OpenStack - including the developer community - to accept that they are no longer alone in this - there is a whole lot of information, knowledge and experience that can be reaped from all those who are actually using the products that the community produces, and it is now time to accept all contributors - in whatever way they may choose to contribute - as equal members - in every way.

Yes, there may be disadvantages, but there could also be many advantages. It is time to stop treating everyone who does not write code as a “second degree citizen”, because at the moment the OpenStack technical community flaunts the idea that everyone is part of OpenStack - but de facto, they most certainly are not.

There will be a new working group formed (Non-ATC Recognition Working Group) in the very near future. I plan to be a very active member of this group - with my personal end goal of finally getting all contributors to the OpenStack community - and not only those who actually contribute code - recognized as equals, with an equal say, in all aspects of OpenStack - yes, including technical leadership and decisions.

(I consider myself an Operator - my view could be biased)

2016-02-01

There is no Root Cause, Only Contributing Factors

I participated a week or two ago in the DevOpsJRS meetup at Cisco Jerusalem. Our guest speaker was Avishai Ish-Shalom. I always enjoy Avishai's talks; he is a great speaker, a down-to-earth guy, and I have had the opportunity and pleasure to work with him several times in the past.

One of the slides that he posted included the following:

I am currently involved in a Scrum product team, where we (try to) do retrospectives after each sprint.

For those of you who are not familiar with the Agile methodologies, a short overview and my view on the process.

Making long-term plans is quite difficult, and sometimes even impossible, in our ever-changing world. Things are moving fast, at such a pace, ever changing. Scrum teams work in sprints. A sprint is a short burst of work, whose length can be defined by the team, but usually we are talking about 1-2 week bursts.

The team plans the work for each sprint and concentrates only on the tasks at hand for that specific sprint. They produce in small increments, but continuously produce something that adds value.

After the sprint there is a retrospective. The team looks at what went well, what went badly, and how to improve. There is a huge amount of trust needed within the team in order for this to be productive, and one of the things that is very important is that these are conducted in a blameless manner.

The point of such an exercise is to learn and to improve and not to point fingers.

Back to the root cause. In my previous IT positions whenever there was an outage, we did a root cause analysis to see what caused the problem. We always wanted to pinpoint that one thing that caused the problem.

2310295343_462278ae01_z

I completely agree with what Avishai said. There is no such thing as a root cause, there are only contributing factors. But this seems to be completely against what you might know and have been accustomed to.

Let me try and demonstrate with an example.

A critical application stopped responding.
The outage caused downtime for 1 hour in your organization.

In a regular post mortem and root cause analysis, you would have gone through the motions until you thought you had found the reason the app went down for an hour.

Why did it go down for an hour?
Because the host it was running on was disconnected from the network.

Why was it disconnected?
Because John disconnected the wrong cable when working in the datacenter.

There we found the root cause. It was John's fault.

If we are looking only for a root cause, that would be it.
But remember, there is no root cause, only contributing factors.

Digging down a little deeper will uncover a lot more.

Why did John disconnect the wrong cable?
Because he was already at work for more than 24 hours fighting fires and running from crisis to crisis.
He was tired (contributing factor).
And the cables were not marked correctly. (another factor)

So it was not John's fault. There were contributing factors.

The idea of this exercise is to improve and to understand the possible things that we can learn from this event so that it does not occur again.

Possible answers could be:

  • Make sure that all cables are marked clearly. It would have helped here.
  • John was tired and overworked. Why? Because he had too much on his plate; he was overloaded.
  • Perhaps increase automated processes that will free up more time for John and the team.
  • Invest in more staff, better equipment, and additional training so that John would have a better balance and have time to invest in improvement.

We must embrace outages, because they are the best learning opportunities, and the best way to improve.

I would highly recommend using this method in your next retrospective or post-mortem. I can guarantee you that this will improve your team, yourself and the way you work.

2016-01-05

Native Mac OSX virtualization - with Veertu

I was contacted today by Izik Eidus, an old acquaintance from Ravello – a company whose technology I was really impressed with and which I introduced in this post.

I assume that not many of you know that Apple released native hypervisor functionality with their OSX Yosemite release, their Hypervisor.framework.

What this does is allow you to run a VM natively on OSX, without the need for a client hypervisor (such as VMware Fusion or VirtualBox).

Two of the main brains behind the Ravello hypervisor have now released a Native Mac OSX virtualization tool.

Say hello to Veertu.

image

It is light (20MB), supports Windows and Linux operating systems, and has extensive usability features such as copy/paste between guest and host, full-screen mode, and shared folders.

It is the only virtualization tool that is actually available in the Mac App Store – because it does not make any changes to the kernel.

It was really very simple. I downloaded the tool and started it up.

You are presented with 2 choices: create your own VMs from ISOs (which is a paid feature) or deploy from Veertu's servers, which offer several Linux flavors.

Veertu - splash

I chose Centos 7 Minimal.

Available OS

What happens is that the client downloads the appropriate ISO image from which you can install the relevant OS.

Download

(I think that the wording above could be improved because it is not actually downloading a VM, rather an ISO image)

Launch VM

Once downloaded you can change the various settings of the VM.

VM Settings

For example CPU, RAM, Disk, Network etc.

VM boot

Power it on – and your VM goes through the installation process. (This is how I realized that the client is not downloading a full VM – rather the installation ISO.)

Management interface

Here is the Management interface.

Installation Complete

And after the Centos 7 installation is complete.

Veertu running VM

And here you have a VM running natively on my Mac.

Now the software is not perfect. And there are things that need to be improved, such as:

  • Each time you create a VM, it downloads the ISO again, which seems a waste of bandwidth to me (it will be changed in a future version)
  • The download was slow for me, and downloading an ISO could be faster from a local mirror – but the only way to point to a different ISO is to pay for the full product.

Of course – what was the first thing I tried to do? Build an ESXi VM

ESXi attempt

But that did not work because Apple have not enabled support for nested VMs (yet).

I liked the native interface. I liked the smooth integration, and I will definitely keep an eye on this product. We all know that Ravello has an amazing solution which allows you to run your VMs on any cloud, and I think that this will be an interesting way to do things in the future.

And if Applefarm is a hint as to where they are going, then this will definitely be interesting.

Applefarm

Disclaimer:

I was approached by Izik to look at the tool. I exchanged a few emails with him with some questions and suggestions, and I also received a development build of Veertu to test – which is similar (but does not have full feature parity) to the full version, which is worth $39.99.

I was not asked to write a review.