2015-11-16

Is the #OpenStack Leopard Changing its Spots?

A short while back I tweeted the following:

This was a result of me reading the minutes of the OpenStack Technical Committee meeting from November 3rd, 2015 (the full log is here).

What pleasantly surprises me is that this might finally be becoming a viable option.

Leopard spots

Let me first go into the background of my train of thought.
I really enjoy working with OpenStack, but I also really hate working with OpenStack. A good old love/hate relationship.

There are amazing things about this community, the way we work, and the software we produce (just to name two).

On the other hand there are really bad things about this community, namely – the way we work, and the software we produce.

One might say that is an oxymoron (I am not calling anyone stupid though). But yes, it is true. The OpenStack community produces amazing things – or at least it seems that way until someone (let’s call them an Operator) tries to actually use them in a production environment, and then they find out that it is completely and totally not ready, in any way, for production use.

So this Operator tries to raise their concerns about why this is not usable – valid concerns that should perhaps be fixed. But unfortunately they have no way of actually getting this done, because the feedback loop is completely broken. Operators have no way of getting that information back in. The community has no interest in opening up a new conduit into the developer community – because it is fixated on working in a specific way.

I must stress, and honestly say, that this was the case up until approximately a year ago. Since then the OpenStack community has embarked on a number of initiatives that are making this a much easier process, and we are actually seeing some change (see the title of this post).

For me the question actually is – what sparked this change? Up until now I was under the impression that the OpenStack community (and I mean the developers) was of the opinion that they are driving the requirements and development goals, but it seems as of late – this could be shifting.

To me this is going to have a significant effect on what OpenStack is, and how it moves forward.

For better and for worse. Time will tell.

What do you all think? Please feel free to leave your thoughts and comments below.

2015-09-27

Pillar #3 - Management

This is the fourth post in The Three Pillars of DevOps series

Part 1 – The Three Pillars of DevOps
Part 2 – Pillar #1 - Developers
Part 3 – Pillar #2 - Operations Engineers
Part 4 – Pillar #3 - Management
Part 5 – Bringing it all together

In this post we will dive in to the third pillar – Management.

pillar3

Being part of the Pillar

  1. DevOps is a cultural change

    Most people do not like change. Personally, I do not have an easy time adapting to change – it makes me nervous, uncertain, unsure. Different people take to change in different ways. Adoption of a DevOps way of working is not a small thing.

    Boundaries are no longer clear.
    Who is in charge of what?
    Why do we have to take care of this?
    I have no idea of what this means.

    With any big change you should understand what this will do to your organization. See where you will have problems. Where people will need help. Expect that things are going to change. Over time.

  2. Support both of the other pillars on the journey


    Continuing from the previous point: help your developers with education. They are no longer responsible for just a small part, but rather for the big picture.

    Help your Operations Engineers with the world of code – again through training: courses, books, seminars, pair programming – there are more than a few ways.

    Bring in someone to help and coach your teams throughout the journey. Going from waterfall to Agile is a huge change, and a culture change like this is even bigger. Help them in the beginning, through the ups and downs, and show them how they can continuously improve themselves and the way they work.
  3. Have patience

    This is going to take time. Quite a lot of time. Give the teams the leeway to adapt and learn along the journey. Let them learn from their mistakes, pave their own path. Your productivity will probably go down in the short term, which also means that the bottom line might drop as well.

    Be prepared for this. Remember, if you are in it for the short term, the quick win, then you are in it for the wrong reason. And it will most probably fail. Almost definitely. You are here for the long term, even if it means losing in the short term.

Not being part of the Pillar (or being Samson)

image_thumb4_thumb

  1. We should start doing DevOps! Now!

    I went to this interesting seminar where they showed how they are delivering code 10-20 times a day. I want us to start doing the same. Next week. Let’s just make some quick changes, merge some teams, get the Devs working together with the Ops, and we should be able to deliver even more often with better quality.

  2. My employees can do double the work, in less time

    So now that we are ‘doing DevOps’ I don’t need so many people on my teams, because they are doing the work on both sides of the fence. They can get their work done and also support the applications they are writing from start to finish with fewer people – and because they are more efficient, probably in less time.

  3. There is only one way to get this done.

    Agile, Extreme Programming, Kanban. Rally. We need one tool to monitor and rule it all. One process that everyone has to follow. Everyone has to fit into the box we create. That is the only way we can maintain control.

    The groups have to adapt to the way I want it to work. My way or the highway.

Next up we will look at Part 5 of this series – Bringing it all together.

Pillar #2 - Operations Engineers

This is the third post in The Three Pillars of DevOps series

Part 1 – The Three Pillars of DevOps
Part 2 – Pillar #1 - Developers
Part 3 – Pillar #2 - Operations Engineers
Part 4 – Pillar #3 - Management
Part 5 – Bringing it all together

In this post we will dive in to the second pillar – Operations Engineers.

pillar2

Being part of the Pillar

  1. Allow everyone to consume your infrastructure


    Infrastructure is there to be used. You are there to allow your business to create revenue, as much as possible, and in as short a time as possible.

    They will need resources in order to do that. You have probably been working with cloud and virtualization long before they have, and you have a decent amount of expertise and infrastructure already in place.

    In order to allow development teams to do their work, they will need resources, for a number of solutions, be it Continuous Integration, Continuous Delivery – or just plain old sandboxes for development purposes.

    Help them use what you have, help them build their own if they need it.
  2. Become a trusted broker for your development teams


    Developers need your help. They have deadlines and problems that they are dealing with and will have a very difficult time learning all you know in a short time.

    Explain to them what the benefits are of using different kinds of infrastructure, when they should go to the cloud, and when they should stay in house. What are the security implications of choosing a cloud solution, what they need to be aware of. They are also on a journey and need to adapt to this new world.
  3. Make it as easy as possible


    Again, infrastructure is there to be used. The same way that you take for granted that when you flip the light switch in your room the lights will go on – that is what developers and the organization expect to happen when they need to turn on a server.

    Of course we all know that when you flip a switch – there are so many things that happen, so much infrastructure that is needed, from wiring to circuit boards, to light fixtures, to metering, to the electric company (and so on) – but all of that is transparent to the end user.

    You should aspire to make your infrastructure as easy to consume as the electricity in the building. No-one is saying that it is easy, but that should be your end goal.
  4. Work with the development teams to help produce quality products


    The development teams have a way of doing things, and it is not necessarily the best way – you have probably pulled out more than a few grey hairs over the years trying to solve problems created by your development teams.

    Explain to them what works, what does not, and why. Work together with them to find a solution that is acceptable and will work for all sides.

    Don’t expect that things will be perfect the first time around, because they won’t. Iterate and make improvements in stages, small steps until you get to Nirvana.

    Go through deployment models with them, explain to them what scaling is, how high availability is achievable in this new world, and what they need to change in order to get there.

Not being part of the Pillar (or being Samson)

image_thumb4

  1. Be an infrastructure hugger


    We paid for it. We installed it. I know more about this infrastructure than any of these developers think they know.

    They cannot use what we have already because:

    - We have no capacity
    - The environment is not suitable for their needs
    - They will make a huge mess

    Let them go and buy their own; let them first learn what we have been running for the past 5-10 years and then we will talk.


  2. Create workarounds to accommodate badly written software


    Software is not perfect; sometimes it is just really crappy. And over the years you have learned to deal with that – creating cron jobs to restart processes on a regular basis due to a memory leak, or creating cluster mechanisms to solve high availability issues.

    These workarounds make your life more livable, more manageable, but they do not solve the underlying issue – they just work as a band-aid until the developers get their act together and fix the junk they wrote.

    And since the development teams don’t care about these things anyway – you never relayed back to them that these issues exist.
  3. Let them go and use AWS if they want to, and hang themselves..


    We cannot provide the developers the kind of cloud experience that they demand. It is too difficult for us to make these changes – due to budget constraints, manpower, or perhaps time constraints as well.

    If they need these things – let them go to where it is available, pay for it themselves, and worry about their own issues of security, administration, etc.
Next up we will look at Pillar #3 – Management.

2015-09-08

NSX 6.2 is Available for Download!! (Evaluation)

Since this has been totally unavailable up until now (except for a select few), it is great to see that it is now available for public download.

As has been said a number of times before (here and here), it was not possible to get hold of NSX unless you had specifically been given access.

There were a number of reasons for this – some I agree with, some I do not.

But it seems that as of the 6.2 release it is now possible to actually download NSX and try it out for 60 days.

Here’s how I went about it.

First I tried the VMware site

Product

But that led me to more or less a dead end. It took me to the same HOL environment where you could try it out.

HOL

So how do you get access to the bits?

I came across this when ‘cruising’ through the My VMware portal.

**Full disclosure here – I have not purchased NSX, I have not engaged with PSO, and I work for Cisco, which is probably seen as NSX’s biggest and worst ‘enemy’. I do have access to (AFAIK) Enterprise Plus and some vCloud Suite licenses in the portal – as a result of previous purchases and management of a decent-sized VMware environment. This is not part of a vExpert freebie.**

In the portal I searched for NSX

NSX

Link

And lo and behold – instead of the usual greyed-out download box, I got this.

Download!

Documentation can be found here.

And just to confirm – NSX can operate in evaluation mode for 60 days (from the documentation)

Evaluation

Personally – I see this as a welcome change by VMware – allowing access to the product to try it out before having to dish out $$$ to look at the product and the capabilities.

So thank you VMware.

And if this actually does stay this way – it remains to be seen how VI Admins will react to the product and how suitable it is (or is not) for their environment.

Again – your mileage may vary – depending on your portal access.

2015-08-27

Pillar #1 - Developers

This is the second post in The Three Pillars of DevOps series

Part 1 – The Three Pillars of DevOps
Part 2 – Pillar #1 - Developers
Part 3 – Pillar #2 - Operations Engineers
Part 4 – Pillar #3 - Management
Part 5 – Bringing it all together

In this post we will dive in to the first pillar – Developers.

I apologize in advance – but I will be using a number of stereotypes in this series – on purpose. I will probably exaggerate – also on purpose – but this is in order to get a point across.
I do have the utmost respect for all people in all three pillars – and I promise you, I will dump on each and every single one of the pillars (hopefully equally).

pillar1
developers-developers-o

Whenever I hear someone say developers – the first thing that comes to mind is the famous Steve Ballmer dance.

But these people are not as crazy as Steve on stage.
They are probably the people that work with you side by side. They create the things you use – daily.

For example – I can guarantee you that it was not an Operations Engineer or a management director who wrote the applications and the functionality built into the things that each and every one of us uses every single day.

Be it your:

  • Web browser
  • Mail Client
  • Facebook application
  • Phone

All of these things need to be written in code. And I have the utmost respect for people who have the capability to create something from scratch and build it into something like the examples above.

Being part of the Pillar

  1. Understand what you know how to do best (and learn what you don’t)


    I do not know everything – I don’t think anyone can, and if they say they do, then don’t believe them. You know how to write code, you know how to create an interface, and I could go on with examples all day. But there are also things you don’t understand, such as operating system security, firewalls, resource usage, network traffic, databases – and again I can go on and on and on…

    There are people who have been doing this – just as long as you have been writing code – so use them, consult with them – learn from them.
  2. Build with the broader picture in mind


    Applications do not run in a vacuum – they are part of a complete solution – they need to run in parallel or on top of other applications – so take that into account.

    Think about how you will interact with other parts of the solution – because it will happen – you will interact with other pieces, all the time.
  3. Take full responsibility for what you produce


    You created it; it is your baby. The same way you bring a child into this world – you worry about them and care for them. You send them to day care (where someone looks after them), but if your kid has a fever – you will come and get them, and nurse them back to health.
  4. Assume the worst will happen (eventually it will!)


    Nothing is perfect (and again, if someone says it is – don’t believe them). Thinking out of the box and out of your comfort zone – not cutting corners and hoping for the best, but rather planning for the worst-case scenario – will make the product you are creating a better one, and you a better developer as well (by the way).

Not being part of the Pillar (or being Samson)

image
  1. The Operations Engineer’s job can’t be that hard


    I mean how hard can it be to spin up a server, create a DNS record, run a Load balancer in front and have 5 million people hit the site at the same time. What could possibly go wrong?

    This is a quote I just saw yesterday on Twitter.

  2. My application is the only one that counts


    I am writing something that will change the world. And I heard that this new fangled database will help me out – so that is what I will use, for a number of reasons. It is quicker for me to produce like this, the performance is very good, and I like trying the new stuff of course. I don’t care that every other application is using another database – because adapting to that standard is boring and more difficult. And of course I don’t care about what kind of resources my application needs – just give me whatever you have. I need it all!
  3. It worked in Vagrant, in DevStack, on my laptop


    I tested it, I ran it and I got it to work. My unit tests passed. My acceptance tests passed. I got green on my dashboard. So obviously there is nothing wrong with the application. It is the “other guy”.

    I also don’t care how the application is maintained over time, my job was to get it work, so obviously that when you upgrade – you will need to restart services – but hey – they was not part of the original specs and requirements.
  4. My application works perfectly – and never fails


    I invested time in writing this baby, and I made sure that everything is perfect – it runs like a well-oiled engine, purrs like a kitten, a masterpiece. I checked that everything works, and every possible scenario was covered (at least those that I thought of).

    And of course the assumptions that I made will always hold true (such as: hardware never fails, networks are always available, and I have an unlimited amount of resources available to me, all the time)
Next up we will look at Pillar #2 – Operations Engineers.

The Three Pillars of DevOps

I apologize for plagiarizing the holy concept of The Three Pillars, but I do think that a foundation needs to be laid down for a healthy DevOps culture to thrive and even survive. And I would like to share with you some of my thoughts on this.

To be true to the agile methodology, it would only be appropriate that instead of talking about pillars I talk about personas.

These three personas are:

  1. Developers
  2. Operations Engineers
  3. Management

If I were going to lay it out in graphical form – I guess it would be something like this.

The 3 Pillars of DevOps

In order to have a healthy DevOps culture, ALL three of these need to exist – and work in sync.

In the upcoming posts – I will go through each of these pillars and how they should play a part
(but also more importantly – how they should not) in this highly abused and misused term…

DevOps.

image

Follow the rest of this series here:

Part 1 – The Three Pillars of DevOps
Part 2 – Pillar #1 - Developers
Part 3 – Pillar #2 - Operations Engineers
Part 4 – Pillar #3 - Management
Part 5 – Bringing it all together

2015-08-26

PowerShell Profile Tricks for Better VMware Management

My new post on some PowerShell Profile tricks for VMware has been published on the
Petri IT knowledgebase.

As IT pros, we rely on scripts to manage our VMware environment, which helps us be more efficient throughout our work day. In this article, I’d like to share some PowerShell profile tricks that are specific to VMware. These are tricks that I use on a daily basis, which I think you’ll find helpful…

Read the full post

2015-07-29

OpenStack Summit Voting - By the Numbers

I love diving into numbers – especially when it has something to do with technology conferences.

But before I do that I would like to bring to your attention the two sessions that I have submitted for the upcoming summit (shameless plug…)

Me Tarzan, you Jane (or Operators are not Developers)

Welcoming Operators to the OpenStack Jungle.

A year ago I set out on a journey to try to help the OpenStack developer community understand the other (and sometimes not well understood) side of the OpenStack community: its users.

Users is a subjective term; depending on who you ask it could mean the end user using an API or a GUI to deploy a new instance, but it also includes those who operate the cloud, maintain it, and sweat blood and tears just to allow the end-user to do what they want.

There is a distinct separation today between the two entities – but the good part is that they are slowly coming together.

This talk will describe how you, an Operator, can make a difference – be it small or large – in OpenStack.

The topics we will go over here are:

  • Initiatives
  • User Committee
  • WTE
  • ISV
  • Monitoring & Logging
  • Large Deployments
  • ... and many more....

Operator specific activities:

  • Ops Tags
  • IRC
  • Heaven forbid - committing code

Expect some interesting stories, some horror stories but we are all aiming for the same thing.

The pot of gold at the end of the rainbow.

OpenStack - High Availability (as a Service) - Fact? Fiction (or maybe something in between)?

Installing an OpenStack cloud used to be a complex task. We have evolved over time and made this a lot easier and more palatable for operators. Operating an OpenStack cloud, on the other hand, is a whole different ball game.

Operators want stable and resilient systems, and if the infrastructure services can scale, that is only an added benefit.

But today, OpenStack as a result of its culture, and its history, is a collection of parts, pieces and solutions using multiple different technologies, and architectures.

One of the pain points is naturally high availability for the services which are provided today in a number of different ways.

This talk will propose one possible future direction with which this could be addressed.

By providing a central HA service for all OpenStack projects.

This session will describe a proof of concept for such a solution, making use of cloud-friendly technologies that could take operations to a whole new level.

 

I did this kind of an exercise for VMworld a few years ago - By the Numbers - VMworld 2013 Session Voting, and I thought it would be interesting to see what the numbers are for this OpenStack Summit.

Of course without some insight as well – the numbers would be quite boring.

There are a total of 1504 sessions that you can cast your vote upon.

That is a huge number of sessions. Perhaps too many. There is no way to go over the list in a reasonable amount of time. I think that most people are going to vote only if they are sent directly to a specific link. Which means these will be targeted votes, a result of someone asking you specifically to vote for a specific session. Not really an ideal process, but I guess that is the price we all have to pay as a result of OpenStack becoming more and more popular.

Here are the number of sessions submitted for each track:
(Please note – these are based purely on the number of submissions – not on what has been accepted. This is not an exact science – there could be a number of reasons why they are ranked like this – but these are my thoughts and ideas on the data below)

session numbers

What is the most popular track?

Operations - the one group whose input the OpenStack community is still struggling to get back into the projects. For a number of reasons – be it inclusion, methodology, culture or mindset.

For me (and it should be for everyone involved in OpenStack) this is a bright and shiny beacon. Claiming that this is either the most important aspect or the most pressing need that people want to hear or talk about might be going a bit far, but it is way, way up there.

OpenStack Summits should be about the technology, not about how to keep the bits and bytes up and running, deployed and working in an efficient manner.

Next up on the list: Community and How to Contribute. Way down at the bottom. Is that because people already know how to do it? Or because people have given up on trying?

This is something that the community as a whole should invest more in – making contribution part of the culture and making the bar much more accessible to all.

Hands on Labs. The number of sessions proposed as labs is growing. Maybe it is time to think about a centralized solution specifically for the summit?

Neutron (a.k.a. Networking). The hardest part about a cloud is the networking part. I have said this before and will continue to preach it from the rooftops.

I took the liberty of creating a word-cloud from all the words in all the submissions, according to recurrence.

Word Cloud

It makes you think..   :)
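For the curious – a word cloud like this is, underneath, just a word-frequency count. A minimal shell sketch of the counting step could look like the following (the sample text below is only a stand-in for the real submission abstracts, which are not included here):

```shell
# Stand-in sample text for the real submission abstracts
cat > /tmp/abstracts.txt <<'EOF'
OpenStack operators run the cloud
The cloud needs operators
Cloud operators give feedback
EOF

# Split into words, lowercase everything, count recurrence, show the top 5
tr -cs '[:alpha:]' '\n' < /tmp/abstracts.txt |
  tr '[:upper:]' '[:lower:]' |
  sort | uniq -c | sort -rn | head -5
```

Feeding the real abstracts through the same pipeline gives exactly the recurrence list that a word-cloud generator consumes.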

You have only one more day to vote – go and make your voice heard!

2015-07-20

Hybrid vs. Public - Google Joins OpenStack

This was a big piece of news last week. There are some who are even suggesting (but not really) that the Google stock jumped drastically as a result of this announcement; I personally find that just a good joke (and some wishful thinking).

A snippet from the Google post:

As we look to the future of computing in the enterprise, we see two important trends emerging.

The first is a move towards the hybrid cloud.  Few enterprises can move their entire infrastructure to the public cloud. For most, hybrid deployments will be the norm and OpenStack is emerging as a standard for the on-premises component of these deployments.

For me this is a gauntlet thrown directly in the face of AWS.

Amazon has always pushed that everything should and can run in the public cloud. They have never believed in the hybrid cloud model. It was all AWS or nothing. I have never agreed with this statement, and still don’t.

There will always be things that need to run in house – for a number of possible reasons:

  • security
  • regulation
  • the nature of the workload
  • politics

AWS has tried over the years to provide adaptations, solutions and tools to ease the journey into the public cloud – such as VPC or Direct Connect – to enable you to extend your datacenter to the public cloud but still have it feel as if it were in-house. But that is not hybrid.

I think that this is a great move on Google’s part and their ‘bear-hug’ of OpenStack.

It will be interesting to see what this collaboration brings into the OpenStack world.

2015-07-13

Registration is Open for the OpenStack Tokyo Summit

Even though we still do not know what the next release of OpenStack will be called (due to some community naming issues) – this is still an event I am very much looking forward to.

Registration is open for the event.

openstack_tokyo

Early bird tickets are $600 (until August 31st, 2015)

2015-06-28

Downloading all sessions from the #OpenStack Summit

A question was just posted to the OpenStack mailing list – and this is not the first time I have seen this request.

Can openstack conference video files be downloaded?

A while back I wrote a post about how you can download all the vBrownBag sessions from the past OpenStack summit.

Same thing applies here, with a slight syntax change.

You can use the same tool – youtube-dl (just the version has changed since that post – and therefore some of the syntax is different as well).

Download youtube-dl and make the file executable

curl https://yt-dl.org/downloads/2015.06.25/youtube-dl \
-o /usr/local/bin/youtube-dl
chmod a+rx /usr/local/bin/youtube-dl

The videos are available on the OpenStack Youtube channel.

What you are looking for is all the videos that were uploaded from the Summit – that would mean between May 18th, 2015 and May 29th, 2015.

The command to do that would be

youtube-dl -ci -f best --dateafter 20150518 \
--datebefore 20150529 https://www.youtube.com/user/OpenStackFoundation/videos

The options I have used are:

-c - Force resume of partially downloaded files. By default, youtube-dl will resume downloads if possible.
-i  - Continue on download errors, for example to skip unavailable videos in a playlist.
-f best - Download the best quality video available.
--dateafter - Start after date
--datebefore – Up until date specified

Be advised.. This will take a while – and will use up a decent amount of disk space.
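Since the full set of videos weighs in at many gigabytes, it may be worth checking free disk space before kicking off the download. Here is a small sketch – the 20 GB threshold is my own arbitrary assumption, so adjust it for your environment:

```shell
# Abort early if the current directory's filesystem has less than ~20 GB free
need_kb=$((20 * 1024 * 1024))                 # 20 GB expressed in kilobytes
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')  # available kilobytes (POSIX df -P output)
if [ "$free_kb" -lt "$need_kb" ]; then
  echo "Not enough free space: only $((free_kb / 1024 / 1024)) GB available"
else
  echo "OK to download"
fi
```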

Happy downloading !!

2015-06-01

The OpenStack Kilo Summit - Recap

I have been home for just over a week from my trip to Vancouver for the OpenStack Kilo Summit (or Liberty Design Summit – take your pick).

It was a whirlwind of a week, jam-packed with sessions, conversations, meetings, presentations and community events.



There were a number of insights that I took away with me, which I would like to share with you in this post and in some upcoming posts.

1. Ops is a real thing


There was a dedicated Ops track at the summit. Before we go into what actually happened there – I would like to clarify what I mean by Ops, and whether this is any different from how it is defined by the OpenStack community.

For me an operator is primarily the person that has to actually maintain the cloud infrastructure. This could mean a number of things:
  • Creating packages for installing OpenStack
  • Actually installing OpenStack
  • Making sure that OpenStack keeps up and running
  • Monitoring the infrastructure
  • Upgrading the infrastructure

There are also end-users, and these are the people that actually use OpenStack:
  • Provisioning instances
  • Deploying applications on top of those instances
  • Creating tenants/users/projects

For the OpenStack Community – sometimes these two groups are one and the same; in my honest opinion they definitely are not, and should not be treated as such. It is taking time, but I think that the community is starting to understand that there are two distinct groups here, with very different sets of needs, which should be catered to quite differently.

There was a significant amount of discussion of how Operators can get more involved, and honestly I must say that the situation has improved – drastically – in comparison to the situation 12 months ago.

There are a number of working groups, the Win The Enterprise WG, the Product WG, the Monitoring and Logging WG, all of these have been meeting over the last year to try and hash out ways of getting more involved.

One of the interesting discussions that came up as a result of me running for the OpenStack TC (well, perhaps not directly – but I assume that I had something to do with it) was how the OpenStack Community wants to acknowledge those people that are not committing code but are contributing back to the community. For some more background – I refer you to these threads on the mailing list.

Something that kept coming up again and again in sessions was that the project groups are looking for more and more feedback from the people operating and deploying OpenStack, but the process of getting that feedback is broken/not working/problematic.

I do understand that the TC (and OpenStack) would like to protect the most valued resource that OpenStack has – and that of course is the people writing the code.

But there has to be an easier way of allowing people to submit the feedback – and perhaps there is…
A way for Operators/Users to submit feature requests.

2. Vendors are involved in OpenStack – and they are here to stay


They are not there out of the goodness of their hearts. They are there because they want to make money, and a lot of it. That is one (but not the only) reason why they contribute to open source projects.

OpenStack is no different. Each and every one of the vendors involved (and I will not name companies – because the sheer size of the list is just too long) are there to increase their market share, their revenue, their influence.

And that is a difficult dance to master. They are the ones providing the resources to commit code, and there are times when the agenda behind that is not purely community driven. This post sums it up pretty well.

As OpenStack has grown he says its turned into a corporate open source project, not a community-driven one. He spent a day walking around the show-floor at the recent OpenStack Summit in Vancouver and said he didn’t find anyone talking about the original mission of the project. "Everyone’s talking about who’s making money, who’s career is advancing, how much people get paid, how many workloads are in production," McKenty says. "The mission was to do things differently."


OpenStack is not a small community project any more – where everyone knows each other by name/face/IRC handle. It has grown up, come of age.

For better or for worse. Stay tuned for more.

As always please feel free to add your thoughts and comments below.

2015-05-17

Integrating OpenStack into your Jenkins workflow

This is a re-post of my interview with Jason Baker of opensource.com

Continuous integration and continuous delivery are changing the way software developers create and deploy software. For many developers, Jenkins is the go-to tool for making CI/CD happen. But how easy is it to integrate Jenkins with your OpenStack cloud platform?

Meet Maish Saidel-Keesing. Maish is a platform architect for Cisco in Israel focused on making OpenStack serve as a platform upon which video services can be deployed. He works to integrate a number of complementary solutions with the default out-of-the-box OpenStack project and to adapt Cisco's projects to have a viable cloud deployment model.

At OpenStack Summit in Vancouver next week, Maish is giving a talk called: The Jenkins Plugin for OpenStack: Simple and Painless CI/CD. I caught up with Maish to learn a little more about his talk, continuous integration, and where OpenStack is headed.

Interview


Without giving too much away, what can attendees expect to learn from your talk?

The attendees will learn about the journey that we went through 6-12 months ago, when we looked at using OpenStack as our compute resource for the CI/CD pipeline for several of our products. I'll cover the challenges we faced, why other solutions were not suitable, and how we overcame these challenges with a Jenkins plugin that we developed for our purposes, which we are open sourcing to the community at the summit.

openstack_summit_logo

What effects has CI/CD had on the development of software in recent years?

I think that CI/CD has allowed software developers to provide a better product for their customers. In allowing them to continuously deploy and test their software, they can provide better code. In addition, it has brought the developers closer to the actual deployments in the field. In the past, there was a clear disconnect between the people writing the software and those who deployed and supported it at the customer's site.

How can a developer integrate OpenStack into their Jenkins workflow?

Using the plugin we developed, it is very simple to integrate an OpenStack cloud into the pool of resources that can be consumed by your Jenkins workflow. All users need to do is provide a few parameters, such as endpoints, credentials, etc., and they can start deploying to their OpenStack cloud.

How is the open source nature of this workflow an advantage for the organizations using it?

An open source project always has the benefit of having multiple people contributing and improving the code. It is always a good thing to have another view on a project with a fresh outlook. It improves the functionality, the quality and the overall experience for everyone.

Looking more broadly to the OpenStack Summit, what are you most excited about for Vancouver?

First and foremost, I look forward to networking with my peers. It is a vibrant and active community.

I would also like to see some tighter collaboration between the operators, the User Committee, the Technical Committee, and the projects themselves – to understand the needs of those deploying and maintaining OpenStack in the field and to help them achieve their goals.

One of the major themes I think we will see from this summit will be the spotlight on companies, organizations and others using the products. We'll see why they moved, and how OpenStack solves their problems. Scalability is no longer in question: scaling is a fact.

Where do you see OpenStack headed, in the Liberty release and beyond?

The community has undergone a big change in the last year, trying to define itself in a clearer way: what OpenStack is, and what it is not.

I hope that all involved continue to contribute, and that the projects focus more on features and problems that are fed to them from the field. It is a fine line to define, and usually not a clear one, but it is something that OpenStack (and all those who consider themselves part of the OpenStack community) has to address and solve, together.

2015-05-13

Some Vendors I Will Visit at the OpenStack Summit

At every technology conference I like to visit the Expo Floor / Marketplace / Solutions Exchange – where vendors try to get your attention and market their products.

Going over the list of vendors on the Summit site, below is a list of some of the lesser-known companies (at least to me) that caught my eye and that I would like to visit during the summit to hear what they have to say.

** The blurb I posted is something that I found on each of the respective sites, and does not necessarily provide a comprehensive overview of what each company offers **

openstack_summit_logo

Stackato
Stackato allows agile enterprises to develop and deploy software solutions faster than ever before and manage them more effectively. Stackato provides development teams with built-in languages, frameworks and services on one single cloud application platform, while providing enterprise-level security and world-class support.

Akanda
Akanda is the only open source network virtualization solution built by OpenStack operators for real OpenStack clouds. Akanda eliminates the need for complex SDN controllers, overlays and multiple plugins for cloud networking by providing a simple integrated networking stack (routing, firewall, load balancing) for connecting and securing multi-tenant OpenStack environments.

Appcito
Appcito Cloud Application Front-End™ (CAFE) is an easy-to-deploy, unified and cloud-native service that enables cloud application teams to innovate faster and improve user experiences with their applications.

Appformix
Operators and developers can use AppFormix’s versatile software to remove and prevent resource contention among applications from the infrastructure without being invasive to applications. The real-time, state driven control provided by AppFormix’s intuitive dashboard allows efficient management of all I/O resources. For deeper control and customization, access to API driven controls are also easily accessible. Plan infrastructure intelligently and remove the guess work involved in managing finite server resources to create fully optimized data center infrastructure.

Caringo
Caringo Swarm leverages simple and emergent behavior with decentralized coordination to handle any rate, flow or size of data. Swarm turns standard hardware into a reliable pool of resources that adapts to any workload or use case while offering a foundation for new data services.

Cleversafe
Cleversafe’s decentralized, shared-nothing storage architecture enables performance and capacity to scale independently, reaching petabyte levels and beyond.

GuardiCore
Covering all the traffic inside datacenters, GuardiCore offers the only solution combining real-time detection of threats based on deep analysis of actual traffic, real time understanding, mitigation and remediation.

OneConvergence
One Convergence Network Virtualization and Service Delivery (NVSD) Solution takes a policy driven approach and brings in the innovative concept of “Service Overlays” to go along with “Network Overlays” to virtualize networks and services. The solution innovates and extends SDN with Service Overlays for delivering L4 to L7 services with higher-level abstractions that are application friendly.

Quobyte
Quobyte turns your servers into a horizontal software-defined storage infrastructure. It is a complete storage product that can host any application out-of-the-box. Through fault-tolerance, flexible placement and integrated automation, Quobyte decouples configuration and operations from hardware.

Scality
The RING is a software-based storage that is built to scale to petabytes with performance, scaling and protection mechanisms appropriate for such scale. It enables your business to grow without limitations and extra overhead, works across 80% of your applications, and protects your data over 200% more efficiently at 50–70% lower cost.

Scalr
The Scalr Cloud Management Platform packages all the cloud best practices in an extensible piece of software, giving your engineers the head start they need to finally focus on creating customer value, not on solving cloud problems.

StorPool Storage
StorPool is storage software. It runs on standard hardware – servers, drives, network – and turns them into high-performance storage system. StorPool replaces traditional storage arrays, all-flash arrays or other inferior storage software (SDS 1.0 solutions).

Stratoscale
Stratoscale’s software transforms standard x86 servers into a hyper-converged infrastructure solution combining high-performance storage with efficient cloud services, while supporting both containers and virtualization on the same platform.

TransCirrus
Core, storage and compute nodes connecting via Extreme Networks

Tufin
Security Policy Orchestration for the World's Largest Enterprises.
Managing security policies on multi-vendor firewalls & cloud platforms.

2015-05-11

Get Ready for the OpenStack Summit

The OpenStack community is converging on Vancouver next week for the bi-annual summit for all things OpenStack.

openstack_summit_logo

I am glad to be joining the event and I would like to share with you a short outline of the public events and activities I will be participating in.

The rest of my time will be spread out over the Cross-Project workshops, the Ops sessions, other sessions and activities.

I am really looking forward to this event and please feel free to come and say hello.

2015-04-27

Why I Decided to Run for the OpenStack Technical Committee

As of late I have been thinking long and hard about whether I can contribute to the OpenStack community in a more effective way.

Almost all of my focus today is on OpenStack, on its architecture and how to deploy certain solutions on top of such an infrastructure.

What is the Technical Committee?

It is a group of 13 people elected by the OpenStack ATCs (Active Technical Contributors – i.e. the people who have actively contributed code to the projects over the last year). There are seven spots up for election this term, in addition to the six TC members who were chosen six months ago for a term of one year.

The TC’s Mission is defined as follows:

The Technical Committee (“TC”) is tasked with providing the technical leadership for OpenStack as a whole (all official projects, as defined below). It enforces OpenStack ideals (Openness, Transparency, Commonality, Integration, Quality...), decides on issues affecting multiple projects, forms an ultimate appeals board for technical decisions, and generally has technical oversight over all of OpenStack.

On Thursday I decided to take the plunge. Here is the email where I announced my candidacy.

This is not a paid job; if anything, it is more of a “second” part-time job – a voluntary one. There are meetings and email discussions on a regular basis.

There are a number of reasons that I am running for a spot on the TC.

Diversity

In my post The OpenStack Elections - Another Look, I noted that no operators were chosen for the board. This is something that I think is lacking in the OpenStack community today. The influence of the people who actually use and deploy the software is minimal, if it exists at all. What influence they have comes mostly after the fact (at best), with little input into what they would like to see put into the product.

I am hoping to bring a new perspective to the TC, to help them understand the needs of those who actually deploy the software and have to deal with it day in and day out. They have valid pain points, and in my honest opinion they feel they are not being heard or not being taken into consideration – at least not enough in their eyes.

Acceptance of others

The people who vote are only those who contribute code. Those who have committed a patch to the OpenStack code repositories. That is the definition of an ATC.

It is not easy to get a patch committed. Not at all (at least that is my opinion). You have to learn how to use the tools that the OpenStack community has in place. That takes time. I tried to ease the process with a Docker container to help you along. But even with that, it still seems (to me) that to get into this group of contributors takes time.

It is understandable. There is a standard way of doing things (and rightfully so), so the chances of getting your change accepted the first time are slim, for a number of reasons that I will not go into in this post.

I think that the definition of a contributor should be expanded and not limited only to those who write the code. There are a number of other ways to contribute.

I know that this will not be an easy “battle to win”. I am essentially asking the people to relinquish the way they have been doing things for the past 5 years and allow those who are not developers, those who do not write the code, to steer the technical direction of OpenStack.

I do think this will be in the best interest of everyone to extend the reach of OpenStack community, to branch out.

More information on the actual election, which runs until April 30th, can be found here. If you are one of the approximately 1,800 ATCs, you should have received a ballot for voting.

It will be interesting to see the results, which should be out in a few days.

As always your thoughts and comments are appreciated, please feel free to leave them below.

2015-04-20

OpenStack Israel CFP Voting is Open

I would like to bring to your attention that the voting for the sessions for the upcoming OpenStack Israel Summit on June 15th, 2015 is now open.

Make your voice heard and participate in setting the agenda for the event!

image

You can find more information and the presentation that I gave last year in this post Recap - Openstack Israel 2014 #OpenStackIL and for your convenience I have embedded the recording below.

2015-04-06

The Chef’s Special and Trusted Solutions

It is funny where one gets the idea for a blog post from. I was sitting in a restaurant last month in San Jose, one of the only two kosher ones in the area, and I ordered the chef’s special.

Chef's Special

So it was grilled salmon, mashed potatoes and grilled vegetables. It was tasty, the flavors blended very well. Which got me thinking.

What made the chef choose that combination? Why those three? Obviously he had tested different combinations. Pasta, rice, noodles, different kinds of vegetables, different spices. But he chose that combination. Tried and tested.

If I had to choose what I wanted with that dish, it probably would have been rice, not potatoes.

So what does this have to do with trusted solutions? Before we get to that – a bit about what I think a trusted solution is.

A trusted solution can be something like a vBlock – a validated solution, hardware which has been tested for a certain workload and is guaranteed to provide the proper performance and to work.

It could be the VMware HCL, where QA has been done to test that things work, and work properly. It could be a validated design from one of the vendors – a Cisco CVD, a FlexPod or another solution.

On the other hand, I could also have asked for a mix and match from the menu – what I preferred, what I know I like, what I have tried before.

That I could compare to a home-built solution, or perhaps a whitebox that is not on the HCL, or a slightly different vendor. Will it work? Probably – but it is something unique to your liking.

There are advantages to both options, both can serve you well, both are tasty.

(The things that pop into your mind..)

What do you prefer? What do you think suits your business better? The Chef’s special or your own personal favorite mix and match? And why?

Please feel free to leave your comments and thoughts below.

2015-03-30

A Triangle is Not a Circle & Some Things Don’t Fit in the Cloud

Baby Blocks

We all started off as babies, and I am sure that not many of you remember that one of the first toys you played with (and if you do not remember - then I am sure those of you with kids have probably done the same with your children) was a plastic container with different shapes on the lid and blocks that were made of different shapes.

A triangle would only go into the triangle, a circle in the circle, a block in the block and so on.

This is a basic skill that teaches us that no matter how hard we try, there are some things that just do not work – things can only work in a certain way (and of course it teaches coordination, patience and a whole lot of other things).

It is a skill that we acquire, it takes time, patience, everyone gets there in the end.

And why am I blogging about this – you may ask?

This analogy came up a few days ago in a discussion of a way to provide a highly available database in the cloud.

And it got me thinking….

There are certain things that are not meant to be deployed in a cloud environment because they were never meant to be there in the first place. The application needed an Oracle database and it was supposed to be deployed in a cloud environment.

What is the default way to deploy Oracle in highly available configuration? Oracle RAC. There are a number of basic requirements (simplified) you need for Oracle RAC.

  1. Shared disk between the nodes.
    That will not work in a cloud environment.
    So we can try using dNFS – as the shared storage for the nodes – that might work..
    But then you have to make an NFS mount available to the nodes – in the cloud.
    So let’s deploy an NFS node as part of the solution.
    But then we have to make that NFS node highly available.
  2. Multicast between the nodes - that also does not work well in the cloud.
    So maybe create a networking environment in the cloud that will support multicast?
    Deploy a router appliance in the cloud.
    Now connect all the instances in the cloud into the router.
    But the router becomes a single point of failure.
    Make the router highly available.

And if not Oracle RAC – then how about Data Guard – which does not require shared storage?

But it has a steep licensing fee.
And you have to find a way of managing the virtual IP address – which you will not necessarily have control over.
But that can be overcome by deploying a VRRP solution with IP addresses that are manually managed.
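
For illustration only, the VRRP workaround just described would look something like this minimal keepalived.conf sketch – the interface name, router ID and addresses are all hypothetical, and one node would run as MASTER with its peer as BACKUP:

```conf
vrrp_instance ORACLE_VIP {
    state MASTER            # the peer node would use state BACKUP
    interface eth0          # hypothetical interface carrying the VIP
    virtual_router_id 51
    priority 100            # highest priority wins the election
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24       # the manually managed virtual IP
    }
}
```

And of course, that keepalived pair is now one more piece of infrastructure you have to deploy, monitor and keep highly available.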

ENOUGH!!!

Trying to fit a triangle into a square – yes if you push hard enough (it will break the lid and fit).
If you cry hard enough – Mom/Dad will come over and put it in for you.

Or you come up with a half-assbaked solution like the one below…

blocks

Some things will not fit. Trying to make them fit creates even more (and sometimes even bigger) problems.

In this case the solution should have been - change the code to use a NoSQL database that can be deployed easily and reliably in a cloud environment.

As always your thoughts and comments are welcome.

2015-03-26

Installing OpenStack CLI clients on Mac OSX

I usually have a Linux VM that I use to perform some of my remote management tasks, such as OpenStack CLI commands.

But since I now have a Mac (and yes, I am enjoying it!!) I thought, why not do it natively on my Mac? The official documentation on installing the clients is on the OpenStack site.

This is how I got it done.

Firstly install pip

easy_install pip

Now to install the clients (keystone, glance, heat, nova, neutron, cinder, swift and the new OpenStack client)

pip install python-keystoneclient python-novaclient python-heatclient python-swiftclient python-neutronclient python-cinderclient python-glanceclient python-openstackclient

First problem – no permissions

No Permissions

Yes you do need sudo for some things…

sudo -H pip install python-keystoneclient python-novaclient python-heatclient python-swiftclient python-neutronclient python-cinderclient python-glanceclient python-openstackclient

Success!

Success!

Or so I thought…

Maybe not...

Google led me here - https://github.com/major/supernova/issues/55

sudo -H pip uninstall six

uninstall six

And then

sudo -H easy_install six

reinstall six

And all was good

nova works

nova list

Quick and Simple!! Hope this is useful!
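
As a closing aside, the freshly installed clients only work once your cloud's credentials are in the environment. Here is a minimal openrc sketch – every value below is a placeholder, not a real endpoint or password, so replace them with the details of your own cloud:

```shell
# Hypothetical openrc values -- substitute your own endpoint and credentials.
export OS_AUTH_URL=http://controller.example.com:5000/v2.0
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=secret

# With these set, the clients pick up the credentials automatically, e.g.:
#   nova list
```

Typically you would save this as ~/openrc and `source` it in each new terminal before running any of the clients.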

2015-03-19

Deploying the VCSA 6.0 Appliance directly into vCenter

Hey… Is that even possible? It seems that it is not – at least that is what I heard this week over Twitter.

The documentation also says the same thing.

Documentation

When trying to put in a vCenter as the target for deployment it will throw an error.

Error

I actually find this really silly and a really weird move on VMware's part. Why limit this to connecting directly to an ESXi host?

Also, I am quite intrigued to know the benefit of using such a tool for deployment. I do understand that VMware wanted to provide a generic tool that could be used on any platform to deploy a vCenter Server. If you look at the ISO that is provided for download, you will see a folder structure for all platforms in the vcsa-cli-installer folder.

Multi Platform

But this got me thinking. The VCSA is an appliance after all – which means it is probably an OVF – like most of VMware’s appliances.

Disclaimer – This is probably not supported – definitely not endorsed by VMware – so use it at your own risk!

I went and did some detective work. The ISO is about 3GB in size, which means the appliance itself has to be in there somewhere. It was not hard to find.

In the VCSA folder you will find a file vmware-vcsa which is almost 2GB in size.

vmware-vcsa

It is obvious that the file is not an OVF – but probably an OVA – because of its size.

So for my test I copied the vmware-vcsa file and added the .OVA extension to the file
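
On the command line the whole trick boils down to a single copy. Here is a sketch: on a real system SRC would point at the ~2GB vmware-vcsa file inside the mounted installer ISO (the mount path is an assumption), but a small stand-in file is used below so the sketch is runnable anywhere:

```shell
# On a real system, something like:
#   SRC=/Volumes/VMware-VCSA/vcsa/vmware-vcsa   # assumed ISO mount path
# Stand-in payload so this sketch runs without the actual ISO:
SRC=./vmware-vcsa
printf 'OVA payload stand-in' > "$SRC"

cp "$SRC" ./vmware-vcsa.ova   # same bytes -- only the .ova extension is new
ls -l ./vmware-vcsa.ova       # ready to deploy like any other appliance
```

From there the .ova file deploys through the vSphere Client like any other virtual appliance.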

Create OVA

I then proceeded to deploy this appliance as I would any virtual appliance. I even went so far as to use the vSphere Client!

I was skeptical as to whether anything extra had been put into the installer – because we all know that most of the customization is provided within the OVF itself. I checked to see if the same functionality is available from both the new installer and a regular vSphere Client deployment. Feature parity seems to hold – with some additional functionality that is available only when deploying as a virtual appliance.

Such as Inventory Location

Inventory

Choice of Cluster / Host

Host/Cluster

And most importantly – the option to deploy to a Distributed vSwitch – something that is not possible when deploying directly to a host, where only a Standard vSwitch is recognized.

dvSwitch / vSwitch Only

All the rest was mostly the same.

Size1 Size2

IP_1 IP_2 / SSO1 SSO2 / Embedded1 Embedded2

Credentials1 Credentials2

Network3 Network4 Network5

Summary1 Summary2

Now of course there are things that are not visible in the regular installation interface – things related to the upgrade.

Upgrade1 Upgrade2 Upgrade3

So there you have it – Deploying a vCenter Server Appliance – directly into an existing vCenter.

Sometimes the stuff we are used to is the stuff that is also the easiest way to do things.

If anyone has any insight to problems that might occur using this method please feel free to leave them in the comments below, and of course – please feel free to leave any other thoughts or comments as well.