2012-05-30

Which sessions I voted for - VMworld 2012

1222 sessions – that is a lot. How do you wade through that many?

That is what the filters are there for – you can filter the sessions on several criteria. But to do everyone justice, I decided to do this meticulously and go track by track, noting which sessions I chose and why.

The criteria for my choices were:

  1. Does the topic interest me?
  2. Have I heard the presenter before?
  3. Is the presenter also a well-known figure?

I went track by track and chose the sessions I would like to see at VMworld 2012.

I have a session up for voting as well – so please feel free to add it to the list below.

It is a long list – and I hope it will be useful to you. You can find the spreadsheet here

Just to clarify.. I was not asked to promote any sessions – these are my choices. They are based on my areas of interest – and should not be taken as an endorsement of one session over another.

I hope you find it useful!

2012-05-29

VMworld Call for Papers Voting is Live

The voting for the VMworld Call for Papers is now open.

Cody Bunch and I have submitted a session – based on the customer stories behind the vExpert HoL that we are designing for the upcoming VMworld.

If you would like to see the story behind the lab – we would appreciate your Thumbs Up for the session.

1996 Managing Your Day-to-Day Administrative Tasks with vCenter Orchestrator

VMworld Session Voting

There are 1222 different sessions that you can vote for – so wading through them all can be tiresome.

I will be posting my choices in a future post.

2012-05-21

vExpert Hands on Lab

Yes, we vExperts are a crazy bunch – really we are. Not only do we blog about virtualization, read about virtualization, breathe virtualization and immerse ourselves in technology, but most of all we enjoy what we do.

Approximately three weeks ago, a post was made on the vExpert community forum with an offer – but I would call it a challenge. As you all know, the Call for Papers for VMworld 2012 has slowly come to an end (the deadline was May 18th). There will be a large number of submissions (last year there were ~1000), so the chances of getting a session accepted are not that high, but it was still worth a shot.

What was this challenge you ask? The vExperts were given the opportunity to submit a proposal for a HoL that would be used at VMworld and perhaps at other events as well.

The timeline was crazy. The expected number of hours to be put into the preparations was insane, but still there were more than 10 different submissions, of which two were chosen.

Why do we do this? It is all about giving back to others, helping others to experiment with technology and educating ourselves and others as well so we can all benefit.

I have always felt that vOrchestrator has not been given the attention it deserves. Automation is and will be the key to ever bigger environments, and PowerCLI is and has been the rising star of the last couple of years. I love it, as I am sure many others do as well.

The theme of our HoL is “Conducting your Environment with vOrchestrator”, and if you have not yet guessed, it will focus entirely on vOrchestrator.

I have prepared a story board that is scenario-based with real day-to-day use cases that you would encounter in most organizations.

Of course this lab would not be complete without having the man who wrote the book, Mr. vOrchestrator himself, Cody Bunch, on board and involved, and I was thrilled that he accepted the challenge. Without his deep involvement this lab would never have gotten off the ground, so thank you Cody.

So here goes, time to prepare a lab that can potentially be taken by more than 20,000 people.

Congratulations to Luca Dell'Oca and Andrea Mauro on the other vExpert Lab that was accepted:
Virtualize Business Critical Applications - Oracle RAC

It will be fun, nerve-wracking and stressful, but most of all a wonderful learning experience – which is why I love it!!

So here we go.

2012-05-15

OpenIndiana Installation walkthrough - Part 3

This is Part 3 of a series of posts explaining how to configure OpenIndiana as NAS storage device. The series is made up of the following parts:
  1. Background information about OpenIndiana and OS installation
  2. Network configuration and Setting up storage
  3. Presenting storage to your hosts with iSCSI and/or NFS
  4. Performance testing
At the end of Part 2 we had the network set up, VMware Tools installed, additional disk space added to the VM, and a zpool created.
What we will go through in this part is:
  1. Configuring the NFS and iSCSI services
  2. Creating a volume for iSCSI
  3. Creating an NFS folder
  4. Creating a LUN and sharing it over iSCSI
  5. Exporting the NFS volume and setting export parameters
  6. Mounting the NFS and iSCSI storage
So first we need to turn on the iSCSI and NFS services. For NFS it is just a matter of starting a service; iSCSI has a bit more to it.
svcadm enable -r iscsi/target:default
svcadm enable network/nfs/status

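A quick way to verify that both services actually came online is to query SMF for their state:
svcs network/nfs/status iscsi/target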
Now we create the target with
itadm create-target
You can check the status with
itadm list-target
Create the volume and the NFS folder (I will create a small 2GB volume) and then check the status

zfs create -V 2G disk1/iscsi_1
zfs create disk1/nfs_1

zfs list
Here you can see that disk1 is 5GB (USED+AVAIL), disk1/iscsi_1 is 2GB in size (thin provisioned) and disk1/nfs_1 has 3GB of space left in total
Now we create the iSCSI LUN with
sbdadm create-lu /dev/zvol/rdsk/disk1/iscsi_1

and add a view entry for it (the LUN masking) using the GUID that was just created
stmfadm add-view 600144f0ccf40a0000004fb114700002
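The GUID above is specific to my system – sbdadm prints it when the LU is created. If you do not want to copy it by hand, the lookup can be scripted; a minimal sketch, assuming this is the only LU defined:

# grab the GUID of the single logical unit from sbdadm's output
GUID=$(sbdadm list-lu | awk '/^6/ {print $1}')
stmfadm add-view "$GUID"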
Just to check that all is set correctly, list the LUNs
stmfadm list-lu

List the masking
stmfadm list-view -l 600144F0CCF40A0000004FB114700002
One last thing left to do is to set the NFS export permissions with the following (you will need either the FQDN of the host or the correct IP for the export permissions)
zfs set sharenfs=on disk1/nfs_1
zfs set sharenfs=root=msaidelk-esx.maishsk.local disk1/nfs_1
zfs get mountpoint,sharenfs disk1/nfs_1
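Note that sharenfs is a single property, so the second set command replaces the plain "on" value rather than adding to it. The options can also be combined in one go – a sketch, where read/write access for a 192.168.168.0/24 lab subnet is my own assumption:

zfs set sharenfs=rw=@192.168.168.0/24,root=msaidelk-esx.maishsk.local disk1/nfs_1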

And that is it – the storage configuration is complete.
Now we go over to the ESX host and mount the storage.
First the iSCSI volume – add the IP of the OpenIndiana VM as a target on the software iSCSI adapter.
After a rescan of the adapter I now have a new 2GB LUN.
On to NFS – add a new NFS datastore that points to the exported folder.
We now have a new NFS storage mount.
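If you prefer the command line to the vSphere Client, the same should be possible from an ESXi 5.x shell – a sketch, where the software iSCSI adapter name (vmhba33), the appliance IP (192.168.168.5) and the datastore label are assumptions from my lab:

# enable the software iSCSI initiator and point it at the OpenIndiana VM
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.168.5
esxcli storage core adapter rescan --adapter=vmhba33
# mount the NFS export as a datastore
esxcfg-nas -a -o 192.168.168.5 -s /disk1/nfs_1 OI-NFS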
There you have it – a fully working storage appliance on OpenIndiana.
In Part 4 I will show some performance statistics I am getting out of this VM.

2012-05-14

OpenIndiana Installation walkthrough - Part 2

This is Part 2 of a series of posts explaining how to configure OpenIndiana as NAS storage device. The series is made up of the following parts:
  1. Background information about OpenIndiana and OS installation
  2. Network configuration and Setting up storage
  3. Presenting storage to your Hosts with iSCSI and/or NFS
  4. Performance testing
We ended Part 1 with a newly installed OS and a login screen. Now it is time to configure and start to use the OS.
I found the configuration a good learning process. I do not call myself a Linux expert, but I do know my way around most Linux distributions. OpenSolaris, however, is not one I had ever played with, so there was a steep learning curve involved until I found the information I needed, which I have collected here in this post.
So.. What are we going to do in this part?
  1. Install VMware Tools (so that the VMXNET3 adapter will be recognized)
  2. Configure networking
  3. Allow SSH to OpenIndiana
  4. Add virtual storage to OpenIndiana and create a ZPool
Log in to the OS with the user you created in the previous part and then su - to the root user.
First things first – install VMware Tools, otherwise we cannot configure the network. From the vSphere console, mount the VMware Tools ISO in the guest. Extract the tarball and install VMware Tools with
tar zxvf /media/VMware\ Tools/vmware-solaris-tools.tar.gz and then
./vmware-tools-distrib/vmware-install.pl --default
Here is where I hit my first snag. The tools would not configure properly – I was presented with an error about a missing file.
The solution I found was to create the file that it was complaining about
touch /usr/lib/vmware-tools/configurator/XFree86-3/XF86_VMware
After that VMware Tools installation and configuration went smoothly.
A reboot is now needed.
The commands in OpenIndiana are quite different from other Linux distributions. To check which network card(s) are installed in the system, use the dladm command (in my VM there are two).
dladm
As you can see, the NICs have strange names. I felt more comfortable with a naming convention like the one other Linux distributions use, so I changed the names of the links with the syntax below.
dladm rename-link vmxnet3s0 eth0
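The second NIC gets the same treatment, and dladm show-link confirms the new names – a quick sketch, assuming the second link shows up as vmxnet3s1:

dladm rename-link vmxnet3s1 eth1
dladm show-link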
By default the interface does not receive its IP from DHCP – in fact, from what I saw it is not even active. In order for the NIC to receive its IP from DHCP, it needs to be configured with the ipadm command.
ipadm create-addr -T dhcp eth0/v4dhcp
To see what IP was assigned to the VM:
ipadm show-addr eth0/v4dhcp
If you have no DHCP on the subnet and would like to configure a static IP for the interface:
ipadm create-addr -T static -a local=192.168.168.5/24 eth0/v4addr
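With a static address you will also need a default gateway, and route -p makes the entry persist across reboots. A sketch, assuming the gateway for my 192.168.168.0/24 subnet sits at 192.168.168.1:

route -p add default 192.168.168.1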
Working in a console session is not as convenient as a remote PuTTY session to the VM – and here I hit snag #2. I was presented with an error:
"Error connecting SSH tunnel: Incompatible ssh server (no acceptable ciphers)".
This post led me to the solution for this problem. 
Add this to /etc/ssh/sshd_config on the VM and restart the ssh service (service sshd restart does not work!!)
Ciphers aes128-cbc,blowfish-cbc,aes256-cbc,3des-cbc
and then svcadm restart ssh

Setting up DNS is also not so bad.
Create the resolv.conf file with vim /etc/resolv.conf and put in the domain suffix and your DNS server IP
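For reference, mine ended up looking something like this (the domain is the one used throughout this series; the DNS server IP is just an example):

domain maishsk.local
nameserver 192.168.168.1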
Then configure the nsswitch.conf file so that the resolution will be done through DNS.
cp /etc/nsswitch.dns /etc/nsswitch.conf
You can check that your name resolution is working with
ping google
Now it is time to update the OS with pkg.
Since pkg is similar to apt-get (which it is), I thought I could simply run pkg update, but I was returned an error.
First you will need to update the pkg package itself with pfexec pkg install pkg:/package/pkg
This updates the package manager itself, and then you can run an OS update with
pkg update
We will also install one additional package, for iSCSI, for later.
pkg install iscsi/target
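You can confirm the package landed with a quick query:
pkg list iscsi/target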
Now we have an updated system, and it is time to add some storage to the VM (for the sake of this tutorial I only added a 5GB disk). Power off the OS.
poweroff
Add a new hard disk, 5GB Thick Lazy Zeroed, on a new SCSI adapter, and power on the machine.
First we will find the name of the disk that was added.
cfgadm -la 2> /dev/null
c3t0d0 is the first disk, where the OS is installed, and the new one that was added is c4t0d0 – this is the one we will use.
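Another way to list the disks the OS sees is the format command; it normally waits for you to pick a disk, so feed it an empty input to make it just print the list and exit (a common Solaris trick):

echo | format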
First we need to create a zpool, but what is a zpool you may ask?
Storage pools
Unlike traditional file systems, which reside on single devices and thus require a volume manager to use more than one device, ZFS filesystems are built on top of virtual storage pools called zpools. A zpool is constructed of virtual devices (vdevs), which are themselves constructed of block devices: files, hard drive partitions, or entire drives, with the last being the recommended usage. Block devices within a vdev may be configured in different ways, depending on needs and space available: non-redundantly (similar to RAID 0), as a mirror (RAID 1) of two or more devices, as a RAID-Z (similar to RAID-5) group of three or more devices, or as a RAID-Z2 (similar to RAID-6) group of four or more devices.
Thus, a zpool (ZFS storage pool) is vaguely similar to a computer's RAM. The total RAM pool capacity depends on the number of RAM memory sticks and the size of each stick. Likewise, a zpool consists of one or more vdevs. Each vdev can be viewed as a group of hard disks (or partitions, or files, etc.). Each vdev should have redundancy because if a vdev is lost, then the whole zpool is lost. Thus, each vdev should be configured as RAID-Z1, RAID-Z2, mirror, etc. It is not possible to change the number of drives in an existing vdev (Block Pointer Rewrite will allow this, and also allow defragmentation), but it is always possible to increase storage capacity by adding a new vdev to a zpool. It is possible to swap a drive to a larger drive and resilver (repair) the zpool. If this procedure is repeated for every disk in a vdev, then the zpool will grow in capacity when the last drive is resilvered. A vdev will have the same capacity as the smallest drive in the group. For instance, a vdev consisting of three 500 GB and one 700 GB drive, will have a capacity of 4 x 500 GB.
In addition, pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis.
Storage pool composition is not limited to similar devices but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to diverse filesystems as needed. Arbitrary storage device types can be added to existing pools to expand their size at any time.
The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
(Source: Wikipedia)
If you were to ask me to summarize this in my own words – a zpool is a group of one or more disks that together make up a disk volume. So let's create the first zpool
zpool create disk1 c4t0d0
and check that it succeeded with zpool status disk1
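As the excerpt above points out, a single-disk vdev has no redundancy. If this were more than a lab VM, you could add a second virtual disk (a hypothetical c5t0d0 here) and create a mirror instead:

zpool create disk1 mirror c4t0d0 c5t0d0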
That is the end of Part 2 – next up is how you can present storage to your Hosts with iSCSI and/or NFS.

OpenIndiana Installation walkthrough - Part 1

This is Part 1 of a series of posts explaining how to configure OpenIndiana as NAS storage device. The series is made up of the following parts:

  1. Background information about OpenIndiana and OS installation
  2. Network configuration and Setting up storage
  3. Presenting storage to your Hosts with iSCSI and/or NFS
  4. Performance testing
I have enjoyed using Nexenta Community Edition for a while – really I have. It is a great product, but there have always been a few annoying things that I could not get around.
  1. No VMXNET3 support
  2. High CPU Usage.
I have seen a number of posts from several sources on the benefits of ZFS and what it can do for your storage access, so I decided to try out OpenIndiana. Nexenta itself is moving over to illumian.
From the What is Page:
Q: What's the relation to the Nexenta Core Platform (NCP)? Is this still a "Debian user-space on top of an OpenSolaris kernel"?
A: No, this is not Debian. Nexenta hopes that most former NCP users will find illumian to be even more useful than NCP was. There were many limitations on how much Debian familiarity could be maintained in NCP. Our experience with NCP taught us that what we really need is a useful collection of externally maintained packages, in versions that all work together. It hurt more than it helped to try to keep that set of versions in "lock step" with particular Debian versions. It turned out to be much more practical to instead keep "in step" (version wise) with that set of packages maintained in the illumos-userland gate.
Q: What's the relation to the NexentaStor product?
A: NexentaStor is a commercial binary distribution built on top of the community-developed illumos and illumian projects.
From what I understand – I could very well be completely wrong – OpenIndiana is very similar to illumos (which in turn is very similar to OpenSolaris).

What do you get with OpenIndiana?
  • ZFS – the last word in filesystems
  • Zones – a Lightweight Virtualization Technology
  • SMF – the Service Management Facility for software lifecycle control
  • IPS – a next generation network based package management system
  • FMA – the Fault Management Architecture
  • COMSTAR – an enterprise SCSI target system supporting iSCSI/iSER/FC/FCOE
  • Crossbow – a next generation fully virtualized high performance network stack
  • DTrace – an extensive, deep diagnosis and debugging framework
  • Boot Environments – transactional operating system upgrades with rollback
  • Role Based Access Control – RBAC allows granting least-privilege access to processes and users
  • IP Multipathing – IPMP provides high availability networking and greater bandwidth
  • Integrated L3/L4 kernel mode Load Balancer
  • Integrated VRRP IP failover facility
Lately I have seen more and more posts about the very nice performance one can achieve with ZFS (if configured correctly), so I decided to try it out for myself – and of course document the process.
Download the ISO from here.
Create a new VM with 4GB of RAM, 1 vCPU, a 12GB disk (LSI Logic Parallel) and 1 VMXNET3 vNIC.
Set the OS to Other Linux – Open Solaris 64-bit, and boot your VM from the ISO.
After the VM boots you will be prompted with some questions – keyboard layout and what you would like to do (install OpenIndiana, of course) – and then comes the welcome screen.
Choose the disk – if you would like, you can manually configure the partition layout; I just chose the default. Put in the hostname and time zone, change the date/time, and define your regular and root accounts and passwords.
You are presented with a summary – then sit back and wait for the installation to complete (~5 minutes).
Reboot the VM and you will be greeted with the login screen.

Enough for Part 1 – we will continue soon with Part 2 on how to configure networking, add disk space and add that to the operating system.