Jul 09 2011

I have been looking at a few different backup products recently and found AppAssure Replay 4.

Some key benefits that stood out to me were:

  • 8 GB/minute backup and recovery speeds
  • Block-level snapshots
  • Integrated deduplication reduces the backup storage footprint by 80% or even more
  • Integrated replication to reduce storage costs and enable off-site and cloud-based backups

What I found quite cool is that, if you choose to, you can restore anything from bare metal down to a single email from a single recovery point. This is quite a flexible recovery strategy.

You can also mount a Windows drive (say a C: drive) back to a recovery point so you can access the data without having to restore the entire backup. This saves on additional storage.

Virtual standby environments create a VM (straight into a vSphere environment) from a recovery point, but what makes this cool is that it automatically updates with the latest recovery point. As you can imagine, this means being able to keep a running backup of the target server.

Recovery points can be replicated over HTTP, so you will have access to the entire backup of your server at the remote site.

Battling with software vendors like Veeam will be difficult, but Replay 4 is a viable alternative with some cool features, and it is worth considering if you are looking for a backup and replication solution.

AppAssure CEO Naj Husain discusses Replay:

Post to Twitter

Jun 27 2011

I listened to the excellent Infosmack podcast focusing on a deep dive into blade servers vs rack servers. I guess it had the desired effect, as it really got me thinking. Not so much about the main objective of the podcast (comparing blades to rack-mount servers), but about rack servers vs traditional blades vs Cisco UCS.

Over the last 6 months I have been neck deep in the Cisco UCS platform from both a blade and a rack-mount server perspective. It struck me that many of the challenges raised by the panel are addressed by UCS.

Each topic I touch on below is probably a blog post in its own right, so I have skimmed over them. My goal is to highlight that vendors are aware of these issues and are actively working to resolve them.

I’d like to highlight that this post does not go into ‘what is UCS’; for that I recommend:



Life Cycle for Chassis:

Nigel raised a very real concern for server architects and engineers around the longevity of the blade environment. With traditional rack and even tower servers, replacing them for the latest and greatest was an easy task. However, when you introduce a blade environment, an element of longevity is delivered into the infrastructure. The blade chassis is a fixture that can have 2 or 3 times the life of the server and I/O components that it houses. So how do vendors get round this? The answer is to make the chassis as basic as possible. With the Cisco UCS 5100 series chassis you get some power, front-to-back airflow and a midplane. This midplane can handle up to 1.2 Tb of aggregate throughput (Ethernet). The midplane is both the single point of failure and the life-cycle limit for a chassis. All other parts are easy to upgrade or replace; the midplane, however, is built into the chassis and is not a quick fix should it fail or need to be upgraded.

The chassis midplane supports two 10-Gbps unified fabric connections per half slot to support today’s server blades, with the ability to scale up to two 40-Gbps connections using future blades and fabric extenders. For this reason I’m fairly confident that the Cisco UCS 5100 series chassis will be future-proof for a significant amount of time.

For me, all of this shows that while blades add complexity, the concern over longevity and upgrades should be minimal, especially as UCS is easily managed and has only one non-hot-swappable component (the chassis midplane).

NOTE: traditional blade chassis have inbuilt management modules etc. This adds an additional point of aging compared to UCS blades.


As I touched on above, the midplane is a single point of failure in the 5100 series chassis. It is the only component within the chassis whose failure would result in the loss of the chassis. You could ask yourself ‘why would you put all your eggs into one basket?’; for me the risk is very small. However, no self-respecting architect or engineer would recommend putting this kind of risk into a production datacenter, so this is when you would recommend multiple blade chassis. Now we get to some testing questions:

Am I actually saving on rack space?

Have I increased my failure risk %?

Have I added additional cost and complexity?


There will be plenty of times when the answers to those questions leave little choice but to go for rack-mount servers. But for me this will only be for small businesses or small projects. When consolidating racks of servers or considering cloud-based architectures, blades make a lot more sense.


This is a big, big issue. Just as when virtualisation first came around, questions of ownership cause problems. With virtualisation it became obvious (or evolved) that a new role was necessary; this was possible because virtualisation covers each discipline only to a certain level, leaving the SAN, network and compute teams segregated. UCS now muddies the water because it encroaches into the network engineer’s realm further than ever before (especially when you add the Nexus 1000v into the mix). The same goes for the SAN and compute.

This is not solved with UCS; if anything it can be exacerbated. However, careful planning and understanding can go a long way towards improving the existing relationships between the disciplines.

Rack Space & Co-Location

It’s hard to beat blades when it comes to space in a rack. When you look at a Cisco B230 M2 and how powerful it is for a half-width blade, it’s fairly obvious that you can fill a rack with a very large amount of compute. Co-location of racks and blades is of course possible, and with UCS you can manage both from the UCSM console.


Where things get complicated with blades and rack space is when it comes to power and air con. A common rack setup from a power point of view would be twin 16-amp feeds. This should be enough to fully populate a rack with chassis. However, you run into issues with air con and managing to meet what are often relatively low BTUs (British thermal units). I once worked on a datacenter that had plentiful electricity but could not take more than one fully populated chassis and one half-populated chassis in a single rack. Unless you have a brand new datacenter built with blades in mind, you are unlikely to be able to fit out a full chassis environment (like the pic above).


One big advantage with UCS is that both blades and rack servers can be managed in the same way through UCSM via the fabric interconnects. In a traditional rack server topology each server is a point of management; in traditional blades each chassis is the management point; with UCS this is aggregated up another layer, to the Cisco 6100 series fabric interconnects.

UCSM is a Linux-based OS run from ROM and delivered through a webserver hosted on the FI. This software is the coalface of UCS and allows for the centralised management of every chassis connected to the FI. Where this becomes appealing in a cloud environment is that it allows for a topology similar to VMware vCenter, in that it can be contacted through an API and all components connected to the FI are treated like objects. Compare this with traditional blade chassis, where each chassis is the management point, meaning an individual connection to each chassis. When you start looking at a lot of chassis this becomes a bottleneck and a logistical problem.
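As a rough illustration of that API-driven model, here is a minimal Python sketch of building a login request for the UCSM XML API. The endpoint path and method names reflect the public XML API, but the host name is a placeholder, and real automation would normally go through Cisco’s SDK rather than raw XML.

```python
# Minimal sketch of the UCSM XML API object model described above.
# The fabric interconnect host name is a placeholder assumption.
from xml.sax.saxutils import quoteattr

def build_login_request(username, password):
    """Build the aaaLogin XML document sent to the UCSM endpoint."""
    return (f"<aaaLogin inName={quoteattr(username)} "
            f"inPassword={quoteattr(password)} />")

def login_url(fabric_interconnect):
    """The XML API endpoint hosted on the fabric interconnect."""
    return f"https://{fabric_interconnect}/nuova"

# Example (hypothetical host): POST build_login_request(...) to
# login_url("ucs-fi.example.local"); the response carries a session
# cookie used by subsequent object queries (e.g. configResolveClass).
```

Everything behind the FI, from chassis to blades, is then addressed as objects through this one endpoint, which is what makes the vCenter comparison apt.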


Are blades as dense as their rack equivalents? The guys on the podcast discussed what is quite a common perception: that you can pack more compute into a rack server than a blade server. It’s a complex question. There are rack servers that can outgun the blades, but that number is falling. Then you have to look at things like how many U the rack server takes up; e.g. two 4U servers packed full of compute will probably lose out to two UCS chassis packed with RAM and the new Westmere Intel chips.

Cisco can also utilise their patented memory extension technology and increase the memory amounts without increasing the number of DIMM slots.


Obviously this will change from deployment to deployment but in general if you plan carefully you can top trump the rack mount equivalent.

One point raised in the podcast was around local disk. Local disk has been minimised in virtual environments, generally just hosting the hypervisor. However, with VDI vendors utilising local disk I can see this being a potential issue for blades going forward. Having said that, companies like Atlantis Computing are working on running VDI desktops directly out of memory, and with memory density only set to get higher this is potentially a SAN-less environment (blog post pending on that 😉).

The Virtual Interface Card (VIC) is a converged network adaptor (Ethernet and FC) designed with virtualisation in mind. VN-Link technology enables policy-based virtual machine connectivity and mobility of network and security policy that is persistent throughout the virtual machine lifecycle, including VMware VMotion. It delivers dual 10 GE ports and dual Fibre Channel ports to the midplane.

Cable Management

So let’s think of a fairly common example: 10 x Dell R710 2U servers, each with an additional quad-port PCI network card and an additional dual-port PCI HBA. Let’s assume every PCI card port is in use.

4 NIC ports + 2 HBA ports per server x 10 servers = 40 Ethernet cables and 20 fibre cables. (This doesn’t include any management ports or the dual power supplies for each server.)

With a traditional blade chassis this is reduced, as you can add switches internally to the chassis; however, this model will not always work and you may need to use pass-through modules, which will keep the number of Ethernet and fibre cables high.

With a UCS blade system this is reduced significantly with the introduction of FCoE. The FCoE strategy only operates between the chassis and the FIs, allowing for up to 40 Gbps up to the FIs, with a maximum of 8 twinax cables per chassis (4 per 2100 series fabric extender).
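To make the cable arithmetic above concrete, here is a small Python sketch comparing the two topologies. The port counts are the illustrative figures from the example, not a sizing recommendation.

```python
# Cable-count comparison: 10 traditional rack servers vs one UCS chassis,
# using the illustrative port counts from the example above.

def rack_cable_count(servers, nic_ports, hba_ports):
    """Ethernet and fibre cables needed for traditional rack servers."""
    return servers * nic_ports, servers * hba_ports

def ucs_cable_count(fabric_extenders=2, uplinks_per_fex=4):
    """Twinax (FCoE) uplinks from one UCS chassis to the fabric interconnects."""
    return fabric_extenders * uplinks_per_fex

ethernet, fibre = rack_cable_count(servers=10, nic_ports=4, hba_ports=2)
print(f"Rack servers: {ethernet} Ethernet + {fibre} fibre = {ethernet + fibre} cables")
print(f"UCS chassis:  {ucs_cable_count()} twinax cables")
```

Sixty cables down to eight per chassis (plus management and power) is where the FCoE consolidation really shows.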


Nigel mentioned wireless as a possible alternative or future direction. Personally I don’t think this is just around the corner technology-wise, but it is within the realms of possibility.


This is where Cisco UCS can fall down, because of the fabric interconnects. Concerns over how much an empty chassis will impact rack server vs blade server capex are a little unfounded; where things get expensive with Cisco UCS blades is the fabric interconnects and the fabric extenders. Each can be purchased singly (i.e. a single fabric extender per chassis and a single FI managing them); however, this introduces the single points of failure that we want to avoid.


May 09 2011

VCE today announced a new Vblock infrastructure platform at EMC World. VCE have taken the opportunity to refresh the storage component of the Vblock (as EMC brought out its VNX range recently: https://vmackem.co.uk/?p=465) and also to change the basic offerings.

In addition to the new storage options, there will be expanded RAID types and more blade and fabric options.

VCE has received criticism due to what was perceived as a lack of range of the Vblock offering. To address this and to evolve the product line they have enhanced the range of offerings.

For a start, they are changing the naming conventions.

With the introduction of the new Vblock Series called 300, VCE will also be changing the name of the Symmetrix based Vblock 2 to the Vblock Series 700. The current Vblock Infrastructure Platforms 0, 1 and 1U will remain unchanged. I would expect that these would get updated at some point this year as the product line evolves.

300 Series:


Applying the EX, FX, GX and HX tags to the 300 series to differentiate between scales of deployment means that greater choice and flexibility is now given to the customer.

700 Series:


The 700 series is aimed at Enterprise customers and is configurable with its compute stack (blade choices) meaning it can be scaled to meet requirements.


Vblock Platform Series:



Expect to see more Vblock products added to the line throughout the year as VCE attempt to cover all possible customer requirements.


Main Vblock Components:



For more information on the Vblock update, or Vblocks in general, please see the links below:


Mar 29 2011

Cisco Systems has announced its intent to acquire service catalog and self-service portal software provider newScale.

“newScale is a leading provider of software that delivers a service catalog and self-service portal for IT organizations to select and quickly deploy cloud services within their businesses.”

For those working in the cloud technology area this will not have come as a shock. To me it seems a very natural progression after Cisco acquired Tidal Software.

Check out the official announcement below:


Once Cisco gets the Tidal and newScale software aligned in such a way that they can be seen as one cloud solution, I think they will have a very credible competitor to other cloud software on their hands. Along with Cisco’s UCS, it makes a very attractive package for customers wanting a comprehensive cloud solution.


Mar 25 2011

A rather non-descriptive and annoying error when setting up a Nexus 1000v for the first time. You can also see “HTTP/1.1 400 Bad Request” along with the error message.


Incorrect port


You are connecting to an ESX server and not the vCenter server.


Change the port in vCenter to port 80.

‘Administration -> vCenter Server Settings  -> Ports’



Point it to the correct vCenter server.


Feb 07 2011

I’m not a sales/marketing person, so this post is not aimed at trying to sell anybody anything. I thought that those people interested in cloud technology and offerings would be interested to know about my company’s new private cloud offering.

Announced at Cisco Live Europe last week, VDC Private (Virtual Data Centre) is a flexible private cloud that fits around the business needs of the customer.

Here is the Register Article:


If you are interested in VDC Private drop me a line and I can discuss in more detail.


Jan 19 2011

If you receive an ‘authentication’ error when trying to log in as ‘sysadmin’ to the UIM web interface, this is due to the sysadmin account being locked. This can be caused by entering an incorrect password more than 5 times, although I have seen it happen after a straight reboot.

To resolve this issue:

  1. Log in to the UIM server.
  2. Open browser (assuming you have this available).
  3. Navigate to http://uim.local:8880/jmx-console.
  4. In the filter box enter VC and click search.
  5. Select service=SecurityService.
  6. Locate Void resetMasterAdminUser() and click invoke.

This will unlock the sysadmin user. It may also be worth restarting the vcmaster service to ensure that UIM picks up the unlocked account.
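As a hypothetical sketch, the same operation could be invoked over HTTP rather than clicking through the jmx-console UI. The MBean domain (“VC”) is an assumption based on the filter used in the steps above, so check the exact object name on your own jmx-console page before scripting this.

```python
# Hypothetical: build the jmx-console URL that invokes
# resetMasterAdminUser() via the JBoss HtmlAdaptor.
from urllib.parse import urlencode

def build_reset_url(host="uim.local", port=8880, domain="VC"):
    """URL invoking resetMasterAdminUser() on the SecurityService MBean."""
    params = urlencode({
        "action": "invokeOpByName",
        "name": f"{domain}:service=SecurityService",  # assumed object name
        "methodName": "resetMasterAdminUser",
    })
    return f"http://{host}:{port}/jmx-console/HtmlAdaptor?{params}"

# urllib.request.urlopen(build_reset_url()) would then perform the unlock.
```

Handy if the account keeps locking after reboots and you want the unlock in a scheduled job.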


Jan 18 2011

Today sees what EMC is calling ‘its biggest ever launch’ with the VNX product line. Here’s an overview of the product range.

The VNX family consists of two product series—VNX and VNXe.

  • The VNX series is EMC’s next generation of midtier products, unifying Celerra (NS) and CLARiiON (CX4) into a single product brand and extending all the Celerra and CLARiiON value to VNX series customers. It is targeted at the midtier to enterprise storage environments that require advanced features, flexibility, and configurability. The VNX series provides significant advancements in efficiency, simplicity, and performance.

New benefits of the VNX series include:

– Support for file (CIFS and NFS), block (Fibre Channel, iSCSI, and Fibre Channel over Ethernet), and object

– Simple conversions: starting with a VNX series block-only system and simply adding file services, or starting with a file-only system and adding block services

– Support for both block and file automated tiering with Fully Automated Storage Tiering with Virtual Pools (FAST VP)

– Unified replication with RecoverPoint support for both file and block data

– Updated unified management with Unisphere, which now delivers a more cohesive, unified user experience

– Support for up to 256 TB per X-Blade (except the VNX5300, which will support up to 200 TB)

  • The VNXe (entry level) series is a new offering that opens up incremental market opportunity for EMC and is designed for and targeted toward small-to medium-size businesses (SMB), commercial customers, remote office/branch office (ROBO), departmental/branch offices, or for customers who are IT generalists—smaller environments with limited storage expertise.


EMC’s new VNX family is comprised of two series:

· VNX series for the middle to high-end of the midrange market (5000 class and 7000 class)

· VNXe series for the lower end of the market (3000 class)

The table below outlines the VNX family and the current model that each VNX series replaces.




VNX series (current products replaced):

  • Celerra VG8
  • Celerra VG2
  • CX4-960 and NS-960
  • CX4-480 and NS-480
  • CX4-240 and NS-480
  • CX4-120 and NS-120
  • CX4-120 and AX4

VNXe series (current products replaced):

  • CX4-120, NS-120, AX4 (iSCSI) and NX4
The current CLARiiON CX4 series and Celerra NS integrated platforms and their associated (Fibre Channel) drives will continue to be offered for some period of time. With the VNX series, customers get the best of both product lines—with block only, file only, or unified—in a single, multiprotocol offering.

As with most new product introductions, EMC will continue to offer previous generation products, including Fibre Channel disk, I/O, and software upgrades, for some time after the VNX series has been made available to the public.

Unisphere update

It is important to position Unisphere as the central management platform for the VNX family. That said, there are some unique capabilities available for the VNX series and VNXe series to address unique requirements.

The VNX series will be managed by Unisphere 1.1, which introduces a single system view of unified, file, and block systems with all features and functions available in a single interface. Unisphere 1.1 will also be compatible with older systems, including CLARiiON and Celerra systems running DART 6 or higher.

The VNXe series will be managed by Unisphere 1.5. Unisphere 1.5 for the VNXe series was designed for an IT generalist who has experience as a system or network administrator and with typical applications such as Microsoft Exchange, VMware, Hyper-V, and shared folders (CIFS/NFS). Every storage-related task is accomplished using natural language without the complexity of storage jargon. The result is that the user is able to take immediate advantage of the VNXe’s features without additional certification or training.

While both VNX and VNXe series systems will offer integrated support pages in the management interface, there will be major differences in the level of services delivered. VNX will be aimed at traditional direct users. VNXe will accommodate service-enabled partners, including the following highlights:

· VNXe’s Unisphere interface will have a few links to eServices on an EMC support page disabled when a service partner sells the VNXe. The icons that will not appear are Chat, Manage Support Contacts, Service Center, and Customer Replaceable Parts. If the product is supported directly by EMC, these icons will be enabled in Unisphere.

· Whenever the VNXe is routed to a support page that requires input from the customer, VNXe will automatically populate those fields for the user.

– For example, if a part fails and is under warranty with EMC, the part replacement form will be populated by VNXe with the part information as well as the customer‘s ship-to information for the replacement part. VNX series systems will provide this same capability through the Unisphere Service Manager.

All in all I think this is a great move from EMC to wrap their offerings into a single management methodology.


Jan 03 2011

I have been pretty quiet over the last couple of months both on my blog and on twitter. Anyone that knows me will know this is quite unusual.

While I have missed the interaction with my fellow geeks, I needed to re-charge my batteries and get my focus back. I had been contracting for over 2 years and felt it was the right time to take a short break and spend time with my family. Over 2 months later, I feel refreshed and ready to take on anything that can be thrown at me. Also, spending 24/7 with your family can really make you want to go back to work. (My other half does not read my blog 😉)

So while quite a few of you reading this may already be aware that I have accepted a new role, I thought I would let the rest of my readers know. I have left the uncertain and quite frankly unfriendly world of contracting behind me and have gone back to the warm embrace of permanent work. While I’m sure Mr Tax Man will also be rubbing his hands in anticipation at this news, I felt the opportunity was far too great to turn down.

I am now a Virtualisation Specialist/Consultant for BTInet. I won’t go into great detail about BTInet, but if you would like to find out more about the company please visit the website below.

BTInet Home

So expect lots of posts on Clouds, UCS, vBlocks, Flexpods and VMware. You may also see some posts on VMware’s rivals also.

I am looking to keep my hand in with the communities, as networking is a fantastic way to meet like-minded people and contribute my own opinions. All being well (and if I’m allowed) I will be attending the London VMUG in Feb and the next vBeers.

I am very excited about 2011 and it couldn’t have got off to a better start. I’m sure January is going to be a manically busy month for me, but I’m sure it will be immensely enjoyable too.


Oct 16 2010

So another VMworld Europe is over, and the majority of attendees will be left with a sense of satisfaction and a slightly painful hangover. Some also left with iPads, netbooks and countless T-shirts, USB keys, pens and even sponsored energy drinks.

This year’s show was widely criticised when it was announced for being so close to the main event in San Fran, and it was anticipated that the number of attendees and vendors would be affected. This was true to a certain extent, but other factors compensated.

The good:

Tech Preview

The technology presented by VMware at this show was not the staggering list of features and new products we had in Cannes, but it was more than enough to get our teeth into. The focus was mainly around vCD and its peripherals (vShield, Chargeback, networking etc.). We also saw plenty on View 4.5 and its evolution. I also particularly liked the look of project ‘Horizon’, which will be VMware’s SaaS solution arriving some time next year.

The vendors had plenty of tech to keep us all interested between breakout sessions, and it was interesting to see how the landscape is changing from a vendor perspective. Some very interesting solutions to the old VDI I/O bottleneck issue grabbed my attention, and the guys at Atlantis Computing were on the full marketing warpath. I also liked what HP/3PAR are trying to do; the direction that venture takes will be hugely interesting and could make a huge mark in the VDI space. Perhaps the theoretical tide is swinging back towards SAN rather than local disk? I think I’m edging into dangerous territory and getting away from the point…

I’m a big fan of UCS, and that and the vBlock were really interesting to get a good look at in the flesh. While I’m not as much of an ITIL guru as Steve Chambers (anyone following Steve will know this), I am still very much of that way of thinking, so Service Manager really captured my attention, and vCloud Request Manager also looked to fit the cloud model very well.

Attendee figures

Paul Maritz announced that this has topped 6000, a record for the Europe event.

Vendor attendance

Compared to the Cannes show last year (it seems a lot longer ago), I did not notice a great number of vendors (that I cared about, anyway) missing from the event.


Copenhagen is an inspiring and beautiful city with excellent transport and plenty of accommodation. Its nightlife meant there was plenty to do outside the event.


Solutions Exchange

Overall the Bella Centre was more than adequate for the event. As it was scaled down somewhat this year, there was more than enough room to host such an event.


This year seemed to have more vendor parties than ever. I was pressed frequently by attractive young girls trying to get me to go to their party. It was plainly obvious that those girls wouldn’t actually be there though. I hit the VMUG party on the Monday evening and the Veeam party on Wednesday, and despite the tiredness had a great time at both.



This year in San Fran we saw the labs being delivered by a cloud service provider for the very first time. This theme was continued in Copenhagen, and I was very pleased to see that the lab team reached their goals on VMs created and destroyed and people attending.

I found the labs themselves worked very well; they were responsive and I had no issues. One person next to me had all sorts of issues, however, and had to keep getting an engineer over to help. I initially thought bad things about the lab he was doing, but when he asked what ‘putty’ was, and then what ‘ssh’ was in response to the answer to his first question, I realised it was more likely user error.


Breakout Sessions

I really liked the format of a few of the breakouts where it was basically an intimate 15 or 20:1 ratio to the tech lead. This was far more informative, and I felt I got an honest and frank answer to most of the questions I asked. I also liked the ‘Who Wants to Be a Millionaire’-style surveys at the beginning.

Social Media and Bloggers Lounge

I was blown away by this, as I did not expect there to be this level of attention around it. I always knew that this area was popular, and the number of people trying to get into the social scene (me included) has exploded in the last 18 months (I think I may have said that in my interview a few times, haha). You only have to look at the VMUGs and their popularity, along with the vBeers events, to see how much people enjoy spending time in the company of like-minded people.

I was very pleased to have been involved with this and to meet some really cool (geek cool, that is) people that I will definitely keep in contact with.


The bad:

Tech Preview

While there was plenty to consider with the announcements (specifically Horizon, View 4.5 and vCD), there was no getting away from the fact that these technologies had been announced and in use for a fair number of weeks. This gave everyone enough time to find the failures and missing features that most VMworlds would not experience. I know I may be in a minority on this looking at the percentage of attendees, but the advanced guys were all feeling a little deflated.

Breakout Sessions

As a result of the lack of technology announcements, the breakout sessions often felt like pure marketing rather than technical content. Perhaps I’m being a little over-critical here, but I cannot honestly say I went away having learned anything I didn’t already know from reading the announcements and documentation or playing with the products prior to the show.


While on the whole I liked the labs and their structure, I thought the queues were a lot worse than Cannes. This is in part due to the ‘cloud’ way it was done; Cannes had separate banks of desks for each lab, which meant that if you arrived at a lab with a huge queue you could do your second or third choice. I found the queuing in Copenhagen frustrating, and as a result I didn’t get as many labs done as I’d have liked. Maybe if I’d been more dedicated I’d have arrived at 8 every day.


While I don’t think the venue was bad, I thought the food was a far cry from that in Cannes. I’m not a fussy eater, but the lack of variety and the peculiar food combinations meant that I was left with bagels most days. When I did try the hot food (lamb curry, I think) it was warm at best.


The main party lacked something this year. While it was basically a similar setup to Cannes (games machines etc.), it lacked a certain intimacy you got at last year’s venue and felt a little like we were all cattle in a barn being made to drink as much lager as possible. Not to say I didn’t at least try to drink as much as I could ;-). I’d say the place was only half full by 10:30, as most vendors had dragged people off to bars. Also, the pre-main-event entertainment was a little boring and the 60s theme was very odd. On the bright side I was with great company, and any party is only as good as what you make it.

Solution Exchange

A much smaller exchange than Cannes. While I don’t think there was a radically reduced vendor attendance, the size of the booths was much reduced, as were the free gifts. The iPad giveaways from the bigger vendors drew crowds, but the usual gifts were reduced to pens and hats.

The not so great:

Simon’s pink, flowery shirt.

Celebrity spotting (Simon Long)

and my interview.

David Owen (@vMackem) on VMwareTV

Suggestions For 2011

All in all I enjoyed this year’s VMworld immensely, and most of the bad points I highlighted are not issues that would put me off. If I had to say which was the best out of Cannes and Copenhagen, I would probably have to give it to Copenhagen. The whole social networking scene has made it impossible (if you wanted to, that is) not to have access to even the most famous faces in the industry.

I’m fairly confident that Copenhagen has been booked in for next year so I’m not going to suggest changing that however I feel the following would make the experience better:

  • At least have one big announcement at the Europe event. This won’t leave us feeling like the poorer cousins across the pond.
  • Place twitter names and blog addresses clearly on the badges. While not everyone has these, it would be great for those that do to be identified by their virtual persona.
  • Have the labs accessible through wireless and allow people to use their laptops throughout the venue. The wireless was pretty good (apart from the odd drop-out), so I think this would be achievable with some reconfiguration on the lab side.
  • Split the party into multiple rooms like Cannes.

