We are merging the blog with our new website, so in future all new posts will be going to millennia.it/blog.
At some point this site will close down (probably when I need to make my next subscription payment!)
The events in Paris could not go without comment, and I think this blog entry sums it up nicely.
Another SSL hack follows Heartbleed to keep our systems management people awake at night.
Modern society needs copious amounts of energy to survive. Renewables as they stand cannot replace what we have today, and this excellent article explains why. A must-read in the argument over the way forward for future energy production, something to which the IT industry is inextricably linked.
Pick up a research paper on battery technology, fuel cells, energy storage technologies or any of the advanced materials science used in these fields, and you will likely find somewhere in the introductory paragraphs a throwaway line about its application to the storage of renewable energy. Energy storage makes sense for enabling a transition away from fossil fuels to more intermittent sources like wind and solar, and the storage problem presents a meaningful challenge for chemists and materials scientists… Or does it?
Guest Post by John Morgan. John is Chief Scientist at a Sydney startup developing smart grid and grid scale energy storage technologies. He is Adjunct Professor in the School of Electrical and Computer Engineering at RMIT, holds a PhD in Physical Chemistry, and is an experienced industrial R&D leader. You can follow John on twitter at @JohnDPMorgan. First published in Chemistry in Australia.
Several recent analyses of the…
Everybody seems to think they know the answer, but sometimes I wonder if they even understand the question.
Hot on the heels of the launch of VMware’s EVO:RAIL, and somewhat more under the radar, Maxta have announced Maxdeploy, in which they seek hardware partners for their software only hyperconvergence solution.
Maxta CEO Yoram Novick has been quoted as saying “It’s very clear that customers don’t want to buy storage software and be their own integrators”.
Well, yes, the ones you talked to, maybe, but there is no one-size-fits-all solution in this space, and so the answer ain't as easy as cosying up to SuperMicro and thinking all is well.
From my experience, an appliance pre-configured and loaded with hyperconvergence goodness is a really quick way to get up and running – principally because I just don't have the time to play system integrator and work out all the permutations of chassis, motherboard, CPU, RAM, storage, NICs, BIOS, firmware, etc. that I would need to develop a stable system.
In this way I can see Maxta’s point, and perhaps for their target market this works out, but there are organisations out there that think very differently.
They are the large organisations that carry such deep discounts on commodity hardware that they laugh in the face of the prices put forward when these commodity bits and pieces are converted into appliances. For them it is all about the software, how it works, how it performs, how it’s supported, how it gives them ROI.
They can pull in an order for any configuration of commodity server to run it on at the stroke of a pen, but they don't own the IP of the software that ties it all together – and, as often as not, they would like the ability to choose their hypervisor as well.
So while some hyperconvergence solutions bind you to potentially inflexible hardware platforms, such as the four-node minimum scale block of EVO:RAIL, don't lose sight of those that might be going the other way.
The software defined datacentre will always need hardware, but nobody said it HAD to be exactly a certain type of hardware.
Don’t save all your excitement for the appliance; save it for the software, because if that doesn’t deliver then neither will your infrastructure solution.
“Software Defined” has become the epithet of the Nutanix solution, but you will always need hardware and hardware will always fail.
Recently we installed a relatively new NX-3460 for a Proof of Concept (POC), with all four nodes showing as up and running in the Prism management interface with no alerts. However, the storage total looked a little light, so on investigating the hardware section of the management interface we noticed 3 HDDs were missing from node A of the appliance. Not failed, just not there!
Reseating made no difference and there were no failed lights on the disks. Swapping disks into other bays showed that the disks themselves were not at fault. It’s important to note that this wasn’t a case of an in-use storage system losing disks, which would have thrown errors; because no storage pool was defined initially, these disks simply never came online during setup and so didn’t show in the storage pool when it was created – hence the low total readings that alerted us to the issue.
The SATA connector was assumed to be the problem, so a swap-out node was arranged. When the replacement node arrived, the SATA DOM from which the node boots was moved across, and the node was replaced and booted. During this time the cluster on the other 3 nodes continued in ignorance, with just a few alerts complaining of node A’s disappearance.
This did not solve the problem – the three disks stubbornly refused to be seen. It was decided, therefore, that a total chassis replacement (as this carried the passive mid-plane into which all the nodes hot plug) was the only option, so it was duly ordered for next day delivery.
That night both SSDs in node C went offline which took out the node itself as the metadata disks were lost. However, the Controller VM didn’t actually fail as it boots on the SATA DOM and neither did the ESXi host – also present on the SATA DOM. The cluster continued in an initially degraded but still fully functional state! Data held on node C had duplicate blocks created on the surviving nodes for data consistency.
So now this still working cluster had 5 disks in trouble – 3 HDD on one node and 2 SSD on another, but it didn’t even stop for breath. Once the automatic recovery of node C was complete, the cluster wasn’t even degraded and could actually have taken another hit (say if node A played up again).
The chassis was swapped out the next day by simple replacement, one at a time, of the nodes and disks. This was the only point (about 30 minutes) when the cluster was actually down! All disks were then confirmed visible again, and although the SSDs needed support intervention to resuscitate rather than simply coming back online, eventually all four nodes and all disks were up and happy.
Conclusion: Nutanix software continued with a working cluster even during a few days of multiple disk failures and even a collapsed node. No data was lost at any point.
The total outage of 30 minutes was just when the entire chassis was being swapped out.
Nutanix software is totally prepared for the inevitable hardware failure and the Energizer Bunny just keeps on going and going…..
First of all, an update to my blog from yesterday: I am very grateful for an almost immediate reach-out from Dheeraj, CEO at Nutanix, and a subsequent 20-minute phone call to discuss the concerns I had raised. This only goes to show what a different breed Nutanix are as an organisation from anything I have come across before, in that this kind of engagement is even possible, let alone natural; although I can’t help thinking that my ability to get this kind of visibility with a nine-figure run rate company only has a limited lifetime – at least until I’ve developed my own million-dollar run rate with Nutanix solutions.
Think I just heard an Amen from their SVP of Sales there 😉
In the interest of balance I’d like to direct readers to two pertinent blog entries, one from Dheeraj, and the other from Steve Kaplan, their head of channel. There are strong points in there based on experience, and while I’ll still be cautious until I see what develops it is time to move on and concentrate on the good as there is much work to do.
One of the points Dheeraj made, both to me and in the blog, was basically a refutation of my assumption they were moving to a software plus hardware HCL type model, which is interesting. All the more so because of this announcement the same day from VMware, in which the HCL model is obviously pushed forward very strongly.
I have made a strong presentation in the past about the pain points I have suffered building my own solutions, even with HCL-stated parts, and although there are many, many people out there that want to do just that, in the Enterprise space I work in many just want to get a working, trouble-free solution in and running and get on with their jobs. This is why NetApp and EMC, despite their price points, have so much of the storage market.
I am in no way invalidating the VSAN product – it will have high traction in the SME space, I feel – but for me and for a lot of other busy people the appliance approach to converged, web-scale IT infrastructure has a lot going for it, and sometimes that convenience is worth paying for.
I run an Infrastructure-as-a-Service solution, and in Nutanix I see an IaaS supplier – helping me move on and tackle the configuration of VMware / Hyper-V / Citrix, etc. without having to worry about whether I’ve bolted my infrastructure together properly; and when I want to scale, I don’t need to do anything but decide by how many nodes.
There are great things ahead in this IT development, and with the Dell deal I think Nutanix may have just fired their second-stage rockets – destination Moon.
First of all, kudos to Nutanix for their internal decision to inform the channel prior to the press release, which meant I could make this article timely rather than days after the event. Few vendors consider their channel has anything to do with their internal decision making, and while we may have no veto or input to a business decision, acknowledging we have a part to play is very important.
We may only be the size of tug boats compared to a supertanker, but supertankers can’t dock without tug boats.
If you have read my recent post you will have seen me live through my experiences with Dell equipment, how I supported the requirement of a Nutanix appliance to run their product, and why I thought it made sense. So hearing that Dell were going to OEM the Nutanix software in an appliance of their own caused more than a little trepidation for me. After all, going from a heavily tried and tested platform tuned by a software vendor to one where software is applied to hardware with different components and firmware certainly dilutes my primary message in defence of the Nutanix appliance.
However, taking a step back it is important to reinforce that Nutanix is a software company, and indeed has been criticised for being ‘proprietary’ because it required a purchase of their appliance and there was no software only SKU. Starting to expand the range of hardware that runs the Nutanix software re-emphasises the “software-defined on commodity hardware” message of Nutanix – at the end of the day you will always need hardware, and it needs to be as generic as possible.
For the sake of their amazing support it is in the interest of Nutanix to enforce a strict reference architecture on any OEM hardware deal, so that the issue of finger-pointing does not resurrect itself and kill the “one back to pat” USP that Nutanix has enjoyed to date with its one-stop central support model. Vendors like VCE claim a one-stop support system, but on escalation problems go back to their respective vendors’ third levels and then it’s just like any other server/SAN solution. Level-zero call centres really aren’t the solution.
So that’s my view on where I see Nutanix coming from with this decision, what about from the channel perspective?
I’ve always been guarded about the function of a channel with any vendor, and their actual buy-in compared to what they say. My personal experience of Dell is that they talk a good game, but in practice, as a small reseller, trying to sell into a large customer you don’t already have a foothold in is almost impossible. Their account managers are generally at war with each other over commission, and all sorts of dodgy goings-on seem to occur with their deal registration system.
I’ve had a six-figure deal I engineered yanked out from under me when it was discovered the client had a deal reg posted to them by their account manager – even though they had no involvement in the project. Although I wasn’t actually banned from the sale, Dell made sure the client’s direct prices were 5% lower than I could get even if I deal-registered the solution. It’s not the only time this has happened, but it is the one that sticks with me the most, for obvious reasons!
The practice of signing every business up as a partner, effectively wiping out the margin potential of any actual partner, is rife in Dell and shows a complete contempt for the efforts of a partner to resell Dell solutions into large organisations. It is now generally acknowledged that reselling PCs is pointless, as you can often buy them cheaper on the public website than we get in “premier” pricing as partners. Even with regard to servers, it seems that unless you can break the £10k deal reg minimum, pricing is unlikely to be competitive with their regular deals or with what any reasonable-size business could get direct.
Then when you try to actually register a deal it’s often denied because one already appears for that customer, tied to their account manager – in one case I had one denied because it was SIMILAR to another company name at a completely different address! Dell deal registrations appear from this to be generic and long-lasting, not project-centric like Nutanix’s, although when filling out a Dell deal registration it implies otherwise.
This isn’t a problem only I have experienced either, and I’ve had a lot of conversations with Dell partners who feel similarly aggrieved, including Enterprise ones.
So while I broadly agree with the direction Nutanix is taking – ultimately to a software-only SKU as long as you use HCL infrastructure – I am in a position of selling a product with a defined hardware price against what is now basically a competitor, present in my entire target market, with the ability to loss-lead its hardware to snag a deal.
Nutanix have informed me that everything, including from Dell, needs to go through the Nutanix deal reg protection system and therefore there will be no pricing advantages, but so many organisations have already purchased Dell equipment at some point that I imagine I have much tougher sells ahead of me; my Nutanix marketing may just lead to somebody picking up the phone to their Dell account manager, rather than responding to me.
I also have to wonder how Nutanix can guarantee that they can provide a level playing field to their direct partners, when a major partner has the ability to independently price hardware.
So I have to trust that my in-depth knowledge of the product – I run it in production, after all – my belief in and focus on Nutanix rather than a range of competing products, and my solution-based approach to customer requirements will enable me to continue to get a return for the very heavy financial and personal commitment I and my business have made to this vendor.
Time will tell.
This is useful to know when you hit that 32GB limit and don’t want to go to the trouble of migrating to a new server. Of course this isn’t an issue now in 2012.
Hit a little issue in my lab today. It happens that I went ahead and installed Windows Server 2008 R2 Standard for a bunch of my lab VMs. Now the issue is that I need Windows Server 2008 R2 Enterprise Edition to support the Windows Failover Clustering feature.
So, long story short, I didn’t want to have to fully rebuild my lab VMs, so I went looking around and found a very nice way to do an in-place upgrade to Enterprise Edition.
The command that we are going to use is DISM.exe (the Deployment Image Servicing and Management tool), which is available in Windows 7 and Windows Server 2008 R2. You can find out more about the tool HERE
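The in-place upgrade described above can be sketched roughly as follows, run from an elevated command prompt on the Standard edition server; the product key shown is only a placeholder, so substitute a valid Windows Server 2008 R2 Enterprise key of your own:

```shell
REM Show which edition is currently installed
DISM /online /Get-CurrentEdition

REM List the editions this installation can be upgraded to
DISM /online /Get-TargetEditions

REM Upgrade in place to Enterprise (XXXXX-... is a placeholder key)
DISM /online /Set-Edition:ServerEnterprise /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
```

The server reboots to complete the edition change, and roles, features, and data are preserved – which is exactly what makes this preferable to a full rebuild.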
Article updated June 10th 2014 – scroll to Update section below for updated comments:
Software defined seems to be the latest buzz phrase doing the rounds recently: software-defined storage, networking, and datacentres are hitting the marketing feeds and opinion pieces as terms like Cloud are now considered mainstream, and not leading edge enough for the technology writers and vendors looking for the next paradigm.
Because Nutanix supply their hyperconverged compute and storage solution with hardware, there have been many comments that their product isn’t truly software defined; but it is, despite the hardware, and this is why.
In everything that they do Nutanix are a software company. Their product is the Nutanix Operating System (NOS), which forms part of the Virtual Computing Platform. They do not produce any custom hardware, everything that NOS runs on is commodity x86 hardware, no custom ASICs, drives, NICs, boards, etc. The reason they provide hardware with their software solution is very simple – supportability and customer service.
I run a modest hosting company and, being extremely budget conscious (as in, I didn’t have any!), I looked for the cheapest route to market that I could while still feeling somewhat secure about the service I provide. The problem is that this is a lot harder than you may think, and in the complex world of virtualisation hardware compatibility is still very much with us; it may be abstracted away from the guest VMs, but the poor old infrastructure manager has it in spades.
Last year I had two problems that showed this in high relief:
The first was a BIOS issue we encountered soon after buying 4 identically specified Dell PowerEdge 715s in May 2013. Not long after they entered production we began seeing random PSoDs (the good old VMware PURPLE Screen of Death when it kernel panics) on these servers. Multiple calls to both Dell and VMware resulted in attempts at hardware replacement, before it surfaced that a bug had been introduced into the BIOS at version 3.0.4.
This took 3 months and two attempts to finally fix at 3.1.1, but even then we had some (different) PSoDs occurring at random. After another two months this was eventually traced to the SD cards we used to boot ESXi and an interaction with an SD card on the DRAC remote management card. Disabling the DRAC SD card stabilised the system, some 7 months after purchase.
The second was the need to go to SSD storage for workloads which were soaking our poor EqualLogic and other HDD SANs. Again, budget was so limited that we turned to building a SuperMicro box, filling it with Samsung 840 Pro drives (not Enterprise at the time, but since reclassified for such use as they are so brilliant at what they do), and putting StarWind (on Win2K8R2) across the top. This solution worked brilliantly and has never failed, but it is, at the end of the day, single controller by its nature – so it was always at the back of my mind that a nasty shock was waiting for me if a DIMM popped or even if Windows crashed (hardly unlikely in my experience).
Before you ask: no, we couldn’t afford HA at that time – all those extra disks, duplicate chassis, and increased StarWind licence fees made it eye-watering. We have today created just such a beast in our new facility, and its HA nature now goes some way to assuaging the fact that it is still a home-grown all-flash SAN solution – just one with better uptime potential.
So when the opportunity arose to create a second site that our production services could move to, it was time to bite the bullet. After a number of investigations, including Maxta and Simplivity (VSAN was still in beta at this time), I decided on Nutanix to deliver this new platform.
Now nobody can accuse Nutanix of being cheap – they are targeting an Enterprise space and I am definitely not Enterprise. In fact I may have one of the smallest companies on Earth with a production implementation of this; it took a significant bite out of my turnover, let alone my IT budget. However, what attracted me was that the hardware was fully tested and compliant, such that should a problem occur there would be no finger-pointing from the “software” vendor and no wasted days before actually engaging support on the problem.
This, in a nutshell, is where the value lies in the Nutanix appliance. It is fully tested and certified to work with their software, so that should you call support they get straight on with whatever software problem it could be and don’t start blaming BIOS, firmware, RAID card types, NICs, drives, etc., ad nauseam.
It may be that one day Nutanix has a software-only SKU available – it can be done technically today but isn’t on the price list – but they would have to enforce a rigorous HCL to make sure their incredible support levels weren’t diluted. In that case the hardware may not be as cheap to independently source as people reckon, and you would need third-party support on top (included for hardware and software with the appliance); and of course if Nutanix aren’t selling hardware they either take a hit on their bottom line or increase their software prices – nobody is in this for kudos, it’s business!
In summary, I chose Nutanix for one reason above the obvious raft of Enterprise features I was gaining: to be able to sleep nights knowing my home-grown or loosely cobbled-together hardware solution wasn’t going to go bump in the middle of the night.
A number of comments and articles have subsequently appeared (this being a prime example: http://www.theregister.co.uk/2014/06/09/hyper_converged_kit_what_for/) which this blog entry seems to have been pre-written to debunk. However, one point I didn’t make, because my tests came after posting this entry, was the time to deploy.
I took a bare, out-of-the-box Nutanix block, racked it, connected it to a 10G switch, and powered it up. I had their Foundation software already running on my laptop, so when the nodes came online they appeared in the Foundation console through IPv6 link-local automatic scanning.
I then hit the install button, and over the course of the next 40 minutes all I had to do was watch a blue progress bar run from 0% to 100% – at which point I had a fully configured Nutanix cluster with running, base-configured (as in accessible) ESXi 5.5 hosts.
In the following 20 minutes I installed a vCenter 5.5 appliance on one of the hosts; on power-up I did the base configuration on port 5480, logged into vCenter with the VMware Client, and added the hosts – so in 1 hour I had a functioning ESXi 5.5 cluster.
The Foundation software can do this to 5 blocks (20 nodes) simultaneously! So it’s reasonable to assume that even pretty large VMware clusters can be built out of the box in a day.
Can anybody else really do that?
There won’t be many.
Also, although this was VMware, the same can be done on Hyper-V, with Foundation installing and base-configuring Windows 2012 R2 in parallel on the nodes.
For KVM it’s even quicker, as the nodes come pre-installed from the factory and all you need to do is build the Nutanix cluster (in parallel), so theoretically you are looking at 20 minutes.