Nutanix and the deal with the Dell-vil – a personal view

First of all, kudos to Nutanix for their internal decision to inform the channel prior to the press release, which meant I could make this article timely rather than days after the event. Few vendors consider their channel to have anything to do with their internal decision making, and while we may have no veto or input into a business decision, acknowledging we have a part to play is very important.

We may only be the size of tug boats compared to a supertanker, but supertankers can’t dock without tug boats.

If you have read my recent post you will have seen my experiences with Dell equipment, how I supported the requirement of a Nutanix appliance to run their product, and why I thought it made sense. So hearing that Dell were going to OEM the Nutanix software in an appliance of their own caused more than a little trepidation for me. After all, going from a heavily tried and tested platform tuned by a software vendor to one where the software is applied to hardware with different components and firmware certainly dilutes my primary message in defence of the Nutanix appliance.

However, taking a step back, it is important to reinforce that Nutanix is a software company, and indeed has been criticised for being ‘proprietary’ because it required the purchase of their appliance and there was no software-only SKU. Starting to expand the range of hardware that runs the Nutanix software re-emphasises the “software-defined on commodity hardware” message of Nutanix – at the end of the day you will always need hardware, and it needs to be as generic as possible.

For the sake of their amazing support it is in the interest of Nutanix to enforce a strict reference architecture on any OEM hardware deal, so that the issue of finger pointing does not resurrect itself and kill the “one back to pat” USP that Nutanix has enjoyed to date with its one-stop central support model. Vendors like VCE claim a one-stop support system, but on escalation problems go back to their respective vendors’ third-level teams and then it’s just like any other server/SAN solution. Level-zero call centres really aren’t the solution.

So that’s my view on where I see Nutanix coming from with this decision; what about from the channel perspective?

I’ve always been guarded about the function of a channel with any vendor, and their actual buy-in compared to what they say. My personal experience of Dell is that they talk a good game, but in practice, as a small reseller, trying to sell into a large customer you don’t already have a foothold in is almost impossible. Their account managers are generally at war with each other over commission, and all sorts of dodgy goings-on seem to occur with their deal registration system.

I’ve had a six-figure deal I engineered yanked out from under me when it was discovered the client had a deal reg posted against them by their account manager – even though that account manager had no involvement in the project. Although I wasn’t actually banned from the sale, Dell made sure the client’s direct prices were 5% lower than I could get even if I deal-registered the solution. It’s not the only time this has happened, but it’s the one that sticks with me the most, for obvious reasons!

The practice of signing every business up as a partner, effectively wiping out the margin potential of any actual partner, is rife at Dell and shows a complete contempt for the efforts of a partner to resell Dell solutions into large organisations. It is now generally acknowledged that reselling PCs is pointless, as you can often buy them cheaper on the public website than we can in “premier” pricing as partners. Even with servers it seems that unless you can break the £10k deal reg minimum, pricing is unlikely to be competitive with their regular deals or with what any reasonably sized business could get direct.

Then when you try to actually register a deal it’s often denied because one appears for that customer tied to their account manager – in one case I had one denied because the name was SIMILAR to another company’s at a completely different address! Dell deal registrations appear from this to be generic and long-lasting, not project-centric like Nutanix’s, although when filling out a Dell deal registration the form implies otherwise.

This isn’t a problem only I have experienced, either – I’ve had a lot of conversations with Dell partners who feel similarly aggrieved, including Enterprise ones.

So while I broadly agree with the direction Nutanix is taking – ultimately to a software-only SKU as long as you use HCL infrastructure – I am in the position of selling a product with a defined hardware price against what is basically now a competitor, present in my entire target market, with the ability to loss-lead its hardware to snag a deal.

Nutanix have informed me that everything, including from Dell, needs to go through the Nutanix deal reg protection system and therefore there will be no pricing advantages, but so many organisations have already purchased Dell equipment at some point that I imagine I have much tougher sells ahead of me; my Nutanix marketing may just lead to somebody picking up the phone to their Dell account manager, rather than responding to me.

I also have to wonder how Nutanix can guarantee that they can provide a level playing field to their direct partners, when a major partner has the ability to independently price hardware.

So I have to trust that my in-depth knowledge of the product – I run it in production, after all – my belief in and focus on Nutanix rather than a range of competing products to resell, and my solution-based approach to customer requirements will enable me to continue to get a return on the very heavy financial and personal commitment I and my business have made to this vendor.

Time will tell.


DISM Windows Server 2008 R2 Change Edition

This is useful to know when you hit that 32GB limit and don’t want to go to the trouble of migrating to a new server. Of course this isn’t an issue now in 2012.

Rick's Tech Gab

Hit a little issue in my lab today. It happens that I went ahead and installed Windows Server 2008 R2 Standard for a bunch of my Lab VM’s. Now the issue is that I need Windows Server 2008 R2 Enterprise Edition to support the Windows Failover Clustering feature.

So long story short, I didn’t want to have to fully rebuild my Lab VM’s. So I went looking around and found a very nice way to do an in-place upgrade to Enterprise Edition.

The command that we are going to use is DISM.exe (the Deployment Image Servicing and Management tool), which is available in Windows 7 and Windows Server 2008 R2. You can find out more about the tool HERE

  • First of all, go ahead and, on the server where you want to run this command, open up PowerShell as an administrator.
  • Click on the “Start Button”, type Power, PowerShell will then…

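The rest of the original post walks through the exact steps; for reference, the edition switch it describes comes down to three DISM commands run from that elevated prompt. A sketch only – the product key below is a placeholder, and you need to supply a valid Windows Server 2008 R2 Enterprise key (a KMS client setup key is commonly used here):

    # Check the current edition and which editions this install can move to
    DISM /online /Get-CurrentEdition
    DISM /online /Get-TargetEditions    # should list ServerEnterprise

    # Switch edition in place (placeholder key shown - substitute a valid one)
    DISM /online /Set-Edition:ServerEnterprise /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula

After the reboot it prompts for, the server reports itself as Enterprise Edition and the Failover Clustering feature can be installed.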


Nutanix – defending the hardware appliance in a “software defined” world


Article updated June 10th 2014 – scroll to Update section below for updated comments:

 

“Software defined” seems to be the latest buzz phrase doing the rounds: software-defined storage, networking and datacentres are hitting the marketing feeds and opinion pieces, as terms like Cloud are now considered mainstream and not leading edge enough for the technology writers and vendors looking for the next paradigm.

Because Nutanix supply their hyperconverged compute and storage solution with hardware there have been many comments that their product isn’t truly software defined; but it is, despite the hardware, and this is why.

In everything that they do Nutanix are a software company. Their product is the Nutanix Operating System (NOS), which forms part of the Virtual Computing Platform. They do not produce any custom hardware: everything that NOS runs on is commodity x86 kit – no custom ASICs, drives, NICs, boards, etc. The reason they provide hardware with their software solution is very simple – supportability and customer service.

I run a modest hosting company and, being extremely budget conscious (as in, I didn’t have any!), I looked for the cheapest route to market that I could, while still feeling somewhat secure about the service I provide. The problem is that this is a lot harder than you may think, and in the complex world of virtualisation hardware compatibility is still very much an issue; it may be abstracted away from the guest VMs, but the poor old infrastructure manager has it in spades.

Last year I had two problems that showed this in high relief:

The first was a BIOS issue we encountered soon after buying 4 identically specified Dell PowerEdge R715s in May 2013. It was not long after they entered production that we began seeing random PSoDs (the good old VMware PURPLE screen of death when it kernel panics) on these servers. Multiple calls to both Dell and VMware resulted in attempts at hardware replacement, before it surfaced that a bug had been introduced into the BIOS at version 3.0.4.

This took 3 months and two attempts to finally fix at 3.1.1, but even then we had some (different) PSoDs occurring at random. After another two months this was eventually traced to the SD cards we used to boot ESXi and an interaction with an SD card on the DRAC remote management card. Disabling the DRAC SD card stabilised the system, some 7 months after purchase.

The second was the need to go to SSD storage for workloads which were soaking our poor EqualLogic and other HDD SANs. Again budget was so limited that we turned to building a SuperMicro box, filling it with Samsung 840 Pro drives (not Enterprise drives at the time, though since reclassified in use as they are so brilliant at what they do), and putting StarWind (on Win2K8R2) across the top. This solution worked brilliantly and has never failed, but it is at the end of the day single controller by its nature – so it was always at the back of my mind that a nasty shock was waiting for me if a DIMM popped or even if Windows crashed (hardly unlikely in my experience).

Before you ask, no we couldn’t afford HA at that time – all those extra disks, the duplicate chassis, and increased StarWind license fees made it eye-watering. We have today created just such a beast in our new facility, and its HA nature now goes some way to assuaging the fact that it is still a home-grown all-flash SAN solution – just one with better uptime potential.

So when the opportunity arose to create a second site that our production services could move to, it was time to bite the bullet. After a number of investigations, including Maxta and SimpliVity (VSAN was still in beta at this time), I decided on Nutanix to deliver this new platform.

Now nobody can accuse Nutanix of being cheap – it is targeting the Enterprise space and I am definitely not Enterprise. In fact I may have one of the smallest companies on Earth with a production implementation of this; it took a significant bite out of my turnover, let alone my IT budget. However, what attracted me was that the hardware was fully tested and compliant, such that should a problem occur there would be no finger pointing from the “software” vendor and no wasted days before actually engaging support on the problem.

This, in a nutshell, is where the value lies in the Nutanix appliance. It is fully tested and certified to work with their software, so that should you call support they get straight onto what the software problem could be and don’t start blaming BIOS, firmware, RAID card types, NICs, drives, etc., etc., ad nauseam.

It may be that one day Nutanix has a software-only SKU available – it can be done technically today but isn’t on the price list – but they would have to enforce a rigorous HCL to make sure their incredible support levels weren’t diluted. In that case the hardware may not be as cheap to source independently as people reckon, and you would need third-party support on top (included for hardware and software with the appliance); and of course if Nutanix aren’t selling hardware they either take a hit on their bottom line or increase their software prices – nobody is in this for kudos, it’s business!

In summary, I chose Nutanix for one reason above the obvious raft of Enterprise features I was gaining: to be able to sleep at night knowing my home-grown or loosely cobbled-together hardware solution wasn’t going to go bump in the middle of the night.


 

Update:

A number of comments and articles have subsequently appeared (this being a prime example: http://www.theregister.co.uk/2014/06/09/hyper_converged_kit_what_for/) which this blog entry seems to have been pre-written to debunk. However, one point I didn’t make, because my tests came after posting this entry, was the time to deploy.

I took a bare, out-of-the-box Nutanix block, racked it, connected it to a 10G switch, and powered it up. I had their Foundation software already running on my laptop, so when the nodes came online they appeared in the Foundation console through IPv6 link-local automatic scanning.

I then hit the install button and over the course of the next 40 minutes all I had to do was watch a blue progress bar run from 0% to 100% – and at 100% I had a fully configured Nutanix cluster with running, base-configured (as in accessible) ESXi 5.5 hosts.

In the following 20 minutes I installed a vCenter 5.5 appliance on one of the hosts, did the base configuration on port 5480 after power-up, logged into vCenter with the VMware Client and added the hosts – so within 1 hour I had a functioning ESXi 5.5 cluster.

The Foundation software can do this for 5 blocks (20 nodes) simultaneously! So it’s reasonable to assume that even pretty large VMware clusters can be built out of the box in a day.

Can anybody else really do that?

There won’t be many.

Also, although this was VMware, the same can be done with Hyper-V, with Foundation installing and base-configuring Windows 2012 R2 in parallel on the nodes.

For KVM it’s even quicker, as the nodes come pre-installed from the factory and all you need to do is build the Nutanix cluster (in parallel), so theoretically you are looking at 20 minutes.


Carbon Positive Campaign!

Climate of Sophistry

“In a time of universal deceit, telling the truth is a revolutionary act.”  – George Orwell

The Carbon Positive Campaign

Please join the fun, exciting, life-affirming, environment benefiting, life-creating campaign for carbon dioxide!

The Carbon Positive Campaign!

Within 50 years, in the time of our children and grand-children, we will have improved the environment and created a lush, green, productive, life-filled planet Earth with all of the beneficial green plant food carbon dioxide we are re-adding and giving back to the environment.

The carbon that is currently trapped in hydrocarbon fuels used to be life!  It used to be carbon circulating in the global biospheric process of life, and sustained a lush, green planet, that could support huge creatures like dinosaurs in the past.  Today, that carbon-based life has been trapped underground and has formed hydrocarbon fuels that humans can access.  Humans get to use that old carbon, which…



Disaster Recovery – or saving your a$$

When disaster strikes have an escape plan!

If you are responsible for the data in your business, whether it be your job or your business, then it is your a$$ that is on the line should disaster strike.

Much has been written about business continuity and disaster recovery, but it is fair to say that in many – if not most – organisations it is way down the list of concerns, as it is perceived to be a low risk and therefore not worth the likely expenditure. However, you may be surprised – even quite shocked – at how a common hardware failure, in a certain combination, can invoke a disaster scenario; and if you are not prepared for it the consequences can be catastrophic for your business, and maybe even terminal for your job!

We provide disaster recovery services from remote sites into our data centre utilising VMware and Zerto replication software. We (or should I say an organisation we work with) have recently experienced just such a run of bad luck; while obviously redacting the name of the organisation involved, we feel that sharing this experience will serve to highlight just how far up the ladder of importance BC/DR should be in a world where IT is not just part of your business, it IS your business!

+++++

Company X had taken a considerable amount of time to be convinced that having an offsite DR solution would be beneficial to their business, and this had been highlighted by an application failure as a result of which, we were informed, they stood to lose six figures per DAY in revenue from such an outage. This in itself, we thought, would add some urgency to the remediation, but alas this is not always the straightforward outcome.

Unfortunately there then followed several weeks of contract negotiation in which the lawyers looked to get their pound of flesh nit-picking over contract clauses that had little material impact – insofar as we were concerned anyway. In fact some clauses were introduced that we felt actually benefitted us – go figure.

In any case the contract was duly signed with a live date of September 1st and we started the process of building the DR site and installing Zerto at the client site ready to replicate data. Indeed some test replication of data had already been completed in the first week after contract signature to prove the links and processes.

On the evening of August 7th three disks in the customer’s SAN simultaneously failed. It appears to have been a data error that threw the disks offline rather than a full hardware failure, but the effect was the total destruction of one of the four LUNs on the SAN. This LUN contained a number of virtual servers, unfortunately including the backup systems and vCenter server, as well as mail and other critical systems – such as their primary DB server.

So, three disks and the entire site was out. Due to the capacity issues (another story) on site there was little room to recover everything, even if the backup system could be recovered. The backup data was safely ensconced on a different piece of storage – but as it was a dedupe store, without the backup software it was not coming back any time soon. It was designed for specific data recovery, not disaster recovery.

First piece of luck: using Zerto we had replicated their main DB server to our site, and this contained not only much of their critical business data but also the vCenter database. Their application server was still up at the original site, so we managed to get their main app back online utilising a VPN between sites and a DNS change, just in time for a very important submission deadline the following day. Had the DB not been available offsite that deadline would have been missed – lesson 1.

Second piece of luck: we had also replicated their print server, which served a dozen remote depots and was configured with 80+ printers. Again all that was needed was to configure a new set of VPNs and make a DNS change, and everybody could print without having to wait for the print server to be recovered – lesson 2.

We were then free to rebuild the backup server, connect to the data repository and recover vCenter to make life a little easier managing what was left of their main site. This again was made so much easier because the database for it was already up and running – lesson 3.

The primary lesson, though, was that had the DR been in place on August 1st all we would have had to do was enact a failover, and the whole site would have been up and running just a few DNS changes later. They would have had no system outage, as the failure occurred overnight, and the ongoing problems recovering their 600GB mail server would not have been an issue. How much the nit-picking lawyers cost them, along with the time taken to act even once a decision had been made, is never likely to be truly calculated. All because of 3 lousy spinning disks; not a fire, not an earthquake, not an outbreak of war.

So the next time you consider disaster planning, perhaps you should consider what actually will put you in a disaster situation, and you may realise that the increased probability of it happening makes mitigation a much more realistic prospect.

It is not so much whether you can afford it as whether you can afford to ignore it; in the above case the implementation costs are likely to be a fraction of the resulting cost of the 3-disk barf – even including the extra out-of-contract work we had to do to get them back from the brink.

For more info on how we can help you plan your VMware BC/DR with Zerto replication software mail info@millennia.it today.


True VMware BC/DR with Zerto


One of the great advantages of virtualisation is the ability to dynamically create virtual servers quickly to meet a specific requirement. This had always been difficult with physical servers, and when the complexities of Business Continuity (BC) and Disaster Recovery (DR) were added the cost and resource requirements skyrocketed.

This often led to BC/DR being considered only by the largest organisations, which left smaller organisations dangerously exposed. It has long been touted that 90% of companies that suffer major outages and data loss fail to survive their disaster, with all the implications for business owners, employees, customers and suppliers alike.

You can’t afford the luxury of leaving your most critical business asset – data – to chance.

However, as many have found out, just virtualising your server estate does not introduce a panacea that solves your BC/DR headaches. While creating the virtual servers may be relatively easy, effectively transferring them to an alternate site in an emergency can still be a complex task, and getting users online quickly with minimal data loss is key to a successful BC/DR strategy.

The “Cloud”

The Cloud as a dynamic pool of resources has been around for a long time, although widely available commercial versions have been more recent arrivals to market. The Cloud gives an end user the opportunity to have a BC/DR system without having to invest in a second site with duplicate hardware and everything that goes with it (power, cooling, staff, etc.).

However, once again the existence of the resource does not automatically mean it is easy to incorporate with an end-user network for implementation of BC/DR services. The replication of data to the remote site needs to be managed and relatively easy to implement, or it becomes a further roadblock to uptake.

Introducing Zerto – the FIRST VMware vCloud aware BC/DR system

Zerto is a hypervisor-based data replication system that tightly integrates with VMware vSphere (version 4.0 and up) and is fully compatible with VMware vCloud Director (version 1.5 and up). This is something not even VMware have in their arsenal yet, although I don’t doubt Site Recovery Manager has designs on being number two. Zerto is multi-tenanted, so that service providers can provide DR services for multiple organisations and keep data safe and separate.

Our experience of Zerto is as part of our newly launched Cloud DR as a Service (DRaaS), a service which we honestly believe would have been too complicated and expensive to contemplate without this software set from Zerto.

[Diagram: an example customer-to-Cloud DR arrangement]

The diagram above is an example of a customer-to-Cloud DR arrangement. Note how the customer can run older versions of vSphere and still be compatible; as there is no storage replication involved the entire service is storage agnostic, and the storage doesn’t even need to be specified in the design – it just underpins where the VMs are located.

Zerto is deployed as several components as follows:

·         The Zerto Virtual Manager (ZVM) is deployed at each site and can either be a stand-alone VM or installed on the vCenter server (stand-alone is recommended in large environments). The ZVM is responsible for managing the replication system and the link to vCenter. It is accessed via a plugin to vCenter for single-pane-of-glass management.

·         The Virtual Replication Appliance (VRA) is an appliance installed on each host that is part of the protection system. It is bound to the host by affinity rules and starts and stops with the host during maintenance. This appliance intercepts the IO stream from a VM and splits it so that the IO is diverted to the remote site in real time. This avoids snapshots and scheduling and provides Continuous Data Protection (CDP) with journaling, enabling point-in-time VM recovery at any point up to 5 days prior to the failover invocation.

·         Optionally a Cloud Connector appliance can be deployed to allow integration with vCloud Director, so VMs can be migrated into and out of vDCs under the same management system. This is not required for vCenter to vCenter protection.

VMs can be replicated individually or as Virtual Protection Groups (VPGs), which allow application-aware grouping of VMs and provide a consistent point of recovery (a web application and its database server, for example).

The screenshot below shows the Zerto vCenter plugin during one of our testing phases.

[Screenshot: the Zerto vCenter plugin]

Here you can see that replication can be bi-directional, and although there are only two sites shown here there can be multiple sites in the replication network, replicating in any direction and to any site as required.

Look closely and you will see that the Recovery Point Objective (RPO) is in the order of a few seconds. Many DR solutions would be happy with an RPO of 15 minutes, but this is far better than that. Recovery Time Objectives (RTO) are in the order of minutes; in our tests we have recorded as little as 43 seconds to have a protected VM booted up 90 miles away in a second DC and ready to log on. However, an RTO of 5-15 minutes would be a more likely target, depending on the number of VMs to be recovered.

In the Cloud the sites can be masked so that a customer can only see the vCloud resources they have access to, and not the network of any other customer. This provides the multi-tenant support that is very important to service providers like ourselves.

WAN compression is also available to minimise the data being sent on a slower WAN link to a remote site.

Our experience of Zerto

As part of the 14-day trial Zerto allow, they hand over a DR test plan that details all the scenarios possible with Zerto, to ensure everything works and all features are explored. It is during this testing that we discovered a range of features that exceeded even the expectations we had developed during an earlier demo of the product. This is rare in our experience, as demos often hide workarounds and features that are more roadmap than they lead you to believe.

The features available include:

·         Controlled failover for “Disaster Avoidance”. Not all disasters are unexpected, and indeed some – such as major weather disruption – can be seen days in advance. Even major maintenance slots could be seen as a disaster to an always-on application, so it is good to do a controlled migration to the alternate site so users can keep working. Zerto allows for a controlled move that can automatically re-protect to the original site so it can move back after the danger has passed. The move can be rolled back if there is a problem in implementation (say a network connection doesn’t come up due to a configuration problem) or committed to make the alternate site live. A scheduled move deletes the original VM, as it is not a disaster and the alternate site becomes the primary – hence the importance of the rollback/commit options.

·         Test failover without disruption of live service. A test failover can be done at any time, multiple times, to test that the failover mechanism works, and this will not affect the protection state of the live VM. Zerto can be configured to disconnect the network of the test VM so that it does not cause issues by connecting to the live network. When a test is completed the test VM is simply destroyed.

·         Remote cloning of a VM. At any time, and without disruption to the performance of the live VM, a clone can be taken from a point in time and recovered at the remote site in a matter of seconds. This allows for helpdesk investigation and can mitigate data corruption issues without downtime to the main system. It also enables spawning test systems from live copies of an application to help in tracking down issues.

·         VSS support. By means of a VSS agent on the VM, the normally crash-consistent VM recovery can be made application-consistent for applications such as databases and mail systems.

·         Pre-seeding. If a VM is terabytes in size, replicating the initial copy over the WAN could be very difficult. It is possible to take a copy of the VM on portable media and transfer it to the remote site. This base copy is then used to apply a delta synchronisation, for a much quicker initial sync of a large virtual server.

·         Full vMotion and Storage vMotion compatibility. VMs can be moved from one host to another while being protected and this does not affect the ability of Zerto to protect the VM. Similarly a VM disk can be extended and Zerto will automatically extend the disk of the replica to maintain compatibility.

·         Manual checkpoints. As well as recovering to a specific point in time, the user can define a manual checkpoint across all VMs and then recover to that checkpoint to guarantee a consistent recovery point. These checkpoints can be VSS-aware if the VM has the agent installed.

·         Dynamic networking. The IP of the recovered VM can be changed to suit the new network on which it resides during the recovery process. This is automatic and allows the VM to be online as quickly as possible in a failover situation. Boot orders can also be defined so that, for example, a DB server doesn’t come up after an application server that needs to connect to it. However, if IP addresses have changed care must be taken; it is advised that DNS is used to locate the DB server and the appropriate DNS server entries made in the network settings of the recovery VM. That way the IP change can be handled by the application through DNS lookups – one way to script that DNS change is sketched below.
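As a rough illustration of that last point – standard Windows tooling rather than anything Zerto-specific, and with every name below hypothetical – a failover runbook could repoint the DB server’s A record like this:

    # Hypothetical example: dc01/corp.local/sqldb01 are stand-in names, and
    # 10.20.0.15 stands in for the IP the recovered VM comes up with.
    $dnsServer = "dc01.corp.local"   # authoritative DNS server at the recovery site
    $zone      = "corp.local"
    $record    = "sqldb01"           # hostname the application uses to find the DB
    $newIp     = "10.20.0.15"

    # Remove the stale record, then add the new one (dnscmd ships with Windows Server)
    dnscmd $dnsServer /RecordDelete $zone $record A /f
    dnscmd $dnsServer /RecordAdd $zone $record A $newIp

    # Flush the local resolver cache so this machine sees the change immediately
    ipconfig /flushdns

Keeping the TTL short on records like this is worth considering, so that clients elsewhere pick the change up quickly as their cached lookups expire.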

Performance and Recovery Objectives

Because Zerto takes a copy of the data in real time via the VRA, you do not get the overhead of constantly snapshotting a VM to enable replication of its disk. Once the data hits the protected VM’s disk it is not affected in any way by its protection state. Depending on the amount of data being written to the protected VM there may be a delay in the IO stream while it is being replicated, so it is important in busy environments to monitor VM performance and make sure the VRAs have enough resources so that they don’t become a bottleneck on the protected system.
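One simple way to keep an eye on that – a sketch using VMware PowerCLI rather than any Zerto tooling, and assuming the appliances follow the usual “Z-VRA-*” naming (adjust the pattern and the vCenter name to suit your environment) – is to sample recent CPU load on each appliance:

    # Requires VMware PowerCLI; vcenter.corp.local is a hypothetical name
    Connect-VIServer -Server vcenter.corp.local

    # Average the last ten real-time CPU samples for each replication appliance
    Get-VM -Name "Z-VRA-*" | ForEach-Object {
        $samples = $_ | Get-Stat -Stat cpu.usage.average -Realtime -MaxSamples 10
        $avg = ($samples | Measure-Object -Property Value -Average).Average
        "{0}: average CPU {1:N1}%" -f $_.Name, $avg
    }

Sustained high figures here would suggest giving the VRAs more resources before the protected VMs start to feel it.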

We generally see RPOs in the order of 10 seconds or less on our systems, and RTOs never more than 6 minutes in testing. This, combined with the ease of deployments and management of the Zerto system, makes BC/DR finally feasible for the SME.

Millennia® are part of the Zerto Cloud EcoSystem (ZCE) – http://www.zerto.com/news-events/press-releases/33-cloud-service-providers-join-zerto-cloud-disaster-recovery-ecosystem/ – and provide both DR-to-Cloud services and consultancy for Enterprise site-to-site BC/DR utilising VMware vSphere and Zerto. For more info visit our site at http://www.millennia.it/zerto.aspx and drop us a line.


The Great Inflation Lie Laid Bare


“Official” statistics now put Consumer Price Inflation (CPI) at around 4.5% in the UK and a little under 4% in the US as measured by the CPI-U data. However, no matter who you talk to, they can never quite understand why these published figures never seem to match the relentless upward march in prices they see in the real world.

It is almost impossible to track how prices have changed over the years, because successive Governments on both sides of the pond have continually tinkered with the figures, generally in the downward direction.

This is beneficial for the debt-laden Governments because they are able to hide real world inflation away from the masses while quietly inflating away their debt bubbles. This, as ever, rewards the profligate at the expense of the prudent and results in the purchasing power of fiat paper currency being eroded more rapidly than one would care to consider.

The Gold Standard

In order to clearly demonstrate how far we have diverged from reality in the last 30 years, we merely have to look at precious metals such as gold and silver. These both had a big run-up in prices in the turbulent 70s, and only really perform well in times of inflation and uncertainty. Because of constant tinkering with the former and copious BS from central bankers making us think the latter was a thing of the past, these commodities basically flat-lined until just a few years ago.

In fact the most famous indicator of the turning point was the sale of 395 tons of gold by Gordon Brown between 1999 and 2002. The price has never been lower since that time, and it also coincided with the Great Top in western stock markets.

Gold has since made huge gains from the $250 – $300 an ounce Gordon sold at, recently topping $1,900 an ounce. This has led many to say a new all-time top has been reached, but in terms of inflation-adjusted real value is this actually the case? This is where a comparison between real and “official” inflation becomes so stark.

Because gold is priced in US$ I’m going to use inflation statistics of the US to indicate the point. Official figures for CPI (CPI-U) are available from the Bureau of Labor Statistics, but an alternative “real” CPI has also been produced by John Williams at ShadowStats.

ShadowStats basically takes the CPI as it was during Jimmy Carter’s tenure as president, and continues to calculate CPI in the same way as it was done back then. No fiddling, no “adjustments”, just the same base calculations year on year. This allows us to compare where gold may peak if it were to reach the same inflation-adjusted price as it did in 1980, when it briefly hit $850 an ounce.

Where are we now?

If you take the official CPI-U figures from then to now, you arrive at a figure of around $2,300 – $2,400 per ounce. This is pretty near to where we have got to, and certainly seems to back the calls for a top in the gold market. However, when the ShadowStats CPI figures are applied to the peak gold price of 1980, it shows that to equal this record top today the price would have to exceed $15,000 an ounce!
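To make the official-CPI arithmetic concrete, here is the back-of-envelope version (a sketch – the index values are approximate BLS CPI-U figures, 1982-84 = 100):

    # Inflation-adjust the 1980 peaks using the official CPI-U ratio
    $cpiJan1980 = 77.8    # January 1980, when gold briefly hit $850/oz
    $cpi2011    = 225.7   # approximate mid-2011 level

    $gold = 850 * ($cpi2011 / $cpiJan1980)
    'Gold 1980 peak in 2011 dollars: ${0:N0}/oz' -f $gold      # ~$2,466

    $silver = 50 * ($cpi2011 / $cpiJan1980)
    'Silver 1980 peak in 2011 dollars: ${0:N0}/oz' -f $silver  # ~$145

Substitute the much larger ShadowStats index ratio into the same sum and you get the $15,000 figure above.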

From this it is not hard to see where the purchasing power of your savings has gone: on the ShadowStats measure your money is worth merely a seventh of what even the US Govt’s own figures admit remains after the erosion of the last 30 years.

So it’s not hard to see why houses have leapt from £10,000 to £200,000 in that time, and it does clearly show that deposits in a bank, whatever the headline interest, bleed away into the debt mountain the Govt creates over time. At the end of the day capitalism is designed for the few, to be paid for by the many.

All that Glisters…

Despite all the hype around gold, perhaps the biggest move yet to be made is by its poor relation, silver. Silver also participated in the speculation and by early 1980 reached nearly $50 an ounce. It is unlikely that such an equivalent price would be reached again today, as this was done on huge margin, and the resulting unwinding in 1980 led to the price crashing – wiping out the Hunt brothers in the process on Silver Thursday (27 March 1980).

However, the price is currently sitting around $40 again, so on an inflation-adjusted basis just how far COULD it run?

Well, based on CPI-U this would be around $130 to $140 an ounce – more than triple where it is currently – and on the ShadowStats index it would be approaching $350 an ounce.

So the “poor man’s gold” may yet have its day.

In conclusion

Never, EVER, believe predictions of deflation – unless it is consumer electronics or something similar that benefits from technological development and mass production. Deflation is an invented instrument used by central banks and governments to hide their practice of inflating away the ever-increasing debt pile. Unfortunately even this scheme cannot last forever, and something really dramatic will be needed to dig us out of the hole we’ve all collectively stuck our heads in.
