Tag Archives: IDC

Storage stronger than your average sector


Bucking economic trends as recently as September

Storage Station noted that Gartner and IDC are both seeing about 10% growth in storage revenue in the third quarter of 2008. That’s recent evidence that storage continues to resist the broader economic slowdown.

While it’s clearly not a boom time given the layoffs announced by EMC, Pillar, WD, Hutchinson and others, it’s a relatively positive trend.

These days we take what we can get.

The Cloud is purifying storage

Cloud storage companies are peeling away everything but the bytes

More learnings from the IDC Enterprise Disk Storage Consumption Model report

Since 2005, the Cloud’s share of storage capacity has grown from less than 5% to almost 20%.  Yet the Cloud’s share of storage revenue has risen to only about 5% in the same time period. 
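A quick back-of-the-envelope check (my own arithmetic on the rounded shares above, not IDC’s model) shows why that gap matters: if the Cloud holds roughly 20% of the capacity but collects only about 5% of the revenue, its spend per byte is about a quarter of the market average.

```python
# Back-of-the-envelope using the rounded shares quoted above (not IDC's model).
cloud_capacity_share = 0.20   # ~20% of worldwide storage capacity
cloud_revenue_share = 0.05    # ~5% of worldwide storage revenue

# Storage spend per byte, relative to the overall market average.
relative_spend_per_byte = cloud_revenue_share / cloud_capacity_share
print(f"Cloud spend per byte vs. the average: {relative_spend_per_byte:.2f}x")  # -> 0.25x
```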

How can “Content Depots”, as IDC calls them, build data centers for one fourth the cost (storage-wise) of conventional corporate data centers?

They use more high-capacity SATA drives, but that’s only part of the answer.  Rather than using off-the-shelf enterprise storage systems, they are often building their own storage system/server contraptions to strip away anything that doesn’t add value as a big bucket for internet bits.  

Granted, these systems aren’t going to support your everyday ERP system.  But these changes should be watched closely.  Innovation by these end-users will migrate back to the rest of the industry to give us all more bytes per buck. 

It already is – you can see it in many of the latest and greatest storage upstarts’ products.

The rise of the Sterver

Are storage systems and servers becoming one and the same?

In its August Storage Consumption Model report, IDC’s Richard Villars talks about several intriguing trends that are changing the way storage is designed, distributed and used.

The one that caught my eye is what he calls the “serverization” of storage platforms – the addition of computing power to storage systems, especially for mass-scale clustered storage data factories.

A parallel trend is that mainstream server storage capacity is growing incredibly fast.  Storage capacity (often measured in terabytes) now stands next to processing power as a cornerstone specification for most servers.

It raises the question: is the difference between a server and a storage system becoming insignificant?

I think we need a new name for systems that blur the line. Let’s call them Stervers.

Thoughts?  Is this a substantial shift, or just a reinvention of the same old processing/storage partnership?

SATA drives may have peaked in the enterprise

SAS drives get bigger and smaller to take share from SATA for business applications

IDC data (via InfoStor) shows that this year and next are the golden age of SATA drives in the enterprise.

It’s not that demand for high-capacity storage abates in the future; it’s that SAS drives are expanding their capabilities to replace SATA in many applications.

Why settle for an interface originally designed for PCs if you can get the same thing in SAS for a little bit more?

SATA drives won’t go away, of course – they still provide the most capacity for the dollar.  If they’re good enough for an application, people will continue to use them.

Have you made the jump to SAS?  Why or why not?

What’s your digital footprint?

You generate much more information than the files and messages you create

[Image: EMC’s digital footprint calculator]

EMC now has a digital footprint calculator on its website.  It estimates how much information is created, stored and replicated by one’s daily life.  It’s an eye-opening exercise that points out the myriad ways we each generate digital information, far beyond the obvious PowerPoint files, emails and digital movies.

Similarities to Carbon Footprint 

The digital footprint has obvious similarities to the carbon footprint concept, and the two are directly related: storing all the information I cause to exist in the world takes power.  Another similarity: much of the information created on my behalf was not created at my bidding.

I highly recommend the related IDC-EMC forecast of worldwide information growth through 2011.

According to EMC, I’m generating about 8 GB a day – over half a TB so far in 2008.  Beth Pariseau’s and Chuck Hollis’s footprints are here.
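For scale, the arithmetic behind that figure is simple (the elapsed-day count is my assumption about when this was written, not something EMC publishes):

```python
# Rough check of the footprint numbers above; the day count is an assumption.
daily_footprint_gb = 8            # EMC's estimate of my daily digital footprint
days_elapsed_in_2008 = 70         # assumed: written in early March 2008
total_gb = daily_footprint_gb * days_elapsed_in_2008
print(f"{total_gb} GB so far, about {total_gb / 1000:.2f} TB")  # -> 560 GB, ~0.56 TB
```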

What’s your digital footprint?  Comment back and we’ll compare notes.

Don’t be a copycat

Deduplication prevents businesses from repeating themselves

Clever marketing from Overland Storage: contrasting their de-duplication solution with a copy machine.  For me it brings home the essence of what de-dupe is all about.

[Image: Overland Storage’s copy-machine ad]

As I posted yesterday, a major headache and expense for business data protection today is redundancy – copying the same files over and over and over again.  Deduplication is one of those technologies whose value is pretty easy to explain: it stores only one copy of everything, reducing the capacity required by 10 to 20X.
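To make the idea concrete, here is a minimal sketch of block-level deduplication: hash each chunk of data and keep only one copy of each unique chunk.  It illustrates the general technique, not Overland’s (or any vendor’s) actual implementation, and the fixed 4 KB chunk size is just for the example.

```python
import hashlib

def dedupe(chunks):
    """Keep one copy of each unique chunk; return the store plus a rebuild recipe."""
    store = {}    # hash -> chunk data (the single stored copy)
    recipe = []   # ordered list of hashes needed to rebuild the original stream
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk      # first time we've seen this data
        recipe.append(digest)          # a duplicate only adds a tiny reference
    return store, recipe

# The same 4 KB block backed up 20 times consumes the space of a single block.
chunks = [b"x" * 4096] * 20
store, recipe = dedupe(chunks)
print(len(recipe), "chunks referenced,", len(store), "actually stored")  # 20 referenced, 1 stored
```

Real products typically chunk data more cleverly and keep the hash index on disk, but the savings come from the same store-once, reference-many-times idea.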

Management is limited, data is not 

Rather than simply reducing capacity requirements, deduplication frees up room for more data to be created, used, saved and distributed.  Storage demand is not limited by the amount of data created, but by the ability of consumers and businesses to manage it effectively.

Case in point: according to EMC and IDC, 2007 was the first year that the data generated and replicated in the world exceeded the storage available to keep it. 

Less data leads to more storage 

I’ll say that again: Deduplication leads to more storage.  Agree or disagree?  Tell me why.