
Server sales increase due to cloud expansions!

Server sales increased in the first quarter, mainly due to continued investment in the hyperscale server infrastructure that powers public and private clouds. This was revealed in a Gartner report released this week, which also noted that first-quarter server shipments grew 13% year on year to 2.7 million units, while revenue grew by almost 18% to $13.4 billion.

The Gartner report also attributed the growth to strong demand from the so-called hyperscale segment in the US.

Hyperscale is a term used to describe distributed systems that use thousands of servers to power cloud and big data infrastructures.

The growth in server sales was driven by increased sales of rack-optimized, blade, density-optimized and tower servers.

HP and Dell emerged as the winners in both the revenue and volume rankings. By revenue, IBM stood in third position, followed by Lenovo and Cisco Systems, while in unit shipments Huawei Technologies took fourth place and Inspur Electronics was the big surprise in fifth, according to Gartner's latest server sales report.

Lenovo, which had been inching toward second position in last year's report from the same research firm, met with disappointment this year; its numbers were heavily affected by the acquisition of IBM's x86 server business.

Research finds SSDs struggle in virtual server environments!

Solid-state drives are a hit in business storage applications because they consume less power and deliver far higher performance than spinning disks. For the same reason, these drives are increasingly used to serve many simultaneous I/O requests or to host several virtual machines at once.

But new research shows that consumer SSDs struggle to sustain these workloads over the long term because of their garbage-collection algorithms. The findings come from a whitepaper published through USENIX, written by Jaeho Kim and Donghee Lee of the University of Seoul together with Sam H. Noh of Hongik University.

It is well known that running multiple virtual servers on a hard drive leads to performance degradation, as each virtual machine competes for access to a single spinning disk. RAID can alleviate the problem to an extent, but using multiple disks introduces further difficulties.

An SSD can prove handy in these situations, since it handles such demanding workloads with ease. But researchers have recently found that the garbage-collection routines running on modern solid-state drives actually make them a poor fit for these virtual workloads.

The problem arises in two ways. First, as with most workloads, an SSD's performance degrades as the drive ages; garbage collection and related algorithms are specifically designed to restore that performance. But the researchers report that garbage collection runs into trouble precisely because of the disconnect between what the SSD sees as idle blocks ripe for collection and the needs of the guest VMs running on top of the host OS, which ultimately lives on the SSD. Second, data from concurrent virtual machines gets mixed together inside the drive: pages belonging to different VMs can sit side by side within the same block of NAND flash, so garbage collection on that block can impact all of the virtual machine sessions simultaneously.
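To make the data-mixing point concrete, here is a minimal sketch (my own illustration, not code from the whitepaper) of an SSD whose NAND blocks fill with pages from whichever VM happens to be writing; when garbage collection later picks a victim block, every VM with pages in that block shares the cost.

```python
import random
from collections import Counter

PAGES_PER_BLOCK = 64   # toy geometry for the sketch; real drives differ
NUM_BLOCKS = 16
VMS = ["vm1", "vm2", "vm3", "vm4"]

# Concurrent VMs write through the same drive, so pages from different
# VMs end up interleaved inside the same NAND blocks.
blocks = [[random.choice(VMS) for _ in range(PAGES_PER_BLOCK)]
          for _ in range(NUM_BLOCKS)]

# When garbage collection picks a victim block, every VM that still has
# live pages in it pays the relocation cost.
victim = random.choice(blocks)
affected = Counter(victim)
print("VMs hit by collecting one block:", dict(affected))
# Typically all four VMs appear, which is why garbage collection on a
# shared consumer SSD cannot isolate one tenant's I/O from another's.
```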

The whitepaper's authors conclude that a guaranteed number of IOPS is not possible when multiple VMs are hosted on a single consumer SSD. While they did not test multiple SSDs or the high-end commercial offerings from Intel and other vendors, nothing about the architecture of those drives suggests they would be unaffected. The absolute level of performance might be much higher, but the same tendency to mix I/O from different VM sessions at the drive level would still plague the results.

What’s the solution then?

For those looking for a way out, the researchers' answer is that better SSD controllers are what will drive performance improvements in the future.

Hard drive controllers do not track nearly as many variables as SSD controllers do. An SSD controller manages 8-16 parallel channels to NAND and makes decisions about garbage collection, wear leveling, and error recovery, so maximizing the performance of that internal interface means juggling many variables in real time.

In recent times, there has been a lot of buzz about long-term barriers to NAND flash scaling. Unlike processors, which historically got faster as they moved to smaller process nodes, NAND flash tends to get slightly slower when shrunk to smaller geometries. Fewer electrons trapped per cell means that some of the compensatory mechanisms slow things down a bit, as does increased error checking.

The performance increases seen in successive SSD generations, leaving aside Samsung's 3D NAND, have come from improved SSD controllers, from the use of SLC blocks to speed data transfers, and from increased drive bandwidth.

So the researchers in this whitepaper recommend controllers designed with VMs in mind, in which specific, dedicated blocks of NAND flash within the same physical drive are set aside for each individual VM. They claim that, in theory, a future drive controller could offer that kind of fine-grained capability, though it would likely be reserved for commercial hardware for obvious reasons.
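As a rough sketch of the idea under stated assumptions (the class, its names and its geometry are hypothetical illustrations, not the paper's design), a flash translation layer that partitions blocks per VM would keep one tenant's garbage collection from touching another's data:

```python
# Hypothetical illustration of a VM-aware flash translation layer:
# each VM gets its own pool of NAND blocks, so garbage collection in
# one pool never relocates another VM's pages.

class PartitionedFTL:
    def __init__(self, vm_ids, blocks_per_vm):
        # Dedicated block pools, one per VM.
        self.pools = {vm: [[] for _ in range(blocks_per_vm)] for vm in vm_ids}

    def write(self, vm, page):
        # Writes land only in the owning VM's pool.
        pool = self.pools[vm]
        target = min(pool, key=len)      # simplistic placement policy
        target.append(page)

    def garbage_collect(self, vm):
        # Compaction is scoped to a single VM's blocks, so other
        # tenants see no GC-induced latency from this call.
        pool = self.pools[vm]
        live = [page for block in pool for page in block]
        for block in pool:
            block.clear()
        for i, page in enumerate(live):
            pool[i % len(pool)].append(page)


ftl = PartitionedFTL(["vm1", "vm2"], blocks_per_vm=4)
ftl.write("vm1", "page-a")
ftl.write("vm2", "page-b")
ftl.garbage_collect("vm1")   # vm2's data is untouched
```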

In the end, the whitepaper's claim is that better performance for virtual machines will come from better SSD controllers, not from boosting drive bandwidth.

For more details, follow the whitepaper link: SSDs struggle on virtual servers due to garbage collection.

Google Photos to provide free unlimited cloud storage for storing photos and videos

Google, the internet juggernaut, announced some sensational news at its I/O 2015 keynote: the company is launching a new service, Google Photos, which will offer users free unlimited cloud storage for both photos and videos.

The free unlimited cloud storage comes with a caveat: photos are capped at 16 megapixels, while video is limited to 1080p footage.

Google Photos rolls out on Friday, May 29, 2015 on Android, iOS and the web. Once the application is installed, all your photos will sync to the cloud from your phone, tablet or even a PC webcam.

If you need storage for images over 16 megapixels, you can store them in your Google Drive account instead.
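As a rough illustration of that caveat (a hypothetical helper, not Google's actual logic), the free tier boils down to a simple resolution check:

```python
# Illustrative check of the free-tier rule described above: up to 16 MP
# for photos, up to 1080p for video. The thresholds come from the
# announcement; the function itself is an assumption for the sketch.

MAX_PHOTO_MEGAPIXELS = 16
MAX_VIDEO_HEIGHT = 1080  # 1080p

def fits_free_tier(kind, width, height):
    if kind == "photo":
        megapixels = (width * height) / 1_000_000
        return megapixels <= MAX_PHOTO_MEGAPIXELS
    if kind == "video":
        return min(width, height) <= MAX_VIDEO_HEIGHT
    raise ValueError("unknown media kind")

print(fits_free_tier("photo", 4920, 3264))   # ~16.1 MP -> False, goes to Drive
print(fits_free_tier("video", 1920, 1080))   # 1080p    -> True, stored free
```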

For starters, Google Photos has an impressive pinch-to-zoom feature on smartphones, letting the user quickly move from single photos to thumbnails representing days, weeks, months or even years.

Photos on the platform are tagged automatically, organizing your library by person, place or object. With the search tool and its underlying algorithms, photos can be found by name, location or image recognition. Type 'snowstorm in Milwaukee', for example, and all photos matching that query, in whole or in part, are listed.

In a similar fashion, videos can also be stored on this platform, and can be edited in an automated way.

Sharing is also simple on this platform. The user can select a single photo, or tap and drag to select multiple images, then send a link to anyone, whether or not they have Google Photos. The recipient just clicks the link and is taken to a webpage displaying those photos.

What's more, if the recipient does have Google Photos, a single tap saves those photos to their own library. The service also allows sharing to Facebook and Twitter.

You can start using the service with your existing Google username and password.

 

Sony to develop Blu-ray-filled cold storage used by Facebook!

Sony has announced that it is set to develop cold storage systems based on Blu-ray discs, of the kind Facebook currently uses for storing cold data. To strengthen its position in this segment, Sony has acquired Optical Archive, a start-up that spun out of Facebook and uses Blu-ray disc technology in an innovative way.

Optical Archive has demonstrated a storage system that packs 1 petabyte of data into a single cabinet filled with 10,000 Blu-ray optical discs. The solution also employs a robotic retrieval system similar to those used to pull tapes from archival storage units. Cold storage was never the original purpose of Blu-ray discs, but it has given them a renewed one.
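The arithmetic behind that figure is simple, and a quick check (assuming decimal units) shows it implies 100 GB per disc, consistent with high-capacity BDXL-class Blu-ray media:

```python
# Quick check of the cabinet arithmetic quoted above.
PETABYTE_GB = 1_000_000        # 1 PB expressed in GB (decimal units)
DISCS_PER_CABINET = 10_000

gb_per_disc = PETABYTE_GB / DISCS_PER_CABINET
print(f"{gb_per_disc:.0f} GB per disc")   # 100 GB, i.e. BDXL-class media
```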

Blu-ray is not ideal for primary storage, however, because data cannot be retrieved instantly; it is better suited to applications where stored data is rarely accessed.

Facebook recently announced that it is seeing cost savings of up to 50% compared with hard disks, and that the system consumes 80% less energy than its conventional cold storage racks.

As per the available records, each disc is certified to retain data for 50 years and copes comfortably with the modest I/O demands of cold storage.

Having acquired Optical Archive, Sony now wants to take the technology further, using the same media to build storage systems that accommodate even more data.

More details are awaited!

 

IBM reports that the cost of a single data breach has risen to $3.8 million

IBM, in association with the Ponemon Institute, has reported that the average cost of a single data breach has reached $3.79 million, a 23% rise. The report also puts the per-record cost of a data breach at $154 this year, up from last year's $145.

Lost business was a significant, and growing, part of the total cost of a data breach. According to the report, higher customer turnover, increased customer-acquisition costs, and damage to reputation and goodwill added up to $1.57 million per company, up from $1.33 million the previous year.

The Ponemon Institute report was prepared after analyzing the results from 350 companies in 11 countries, each of which had suffered a breach over the past year.

Data breach costs varied by country. The US had the highest per-record cost at $217, followed by Germany at $211, while India was at the low end at $56 per record.

When the results were broken down by industry, the highest costs were in healthcare, at an average of $363 per record.

The following factors influenced breach costs:

  • Having an incident response team in place before a breach reduced the per-record cost by $12.60.
  • Extensive use of encryption reduced costs by $12 per record, and employee training on data breaches and their business impact reduced them by $8.
  • If business continuity management personnel were part of the incident response team, costs fell by $7.10.
  • CISO leadership lowered costs by $5.60, and board involvement cut them by $5.50.
  • Cyber insurance, now in great demand in the IT sector, also featured in the report: its presence cut costs by $4.40.

Some factors increased costs: the need to bring in outside consultants added $4.50 per record, and lost or stolen devices added an average of $9 per record.

The single biggest factor increasing costs was the involvement of a third party in the cause of a breach, which added $16 per record, raising the average from the $154 reported above to $170.
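To see how these per-record deltas combine, here is a small worked example; treating the adjustments as simply additive against the $154 average is my own simplification for illustration, not the report's methodology.

```python
# Worked example using the per-record figures quoted above.

AVERAGE_PER_RECORD = 154.00

adjustments = {
    "incident response team in place": -12.60,
    "extensive encryption":            -12.00,
    "third party involved in breach":  +16.00,
    "lost or stolen devices":           +9.00,
}

cost = AVERAGE_PER_RECORD + sum(adjustments.values())
print(f"Adjusted per-record cost: ${cost:.2f}")        # $154.40
print(f"For 10,000 records: ${cost * 10_000:,.0f}")    # ~$1.54 million
```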

IBM's data breach report, prepared in association with the Ponemon Institute, also found that costs rise with the time it takes to detect and contain a breach. It illustrates this with the following figures:

  • The study found that it took respondents 256 days on average to spot a breach caused by a malicious attacker, and 82 days to contain it.
  • Breaches caused by system glitches took 173 days to spot and 60 days to contain.
  • And those caused by human error took an average of 158 days to notice, and 57 days to contain.

For more details, see the IBM Data Breach Report 2015.

Australian Parliament seeks better backup and disaster recovery services!

The Australian Department of Parliamentary Services has said it needs to improve disaster recovery and data management services on its premises. Following a power outage that led to a minor loss of data, the Parliament House data center has issued a notification seeking sophisticated, up-to-date solutions in this area.

In the Budget, the agency was allocated AU$3.031 million to improve IT security in parliament, plus an additional AU$7.7 million for improved network and IT security for the electorate offices of members of parliament.

The agency is also looking at funding for disaster recovery management in order to improve disaster recovery capabilities.

The available IT funding will be used to improve IT infrastructure and services on the premises.

A spokesperson for the department said the power outage occurred on Friday night during scheduled maintenance of power backup systems. Although it was a planned service disruption, sources say some data on the network was lost.

To avoid such situations in the future, the Australian Parliament is ready to invest in the latest backup and disaster recovery technology for its IT storage.

Market for Cloud Storage Gateways dazzles due to bouquet of benefits!

Cloud storage gateways are software- or hardware-based appliances located on customer premises that serve as a bridge between local applications and remote cloud-based storage. These gateways provide basic protocol translation and simple connectivity, allowing otherwise incompatible technologies to communicate transparently. A gateway can make cloud storage appear as a NAS filer, a block storage array, a backup target or even an extension of the application itself.

The need for a bridge between a cloud storage system and an enterprise application arose from the incompatibility between the protocols used by public cloud services and those used by legacy storage systems.
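As a minimal sketch of that bridging (an illustration only; CloudObjectStore is a stand-in for a real object store back end such as S3, not any vendor's actual API), the gateway exposes file-style calls locally and translates them into object PUT/GET requests:

```python
# Minimal sketch of a cloud storage gateway's protocol translation:
# local applications see file-style save/load calls, while the gateway
# translates them into object-store PUT/GET requests.

class CloudObjectStore:
    """Stand-in for a remote object store; not a real service API."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class CloudStorageGateway:
    """Presents cloud object storage to local apps as a simple filer."""
    def __init__(self, backend, bucket):
        self.backend = backend
        self.bucket = bucket

    def save_file(self, path, data):
        # Translate a local "write file" into an object PUT.
        self.backend.put(f"{self.bucket}/{path}", data)

    def read_file(self, path):
        # Translate a local "read file" into an object GET.
        return self.backend.get(f"{self.bucket}/{path}")


gateway = CloudStorageGateway(CloudObjectStore(), bucket="office-share")
gateway.save_file("reports/q1.docx", b"quarterly report contents")
print(gateway.read_file("reports/q1.docx"))
```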

Benefits of using these cloud storage gateways

Cloud storage gateways can be used for archiving to cloud platforms. This fits with the concept of automated storage tiering, in which data is moved or replicated between fast local disk and cheaper cloud storage to balance space, cost and data availability requirements.
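A sketch of what such a tiering policy might look like, under assumed parameters (the 30-day cut-off and the archive object are hypothetical; any client with a put method, such as the gateway sketch above, would do):

```python
# Illustrative tiering policy: files untouched for longer than a
# cut-off are migrated from fast local disk to cheaper cloud storage.
# The threshold and helper names are assumptions for the sketch.

import os
import time

CUTOFF_DAYS = 30

def tier_cold_files(local_dir, archive):
    """Move files not accessed within CUTOFF_DAYS to the archive tier."""
    now = time.time()
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        if not os.path.isfile(path):
            continue
        idle_days = (now - os.path.getatime(path)) / 86400
        if idle_days > CUTOFF_DAYS:
            with open(path, "rb") as f:
                archive.put(name, f.read())   # replicate to cloud tier
            os.remove(path)                   # reclaim local capacity

# Usage (hypothetical paths and archive client):
# tier_cold_files("/mnt/fastdisk/projects", cloud_archive)
```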

Companies now trust cloud storage vendors more, because the service logic of these platforms is maturing, and other use cases are emerging as a result. One is collaboration, where data is replicated via a cloud storage gateway to a cloud-based storage location for incorporation into a team's online workflow. Interesting, isn't it?

Cloud storage gateways are also useful for private or hybrid cloud scenarios. With more companies wanting the elasticity and cost benefits of cloud storage while retaining control of their data, these devices can present a useful path to integrate legacy applications with cloud-based storage accessed over local networks.

The market for cloud storage gateways

According to a new research report from MarketsandMarkets, the cloud storage gateway market is expected to grow from $909.2 million to $3,579.2 million by 2020, a CAGR of 31.5% over the forecast period.
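A quick sanity check of those figures, assuming a five-year forecast window from a 2015 base:

```python
# Check that the quoted CAGR and end value are roughly consistent,
# assuming a five-year window (2015 to 2020) for the forecast.
start_value = 909.2          # $ million
cagr = 0.315
years = 5

projected = start_value * (1 + cagr) ** years
print(f"${projected:.1f} million")   # ~$3,575 million, close to the quoted $3,579.2M
```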

Conclusion

Because cloud storage gateways let users enjoy the benefits of the cloud while still retaining control of their data and operations, these devices are becoming a hit among SMBs that need hybrid storage driven by on-premises requirements.
