Virtualization is changing the way businesses purchase and use their computing hardware. By simply spinning up a new virtual machine on existing hardware, an enterprise can avoid much of the cost of new hardware, because it cuts down the need to provision a new, expensive physical server for every new workload. However, physical resources cannot be ignored, as they still act as the backbone of any data center.
Virtualization has allowed us to abstract workloads from the underlying hardware. But without a solid foundation of hardware resources, bottlenecks and resource contention issues will crop up and can cripple a data center. Moreover, without reliable physical resources, the ease with which server administrators can provision a new VM can instead lead to VM sprawl, causing major networking headaches. This issue therefore needs to be watched closely and addressed.
Modern servers now ship with massive amounts of memory, multiple network cards and support for solid-state storage. With all the options available, it is hard to know what you need today and what you will need in the future.
This article therefore gives an overview for those who are on the lookout for the best hardware for virtualization.
Processor and Memory- The search for the best server virtualization hardware always begins with the selection of memory and compute resources. More often than not, memory will be the limiting factor in the number of virtual machines a server can host, and a shortage of either RAM or processing power directly affects performance. In many cases, organizations choose to repurpose existing hardware for virtualization, but they should still know what to look for when it is time to replace aging servers. In general terms, when you shop for any server, the buying decision should focus on the CPUs. For virtualization hosts, a higher core count trumps the speed of each core almost every time. You would be surprised how many virtual servers you can squeeze onto a box running 1.7 GHz cores as long as there are plenty of cores to be had. If the budget allows outfitting boxes with 2.93 GHz chips, by all means go for them, but similar overall capacity can be obtained from AMD Opteron 4000 series CPUs running anywhere from 1.7 GHz to 2.2 GHz per core at six cores per CPU; a few servers with two of those processors can take a medium-size virtualization deployment surprisingly far. It is true that a faster CPU delivers faster single-threaded performance, but in normal server operating conditions CPUs often sit nearly idle for a significant portion of their cycles, and even when they are busy, slowdowns in other subsystems can leave speedy CPUs waiting while data is retrieved from disk, RAM or the network. If the choice is between a 6-, 8- or 12-core CPU at a lower clock speed and a 4- or 6-core CPU at a faster clock speed, go with the higher core count.
Coming to memory, it is always wise to pack as much RAM as you can afford into the server. The amount of RAM is the biggest limiting factor in how many virtual servers you can run. Packing 64GB of RAM or more into a server with 12, 16 or 24 cores makes an awful lot of sense, even though RAM pricing jumps at the higher densities. 4GB or 8GB DIMMs can look much more expensive than a pile of 2GB DIMMs, but you do not want to be forced to buy another physical server just to distribute a RAM load; then you not only have to shell out for the new server, you need to shell out for additional licenses as well.
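As a rough illustration of this arithmetic (all figures below are hypothetical, not sizing guidance for any particular vendor or hypervisor), a back-of-the-envelope estimate shows why RAM is usually the ceiling you hit first:

```python
# Back-of-the-envelope VM density estimate (hypothetical numbers).
def max_vms(host_ram_gb, host_cores,
            vm_ram_gb=4, vcpus_per_vm=2,
            hypervisor_overhead_gb=8, vcpu_to_core_ratio=4):
    """Return how many VMs fit and which resource runs out first."""
    ram_limit = (host_ram_gb - hypervisor_overhead_gb) // vm_ram_gb
    cpu_limit = (host_cores * vcpu_to_core_ratio) // vcpus_per_vm
    return min(ram_limit, cpu_limit), ("RAM" if ram_limit <= cpu_limit else "CPU")

# A 12-core host with 64 GB of RAM runs out of memory long before cores:
print(max_vms(host_ram_gb=64, host_cores=12))   # -> (14, 'RAM')
```

Even with a modest 4:1 vCPU oversubscription assumption, the 12 cores could carry roughly 24 such VMs, but 64 GB of RAM caps the host at about 14.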
Rack Redundancy- The basic advice is to have enough physical servers to survive the loss of a single server, and for larger implementations, enough to survive the loss of several. The design should also leave enough headroom to support regular maintenance. For instance, if you cannot take a physical host offline for even five minutes to replace a failed DIMM, because the remaining servers cannot adequately handle the RAM or processing load caused by the loss of that host, you are in trouble: it takes away one of the prime benefits of server virtualization, the reduction in scheduled downtime. So plan carefully in this respect.
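A hedged sketch of that N+1 reasoning (the helper and the RAM figures are hypothetical and not tied to any particular hypervisor): before taking any host down, verify that the surviving hosts can absorb the total workload.

```python
# Simple N+1 headroom check (figures are GB of RAM, purely illustrative).
def survives_single_host_failure(ram_in_use_per_host, ram_capacity_per_host):
    """True if the remaining hosts can absorb the load when any one host fails."""
    total_in_use = sum(ram_in_use_per_host)
    for i, _ in enumerate(ram_in_use_per_host):
        remaining_capacity = sum(ram_capacity_per_host) - ram_capacity_per_host[i]
        if total_in_use > remaining_capacity:
            return False
    return True

# Three hosts with 64 GB each, running 40 GB, 48 GB and 36 GB of VMs:
print(survives_single_host_failure([40, 48, 36], [64, 64, 64]))  # -> True (124 <= 128)
```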
Storage factor- Every virtualized platform should be built on shared storage. Without it, each server is essentially a silo, and the VMs running on those siloed servers cannot be protected against physical server failure. Building and expanding the virtualized infrastructure also gets harder and more tedious without shared storage. So always make sure the shared storage solution is robust and redundant. Whether you plan on using iSCSI, NFS or Fibre Channel, take a good look at your disk I/O needs before you start buying switches, HBAs and disk. In many cases, SATA drives are more than adequate for general-purpose server virtualization, and in some cases NFS will outperform iSCSI for day-to-day computing needs. This may lead you in a different direction than your storage vendor wants to go, but unless you are dealing with a heavy transactional disk workload, you probably do not need an SSD or even a SAS-based array. In fact, unless you are pushing 10G to each server, these speedier storage mechanisms may be pointless. And with the proliferation of cheap disk, do not stick to traditional RAID 5; go for RAID 6 or RAID 10 on your array. Yes, you give up some space, but the performance and reliability of those choices make them worthwhile.
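To make the RAID 5 vs. RAID 6 vs. RAID 10 space trade-off concrete, here is a small sketch of the idealized usable-capacity arithmetic (it ignores formatting overhead and hot spares, so treat the numbers as approximations):

```python
# Approximate usable capacity for common RAID levels (ignores spares/formatting).
def usable_tb(drives, drive_tb, level):
    if level == "RAID5":
        return (drives - 1) * drive_tb        # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * drive_tb        # two drives' worth of parity
    if level == "RAID10":
        return (drives // 2) * drive_tb       # mirrored pairs
    raise ValueError("unsupported RAID level")

# With eight 2 TB drives:
for level in ("RAID5", "RAID6", "RAID10"):
    print(level, usable_tb(8, 2, level), "TB usable")
# RAID5 14 TB, RAID6 12 TB, RAID10 8 TB -- the space you give up buys
# tolerance of a second drive failure (RAID 6) or faster rebuilds (RAID 10).
```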
Network Connections- On the networking side, do not forget that it is far less expensive to aggregate multiple 1G copper links than to implement 10G, although 10G gives you far more room for future growth. Just remember that it is simple, and possibly cheaper, to upgrade those servers with 10G NICs later than to deal with a smaller number of fast servers that are overburdened by their virtual server loads. General-purpose virtual servers will not make much use of 10G for either normal server traffic or disk I/O, but highly transactional applications will, so find the balance based on your needs.
Finally, remember that server virtualization consolidates infrastructure into fewer physical units. The better equipped you are to deal with the failure of any one of those components, the better off you are overall.
So, the real benefit of using server virtualization for your computing needs can only be attained when you support it with the right hardware.
Researchers from the University of Southern California's Viterbi School of Engineering have found that, in some parts of the world, the internet sleeps like any other living creature. The researchers, who study how big the internet is, say this finding will help scientists and policy makers develop better systems to measure and track internet outages, such as the one that hit New York during Hurricane Sandy.
“The internet is important in our lives and businesses, from streaming movies to buying online. Measuring network outages is a first step to improve Internet reliability,” said John Heidemann, research professor at the USC Viterbi School of Engineering Information Sciences Institute.
The research also revealed that in places like the United States and Western Europe, broadband is always active. But in parts of Asia, South America and Eastern Europe, internet access is only available for 15-16 hours a day; the rest of the time it effectively dozes off, only to come alive the next day.
These findings were made by Heidemann, who conducted the study in collaboration with USC's Lin Quan and Yuri Pradkin. The results will be presented at the 2014 ACM Internet Measurement Conference on November 5th, 2014.
The study notes that there are around 4 billion IPv4 internet addresses. Heidemann and his team pinged about 3.7 million address blocks (representing over 750 million addresses) every 11 minutes over the span of two months, looking for daily patterns.
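The paper's exact probing methodology is not detailed here, but a heavily simplified sketch of the general idea, repeatedly probing address blocks and recording which respond, might look like the following (this is not the USC/ISI tooling; the addresses are documentation-range examples and the ping flags are Linux-specific):

```python
# Minimal sketch of periodic reachability probing (not the USC/ISI methodology;
# the addresses below are documentation-range examples).
import datetime
import subprocess
import time

BLOCKS = ["192.0.2.1", "198.51.100.1", "203.0.113.1"]  # one probe address per block

def is_responsive(addr, timeout_s=1):
    """Send a single ICMP echo request and report whether a reply came back."""
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), addr],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def probe_round():
    stamp = datetime.datetime.utcnow().isoformat()
    return stamp, {addr: is_responsive(addr) for addr in BLOCKS}

if __name__ == "__main__":
    for _ in range(3):
        stamp, results = probe_round()
        print(stamp, results)
        time.sleep(660)            # the study repeated probes every 11 minutes
```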
The research was funded by the Department of Homeland Security Science and Technology Directorate (HSARPA, Cyber Security Division) via the Air Force Research Laboratory Information Directorate and SPAWAR.
Google, the internet juggernaut known for its innovative web services, is set to launch a new email service called "Inbox". The new application coming out of the Googleplex will have a different layout, designed to focus on what really matters.
Google Android and Apps head Sundar Pichai revealed some information on "Inbox" and stated: "We got too much email, inboxes are time-consuming to manage and the truly important info often gets overlooked in the clutter, particularly when accessing email from smart phones. So, Inbox will be a sure shot single answer to all these concerns."
Pichai calls the new "Inbox" email service incredibly innovative. He added that the service builds on an email categorizing feature introduced last year in Gmail. For example, all of a user's purchase receipts or bank statements will be neatly grouped together in the new "Inbox" service.
Pichai added that the present Gmail service does not go far enough in categorizing email automatically, so he feels users are missing important information in the clutter, particularly when accessing email from smartphones.
Google probably plans to target "Inbox" specifically at smartphone users in the future, but it has not specified any such information in its media statement.
The other big Inbox feature will be "Highlights", where important emails such as flight or event information will be surfaced by priority.
Many technologists feel that the "Highlights" feature of Inbox looks convincing on paper, but they are of the opinion that Google may face challenges in getting the user interface right, so as not to create a confusing mess of information snippets grabbed from multiple messages.
The other highlight in Google's Inbox will be "Assist", which lets people set reminders that are triggered at pre-determined times. For instance, if the user needs to call a store manager at a particular time, they can set Assist to remind them about the call. If the user cannot act on the reminder at that moment, they can snooze it for a specific amount of time or until they are finished with whatever was occupying them. This is similar to some apps on Google Play where reminders for birthdays and anniversaries can be prioritized.
As of now, Google has picked only 5,000 users for its "Inbox" testing spree; the company sent invites to some of its patrons early this summer.
From the end of this month, Google will offer "Inbox" to the world through invitation. To join, log on to www.google.com/inbox/ and simply use your Gmail address by sending a request to email@example.com
Those lucky enough to get the service can have the emails from their current Gmail account automatically migrated to Inbox.
Although Google is offering this service as a separate product, testers can use their Gmail username for now, and all their messages, labels and contacts from Gmail will populate into the "Inbox" service.
Google plans to formally unveil its “Inbox” in mid 2015.
Whether it will succeed is now a billion-dollar question (not a million-dollar one).
Software Defined Storage (SDS) is the latest trend in the storage world. At the same time, it is creating confusion among storage buyers, since the concept is still evolving. Factually speaking, the term is still fluid, and every company connected to the data storage field is coming up with its own definition, usually one that aligns with its own product line.
In simple terms, SDS is the separation of control-plane operations from the storage array or appliance. This separation of control operations from data operations changes how new features and complex functions are added to the storage pool. Control operations can reside in virtual machine instances on servers, which means they can be distributed throughout the server farm and created or removed to match the needs and workloads of an enterprise IT environment.
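As a purely conceptual sketch of that separation (the class and method names below are invented for illustration and do not reflect any vendor's API), the control plane can be pictured as a lightweight service that makes provisioning and placement decisions, while "dumb" data-plane nodes simply store and serve blocks:

```python
# Conceptual illustration of control-plane / data-plane separation in SDS.
# All names are hypothetical; no real product API is implied.
class DataPlaneNode:
    """A simple storage node: it only holds the volumes it is told to hold."""
    def __init__(self, name, capacity_gb):
        self.name, self.capacity_gb, self.volumes = name, capacity_gb, {}

    def create_volume(self, vol_id, size_gb):
        self.volumes[vol_id] = size_gb
        self.capacity_gb -= size_gb


class ControlPlane:
    """Runs anywhere in the server farm; owns placement and policy decisions."""
    def __init__(self, nodes):
        self.nodes = nodes

    def provision(self, vol_id, size_gb):
        # The policy decision lives here, not in the array firmware.
        target = max(self.nodes, key=lambda n: n.capacity_gb)
        target.create_volume(vol_id, size_gb)
        return target.name


nodes = [DataPlaneNode("node-a", 500), DataPlaneNode("node-b", 800)]
ctrl = ControlPlane(nodes)
print(ctrl.provision("vol-001", 100))   # -> 'node-b' (most free capacity)
```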
Put another way, and as most IT professionals expect, software defined storage is a platform where the hardware in the storage appliance is intelligently driven by automated software. This approach turns the appliances into simple storage devices driven by smart software.
Some data storage appliance vendors are also using the phrase “Software Defined Storage” when talking about their hardware products and when they have added storage virtualization technology into their hardware’s storage controllers or storage servers.
For those who are not yet aware of the benefits of a Software Defined Storage platform, here are some points meant to enlighten:
- SDS tunes the storage system so that the user can make the best possible use of the available storage media, such as rotating disks, flash cache and flash storage modules. In most cases, applications are not even aware of which media is supporting their work.
- With an SDS platform, a combination of storage devices can be assembled that makes sense for an organization's workloads, budget and performance requirements.
- SDS can provide highly available storage without requiring the purchase of expensive, special-purpose storage devices.
- With the intelligence of software, available storage resources can be assigned and reassigned as an organization's workloads change over time.
- Storage can be shared between Linux, Windows and UNIX workloads, even though each operating system sees the centralized storage as its own storage resource.
- With the help of SDS, new storage technology can be added and used alongside older forms of technology without requiring application changes.
For those IT pros who are interested in having a Software-Defined Storage platform in their enterprise IT environment, here's a recommendation.
StoneFly, Inc. offers enterprise-class solid-state storage through its USS Hyper-Converged appliance to produce the ultimate software-defined virtual computing solution. The use of a virtualized operating system allows complete hardware utilization and a considerable reduction in power and cooling costs.
The StoneFly Hyper-Converged Unified Storage and Server is a hybrid appliance incorporating both flash and rotating disk as storage media. The flash-based storage delivers the massive IOPS needed for high-performance environments, while the enterprise hard drives provide low-cost, high-capacity storage, in keeping with the principles of software defined solutions.
For more details call 510.265.1616 or click on StoneFly USS
Apple has issued a new security warning to its iCloud users amid reports that hacking groups are trying to intrude into its network and steal passwords and other data from people who use the popular service in China.
Apple has also issued a media statement saying that, contrary to reports in certain sections of the media, its servers have not been compromised and its network remains secure and intact. Apple posted these details on its support website on Tuesday, but did not mention China or provide any details on the attacks.
However, a popular online resource added that some Asian users, and many Chinese internet users in particular, have begun seeing warnings indicating that they had been diverted to an unauthorized website when they attempted to sign into their iCloud accounts. This diversion points to a "man in the middle" attack, which could allow a third party to copy and steal the passwords that users enter when they think they are signing into Apple's service. Hackers could then use the passwords to collect other data from the users' accounts. There is a possibility of a repeat of the Apple iCloud celebrity photo leak witnessed last month in this scenario.
According to some sources, this series of network attacks on Apple iCloud from China is being carried out by an activist group that appears to be dead set against the sale of the Apple iPhone 6 and iPhone 6 Plus in China. The group wants to show the world that Apple's new iPhone 6 phones can be hacked, even though the company claims that its latest smartphones run software with enhanced encryption features to protect user data.
However, California-based Apple Inc. said in its post that the attacks have not affected users who sign into iCloud from their iPhones, iPads, or Mac computers running the latest Mac operating system and Apple's Safari browser. At the same time, Apple suggested that users verify whether they are connecting to a legitimate iCloud server by using the security features built into Safari and other browsers such as Firefox and Google's Chrome. If they sign in via such a browser, it will show a message warning them when they are connecting to a site that does not have a digital certificate verifying its authenticity.
If users get a warning from their browser that they are visiting a website with an invalid certificate, they should not proceed and should exit immediately.
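The same kind of certificate check the browsers perform can be approximated from a script; here is a minimal sketch using Python's standard ssl module (it simply raises an error if the certificate presented for the host does not validate against the system trust store):

```python
# Minimal sketch of verifying a server's TLS certificate, similar in spirit to
# what Safari, Firefox or Chrome do before showing the padlock icon.
import socket
import ssl

def check_certificate(host, port=443):
    context = ssl.create_default_context()      # uses the system trust store
    with socket.create_connection((host, port), timeout=5) as sock:
        # wrap_socket raises ssl.SSLCertVerificationError on an invalid certificate
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return cert["subject"], cert["notAfter"]

try:
    print(check_certificate("www.icloud.com"))
except ssl.SSLCertVerificationError as err:
    print("Do not proceed - certificate did not verify:", err)
```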
Apple suggested that the fresh round of network attacks appears unrelated to the episode last month in which hackers stole nude photos of several celebrities who had stored them on Apple's iCloud storage platform. As soon as that leak surfaced, there was a lot of buzz on social media that maligned Apple's iCloud services to a certain extent, and Apple asked the FBI to investigate the leak.
The FBI and Apple later stated that the hackers had obtained the user passwords through so-called "phishing attacks" or by guessing the security questions that allowed access. The company issued a media statement saying that its servers were not breached in any way.
Now, in order to avoid a similar embarrassment, Apple has issued a pre-emptive warning to all its iCloud users to access its services only through trusted connections.
StoneFly, the pioneer of iSCSI storage and a subsidiary of Dynamic Network Factory, has come up with a hyper-converged flash-based appliance: a virtual computing platform capable of producing a software-defined data center.
The StoneFly USS Hyper-Converged Appliance takes a different and much simpler approach to converged architecture by incorporating local direct-attached storage for faster performance and greater flexibility. Each node in a new StoneFly cluster includes flash-based storage to deliver massive IOPS for high performance, as well as enterprise hard disk drives for low-cost, high-capacity storage, adhering to the principles of software defined solutions.
The basic principle of the StoneFly USS solution is to radically simplify traditional data center infrastructure. This is achieved by provisioning each volume as iSCSI, Fibre Channel, or NAS. Its hypervisors allow multiple virtual machines (VMs) to run on a single physical host and mediate all I/O operations, including read and write requests to the centralized NAS and SAN storage arrays that typically provide shared storage for all of the VMs.
StoneFly implements all control logic as a software based service running on enterprise class Solid-State storage. Virtual Storage Controllers run on each cluster node improving scalability and resilience, while preventing performance bottlenecks since the storage and control logic is now local to the new guest Virtual Machines.
“We are bringing out the next generation of hyper converged software defined virtual infrastructure into the marketplace at the lowest IOPS per dollar,” said Mo Tahmasebi, CEO and President of StoneFly, Inc.
He added that the StoneFly USS can help you replace your data center by migrating your existing Windows and Linux physical servers into virtual machines hosted on the StoneFly USS.
If needed, the migration of data will be handled by experts from StoneFly, Inc.
Here’s what you can do with a StoneFly USS
- Your enterprise IT team can quickly spin up new virtual machines on the StoneFly USS.
- This appliance can be used as iSCSI storage for your physical machines and virtual machines, and as global shared back-end object/block/image storage for OpenStack.
- The StoneFusion operating system, which makes this storage software-defined, allows the creation of numerous delta-based snapshots to back up the iSCSI volumes. The same patented, award-winning network operating system then makes it possible to recover mountable read-write snapshot volumes.
- With the StoneFly USS running the StoneFusion operating system, synchronous replication of all virtual machines and storage to a second on-premise USS appliance can be achieved for business continuity.
- Optimize your data with StoneFly’s optional enterprise-level features which include data deduplication, encryption, thin provisioning and more.
- Storage and virtual machines can be scaled out across multiple USS nodes.
- High-performance hardware RAID protects data and increases system uptime.
So, for anyone who wants a storage appliance that can serve a dual purpose as both a server and scalable storage, the StoneFly USS is an excellent choice. This software defined storage appliance can be used as a hyper-converged infrastructure solution to consolidate all server and storage system needs into one easy-to-manage appliance. With the help of a virtualized operating system, complete hardware utilization and a considerable reduction in power/cooling costs can be achieved.
The trend nowadays is to go for hybrid storage, which satisfies high-performance needs with SSDs and basic high-capacity storage needs with disk.
So, for enterprises that need tiered storage, the StoneFly Unified Storage & Server is a wise choice, as it offers an automated tiered storage environment combining SSD and disk and runs intelligently on a software-centric architecture.
For more details call 510.265.1616 or click on StoneFly USS Unified Storage & Server Hyper Converged Appliance
The data storage market is already flooded with a myriad of hard drive classes and models. IDC recently estimated that at least one new company enters this business and matures every couple of years. It can therefore be a daunting task to select the right hard drive for a NAS solution.
Although this selection is a pain even for a NAS appliance manufacturer, this article explains some of the major differences between the various HDD classes on the market and the considerations a NAS solution provider makes when manufacturing a Network Attached Storage solution. It will also help those interested in building their own NAS solution for home or business use.
Currently, there are four major hard drive classes on the market, each specifically designed for different applications, workloads, MTBF and power-on hours.
Desktop Hard Drives- Desktop drives are designed for notebook PCs and desktops where usually a single drive is installed. Most desktop drives are more affordable, but seldom come with vibration protection, making them more vulnerable in multi-drive RAID environments where vibration from other drives and the system chassis can affect both drive health and system performance. When installed in NAS systems, desktop drives are suitable for situations where data is not accessed often, such as serving a small group of users who occasionally save or access documents on the drive, or as a backup storage destination that only requires a few hours of activity each day.
Enterprise Drives- These drives are manufactured with enterprise application needs in mind, so they use more advanced technology and superior components to provide better performance, POH, MTBF, vibration protection and error correction. When installed in NAS systems, enterprise drives are suitable for environments that require high data availability and consistent throughput even when moving large amounts of data. This makes enterprise drives more appropriate for businesses with numerous employees accessing files simultaneously from databases, servers, or virtual storage systems.
NAS Drives- For users who find desktop hard drives not durable enough and enterprise drives hard to afford, NAS drives can be a good option. These drives are specifically designed for NAS usage and are offered by only a few companies. They often feature better durability, balanced performance and power consumption compared to desktop drives. Note: some NAS drives lack vibration sensors and may not be suitable for multi-bay and rack systems, so ask the manufacturer for more detail on specifications and intended usage before making a purchase.
Surveillance Drives- Two big hard disk drive companies offer surveillance drives to accommodate the 24/7 demands of long video recordings. These drives are optimized for sequential write operations but offer lower random access performance. For surveillance users who rely on NAS solutions as backup and storage targets for NVRs, video management systems and video servers, NAS solutions equipped with these drives are an excellent match. But some surveillance drives lack vibration sensors and may not be suitable for multi-bay and rack systems.
- Note 1- MTBF means mean time between failures and is a statistic used by manufacturers to state the reliability of hard drives. Generally, the higher the MTBF, the lower the chance of failure (a rough conversion to an annualized failure rate is sketched after these notes).
- Note 2- POH means power-on hours, the length of time in hours that electrical power is applied to a device. For hard drives, two duty-cycle categories are used: 8/5 means 8 hours per day, 5 days a week, and 24/7 means 24 hours a day all year round.
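For a rough feel of what a quoted MTBF implies in practice, the conversion to an annualized failure rate is simple arithmetic (an idealized approximation that only holds within the drive's rated POH duty cycle; the MTBF figures below are hypothetical examples):

```python
# Convert a quoted MTBF into an approximate annualized failure rate (AFR).
def annualized_failure_rate(mtbf_hours, power_on_hours_per_year=24 * 365):
    """AFR ~= hours powered on per year / MTBF (valid while the AFR is small)."""
    return power_on_hours_per_year / mtbf_hours

# A 1.2-million-hour drive vs. a 600,000-hour drive, both running 24/7:
print(f"{annualized_failure_rate(1_200_000):.2%}")  # ~0.73% per year
print(f"{annualized_failure_rate(600_000):.2%}")    # ~1.46% per year
```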
Therefore, by keeping the four drive classes described above in mind, a NAS solution can be designed smartly.
However, for enterprise IT professionals who want a NAS solution for enterprise needs, here's some advice.
It is always better to go for a NAS solution built by a well-known NAS solution provider like StoneFly, Inc. If you want high scalability and redundancy in your solution, it is always better to rely on an established Network Attached Storage manufacturer.
An enterprise-class NAS solution such as the StoneFly Twin Scale-Out Appliance (TSO) delivers unprecedented performance, redundancy and scalability. It can be a cost-effective disk-based solution that offers accessibility and data-integrity advantages over tape. Performance that scales as capacity grows enables high-volume throughput, while multi-protocol support provides maximum interoperability and flexibility. High-performance NAS access enables easy consolidation of archives across multiple application and compute environments. With RAID protection, enterprise NAS solutions offer better availability, resiliency and drive-rebuild behavior.
Therefore, for enterprise needs, it is always wise to rely on a purpose-built enterprise NAS system.