Software Defined Storage (SDS) is one of the latest trends in the storage world. At the same time, it is creating confusion among storage buyers, since the concept is still evolving. Frankly speaking, the term remains loosely defined, and every company connected to the data storage field is coming up with its own definition, which tends to align with its own product line.
But in simple terms, SDS is the separation of control-plane operations from the storage array or appliance. Separating control operations from data operations changes how new features and complex functions are added to the storage pool. Control operations reside in server virtual machine instances, which means they can be distributed throughout the server farm and created or removed to match the needs and workloads of an enterprise IT environment.
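The control/data split described above can be sketched in a few lines of code. This is a purely illustrative toy, not any vendor's API: every class and method name here is hypothetical, and the placement policy is deliberately trivial.

```python
# Illustrative sketch of the control-plane / data-plane split in SDS.
# All names are hypothetical; a real system would add replication,
# failure handling, and persistence.

class DataNode:
    """Data plane: only stores and serves blocks; no policy logic lives here."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data

    def read(self, block_id):
        return self.blocks[block_id]


class ControlPlane:
    """Control plane: runs as software (e.g. in a VM) and decides placement.
    It can be moved, scaled, or replaced independently of the data nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, block_id):
        # Trivial placement policy: spread blocks across nodes by hash.
        return self.nodes[hash(block_id) % len(self.nodes)]


nodes = [DataNode("n1"), DataNode("n2")]
ctrl = ControlPlane(nodes)
target = ctrl.place("vol0/block42")   # control decision
target.write("vol0/block42", b"payload")  # data operation
print(target.read("vol0/block42"))
```

The point of the separation is visible in the structure: `DataNode` knows nothing about policy, so the `ControlPlane` instance could be re-hosted anywhere in the server farm without touching the stored data.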
Put another way, and as most IT professionals understand it, software defined storage is a platform in which the hardware in the storage appliance is driven by intelligent, automated software. This approach reduces the appliances to simple storage devices managed by smart software.
Some data storage appliance vendors also use the phrase "Software Defined Storage" when talking about their hardware products, once they have added storage virtualization technology to their hardware's storage controllers or storage servers.
Now, for those who are not aware of the benefits of a Software Defined Storage platform, here are some points to consider:
- SDS tunes the storage system so that the user can make the best possible use of the available storage media, such as rotating disks, flash cache and flash storage modules. In most cases, applications are not even aware of which media is supporting their work.
- With the help of an SDS platform, a combination of storage devices can be assembled that makes sense for an organization's workloads, budget and performance requirements.
- SDS can provide highly available storage without requiring the purchase of expensive, special-purpose storage devices.
- With the intelligence of software, the available storage resources can be assigned and reassigned as the organization's workloads change over time.
- Storage can be shared between Linux, Windows and UNIX workloads, even though each operating system sees the centralized storage as its own native storage resource.
- With the help of SDS, new storage technology can be added and used alongside older forms of technology without requiring application changes.
Now, for those IT pros who are interested in having a Software-Defined-Storage platform in their enterprise IT environment, here's a recommendation.
StoneFly, Inc. offers enterprise-class solid-state storage through its USS Hyper Converged appliance to produce the ultimate software-defined virtual computing solution. Use of a virtualized operating system allows complete hardware utilization and a considerable reduction in power and cooling costs.
StoneFly Hyper Converged Unified Storage and Server is a hybrid appliance that incorporates flash and rotating disk as storage media. The flash-based storage delivers the massive IOPS needed in high-performance environments, while the enterprise hard drives provide low-cost, high-capacity storage, in keeping with the principles of software defined solutions.
For more details call 510.265.1616 or click on StoneFly USS
Apple has issued a new security warning to its iCloud users amid reports that some hacking groups are trying to intrude into its network and are attempting to steal passwords and other data from people who use the popular service in China.
Apple has also issued a media statement that, contrary to reports in certain sections of the media, its servers have not been compromised and its network remains secure and intact. Apple posted these details on Tuesday on its support website, but did not mention China or provide any details on the attacks.
But a popular online resource added that some Asian users, most of them Chinese internet users, have begun seeing warnings indicating they were diverted to an unauthorized website when they attempted to sign into their iCloud accounts. This diversion points to a "man-in-the-middle" attack, which could allow a third party to copy and steal the passwords users enter when they think they are signing into Apple's service. Hackers could then use the passwords to collect other data from the users' accounts. There is a possibility that a repeat of last month's iCloud celebrity photo leak could play out again in this scenario.
As per some reliable sources, this series of network attacks on Apple's iCloud from China is being carried out by an activist group that appears to be opposed to the sale of the Apple iPhone 6 and iPhone 6 Plus in China. The group wants to show the world that iPhone 6 phones loaded with Apple's new OS can be hacked, even though the company claims its latest breed of smartphones ships with software featuring enhanced encryption to protect Apple users' data.
However, California-based Apple Inc. said in its post that the attacks have not affected users who sign into iCloud from their iPhones, iPads, or Mac computers running the latest Mac operating system and Apple's Safari browser. At the same time, Apple suggested that users verify whether they are connecting to a legitimate iCloud server by using the security features built into Safari and other browsers such as Firefox and Google's Chrome. These browsers display a warning when users connect to a site that does not have a digital certificate verifying its authenticity.
If users get a warning from their browser that they are visiting a website with an invalid certificate, they should not proceed and should exit immediately.
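The certificate check the browsers perform can be approximated programmatically. The sketch below uses Python's standard `ssl` module, which by default verifies both the CA chain and the hostname, roughly the same checks that trigger the browser warning described above. The function name is our own, and this is a simplified approximation of what a browser does, not a replacement for it.

```python
# Approximate a browser's certificate validation using Python's ssl module.
import socket
import ssl

def has_valid_certificate(host, port=443, timeout=5):
    """Return True if `host` presents a certificate that chains to a
    trusted CA and matches the hostname; False on any TLS failure."""
    # create_default_context() enables CA verification and hostname checking,
    # mirroring the checks a browser performs before showing a warning.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, ssl.CertificateError, OSError):
        return False
```

A man-in-the-middle redirecting traffic to a server with a self-signed or mismatched certificate would make this function return False, just as it makes the browser display its invalid-certificate warning.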
Apple suggested that the fresh round of network attacks appears unrelated to an episode last month in which hackers stole nude photos of several celebrities who had saved their private moments on Apple's cloud storage platform. As soon as the leak surfaced, there was a lot of buzz on social media that damaged the reputation of Apple's iCloud service to some extent. Apple then asked the FBI to investigate the leak.
The FBI and Apple later released a press statement saying that hackers had obtained the user passwords through so-called "phishing attacks" or by guessing the security questions that allowed access. The company stated that its servers were not breached in any way.
Now, in order to avoid a similar embarrassment, Apple has issued a warning to all its iCloud users to access its services only through trusted sources.
StoneFly, the pioneer of iSCSI storage and a subsidiary of Dynamic Network Factory, has come up with a hyper-converged, flash-based appliance: a virtual computing platform well capable of producing a software-defined data center.
StoneFly USS Hyper Converged Appliance takes a different and much simpler approach to converged architecture by incorporating local direct-attached storage for faster performance and greater flexibility. Each node in a new StoneFly Cluster includes flash-based storage to deliver massive IOPS for high performance, as well as enterprise hard disk drives for low-cost, high-capacity storage adhering to the principles of software defined solutions.
The basic principle of the StoneFly USS solution is to radically simplify the traditional infrastructure of data centers. This is achieved by provisioning each volume as iSCSI, Fibre Channel, or NAS. Its hypervisors allow multiple Virtual Machines (VMs) to run on a single physical host and mediate all I/O operations, including read and write requests to the centralized NAS and SAN storage arrays that typically provide shared storage for all of the VMs.
StoneFly implements all control logic as a software based service running on enterprise class Solid-State storage. Virtual Storage Controllers run on each cluster node improving scalability and resilience, while preventing performance bottlenecks since the storage and control logic is now local to the new guest Virtual Machines.
“We are bringing out the next generation of hyper converged software defined virtual infrastructure into the marketplace at the lowest IOPS per dollar,” said Mo Tahmasebi, CEO and President of StoneFly, Inc.
He added that StoneFly USS can help you replace your datacenter by migrating your existing Windows and Linux physical servers into virtual machines hosted on the StoneFly USS.
If needed, the migration of data will be handled by experts from StoneFly, Inc.
Here’s what you can do with a StoneFly USS:
- Your enterprise IT team can quickly spin up new virtual machines on the StoneFly USS.
- This appliance can be used as iSCSI storage for your physical machines and virtual machines, and as global shared back-end object/block/image storage for OpenStack.
- With the help of the StoneFusion operating system, which makes this storage software-defined, you can create numerous delta-based snapshots to back up the iSCSI volumes. The same patented, award-winning network operating system can then recover mountable read-write snapshot volumes.
- With StoneFly USS running the StoneFusion operating system, all virtual machines and storage can be synchronously replicated to a second on-premise USS appliance for business continuity.
- Optimize your data with StoneFly’s optional enterprise-level features which include data deduplication, encryption, thin provisioning and more.
- Storage and virtual machines can be scaled out across multiple USS nodes.
- High-performance hardware RAID protects your data and increases system uptime.
So, for all of you who want a storage appliance that can serve a dual purpose as a server and as scalable storage, StoneFly USS is the best choice to make. This software defined storage appliance can be used as a hyper-converged infrastructure solution to consolidate all server and storage needs into one easy-to-manage appliance. With the help of a virtualized operating system, complete hardware utilization and a considerable reduction in power and cooling costs can be achieved.
Nowadays, the trend is toward hybrid storage, which meets high-performance needs with SSDs and basic high-capacity needs with hard disks.
So, for enterprises that need tiered storage, StoneFly Unified Storage & Server is a wise choice: it offers an automated tiered storage environment combining SSD and disk, and it runs on an intelligent, software-centric architecture.
For more details call 510.265.1616 or click on StoneFly USS Unified Storage & Server Hyper Converged Appliance
The data storage market is already flooded with a myriad of hard drive classes and models. IDC recently estimated that at least one new company enters this business and matures every couple of years. Selecting the right hard drive for a NAS solution has therefore become a daunting task.
Although this selection is a pain for a NAS appliance manufacturer, this article explains some of the major differences between the various HDD classes on the market, and the considerations a NAS solution provider weighs when building a Network Attached Storage solution. This article will also help those interested in building their own NAS solution for home or business use.
Currently, there are four major hard drive classes on the market, each specifically designed for different applications, workloads, MTBF ratings and power-on hours.
Desktop Hard Drives- Desktop drives are designed for notebook PCs and desktops where usually a single drive is installed. Most desktop drives are more affordable, but they seldom come with vibration protection, making them more vulnerable in multi-drive RAID environments where vibration from other drives and the system chassis can affect both drive health and system performance. When installed in NAS systems, desktop drives are suitable for situations where data is not accessed often, such as serving a small group of users who occasionally save or access documents on the drive, or as a backup storage destination that only requires a few hours of activity each day.
Enterprise Drives- These drives are manufactured with enterprise application needs in mind, so they use more advanced technology and superior components to provide better performance, POH ratings, MTBF, vibration protection and error correction. When installed in NAS systems, enterprise drives are suitable for environments that require high data availability and consistent throughput even when moving large amounts of data. This means enterprise drives are more appropriate for businesses with numerous employees accessing files simultaneously from databases, servers, or virtual storage systems.
NAS drives- For users who find desktop hard drives insufficiently durable and enterprise drives too expensive, NAS drives can be a good option. These drives are specifically designed for NAS usage and are offered by only a few companies. They often feature better durability, balanced performance, and lower power consumption compared to desktop drives. Note: some NAS drives lack vibration sensors and may not be suitable for multi-bay and rack systems, so it is better to seek more details from the manufacturer regarding specifications and usage before making a purchase.
Surveillance Drives- Two of the big hard disk drive companies offer surveillance drives to accommodate the 24/7 demands of long video recordings. These drives are optimized for sequential write operations, but offer lower random access performance. For surveillance users who rely on NAS solutions as backup and storage targets for NVRs, video management systems and video servers, NAS solutions fitted with these drives are an excellent match. But some surveillance drives lack vibration sensors and may not be suitable for multi-bay and rack systems.
- Note 1- MTBF means mean time between failures and is a statistic used by manufacturers to state the reliability of hard drives. Generally, the higher the MTBF, the lower the chance of failure.
- Note 2- POH means Power-on Hours, which is the length of time in hours that electrical power is applied to a device. For hard drives, two categories are used: 8/5 means 8 hours per day, 5 days a week, and 24/7 means 24 hours a day, all year round.
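The MTBF and POH figures above can be combined into a more intuitive number: the probability that a drive fails within a year of its rated duty cycle. The sketch below assumes the standard exponential (constant failure rate) model that underlies MTBF arithmetic; the specific MTBF values used are illustrative examples, not quotes from any datasheet.

```python
# Convert an MTBF rating into an approximate annualized failure rate (AFR),
# assuming a constant failure rate (exponential model).
import math

def annualized_failure_rate(mtbf_hours, poh_per_year=8760):
    """Probability of failure within one year, given MTBF in hours and
    the yearly power-on hours implied by the drive's duty cycle."""
    return 1 - math.exp(-poh_per_year / mtbf_hours)

# Illustrative: a 1.2M-hour MTBF enterprise drive run 24/7 (8,760 h/year):
print(round(annualized_failure_rate(1_200_000) * 100, 2))  # 0.73 (percent)

# Illustrative: a 600k-hour MTBF desktop drive on an 8/5 duty cycle
# (8 h x 5 days x 52 weeks = 2,080 h/year):
print(round(annualized_failure_rate(600_000, poh_per_year=2080) * 100, 2))
```

This also shows why the POH rating matters: the same MTBF implies a much lower yearly failure probability for a drive that is only powered 8/5 than for one running 24/7.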
Therefore, by keeping the four drive classes above in mind, a NAS solution can be smartly designed.
However, for all you enterprise IT professionals who want a NAS solution for enterprise needs, here's some advice.
It is always better to go for a NAS solution built by a well-known NAS solution provider like StoneFly, Inc. If you want high scalability and redundancy in your solution, it is always better to rely on a dedicated Network Attached Storage vendor.
An enterprise-class NAS solution such as the StoneFly Twin Scale-Out Appliance (TSO) delivers unprecedented performance, redundancy and scalability. It can prove to be a cost-effective disk-based solution that offers accessibility and data-integrity advantages over tape. Performance that scales as capacity grows enables high-volume throughput, while multi-protocol support provides maximum interoperability and flexibility. High-performance NAS access enables easy consolidation of archives across multiple application and compute environments, and RAID protection gives enterprise NAS solutions better availability, resiliency and drive-rebuild times.
Therefore, for enterprise needs, it is always wise to rely on a purpose-built enterprise NAS system.
Barack Obama, the president of the United States, issued an executive order last Friday to have secure chip-and-PIN technology embedded into government-issued credit and debit cards, as part of a broader move aimed at stemming payment data breaches.
Under this new order, government-issued cards that deliver federal benefits such as Social Security will have embedded microchips instead of the usual magnetic strips. Additionally, credit card holders will have to enter a PIN when making a transaction, similar to a debit card.
On Friday, the US president revealed that he recently became a victim of credit card fraud, when he discovered that his credit limit had been exhausted by fraudulent transactions.
Explaining further, Mr. Obama recounted a recent experience with plastic money at a New York restaurant. He said that last month he went to the restaurant with his family, and while paying a modest bill he decided to use his credit card. It was then that the US president discovered that his credit limit had been used up and that he was a victim of credit card fraud. Obama uses a JPMorgan credit card, and it was this card that was caught up in the fraudulent transactions.
Initially, the manager of the restaurant thought the card had been declined for non-payment. But when Obama clarified that he keeps his finances in good order, a detailed probe was made into the issue. It was then that the world learned that US President Barack Obama's credit card had been caught up in fraudulent transactions.
Obama filed a complaint at the Federal Consumer Financial Protection Bureau and a detailed probe is being conducted on this issue.
Thus, after experiencing first-hand the ordeal a credit card fraud victim goes through, Barack Obama immediately moved to reorganize the rules governing payment cards. He ordered a card replacement program that is set to begin on January 1, 2015 and end by November 2015.
The order comes following a spate of breaches of payment systems at major retailers including Home Depot, Kmart and Target, which have affected more than 100 million Americans over the past year, as per White House reports.
As part of the new order, a chip-and-PIN system, already a standard in Europe, will soon be introduced in the United States. The order aims to make it harder for cyber thieves to steal payment card information. The technology requires special card readers to scan the chip, and entry of a PIN, which only the card holder knows, will be made mandatory.
Although it isn't foolproof, it is considered superior to the current magnetic strip and signature system.
Currently, the new order endorsed by US President Barack Obama covers only government cards and the readers at federal agency facilities, but the move could inspire more retail stores and banks to get on board. Stores including Home Depot, Target and Walmart plan to have readers for chip-and-PIN cards installed in their stores by the middle of next year.
Obama said on Friday that he would be supporting the US Federal Trade Commission in its development of a new resource for victims of data theft via www.identityTheft.gov to streamline the process of reporting thefts to credit bureaus.
The data storage field is set to witness a new evolution with the arrival of 15TB and, eventually, 50TB hard drives. At the CEATEC trade show in Japan last week, TDK, a well-known name in the hard drive industry, demonstrated its new breed of hard drives based on heat-assisted magnetic recording (HAMR) technology.
The company also officially announced that, with the help of HAMR, it will bring out 15TB hard drives by the middle of next year.
Heat-assisted magnetic recording (HAMR) magnetically records data on high-stability media, using laser thermal assistance to first heat the material. This greatly reduces the size of the bit cells without negatively affecting read and write reliability. As a result of this technology, hard drives can store far more data on the available platters than conventional perpendicular recording allows.
Previously, it was believed that HAMR would only become viable sometime in 2017. But companies like TDK, Seagate and Western Digital have intensified their HAMR development to such an extent that hard drives based on this technology will become a reality sooner.
In the meantime, Western Digital recently announced that it will start producing 30TB and 50TB hard drives by 2017. Initially, these drives will be expensive and will therefore be targeted at companies running data servers, but eventually home consumers will be able to afford them as supply catches up with demand.
William Cain, Vice President at WD, added that his company plans to increase hard drive capacity by decreasing the size of the bits that hold the data on the disk, and will use HAMR to overcome the challenges this creates. The foremost challenge comes after the bits shrink: as bit size decreases, the ability to write data reliably diminishes. This is where HAMR comes to the rescue, as it allows writing more data in the same surface area because the disk is heated with lasers during the writing process.
Well, as of now, everything in this segment looks innovative. But the question that instantly pops up is how well this technology will handle write speed.
A few storage technologists following these trends have already branded HAMR a good innovation for increasing hard drive capacities. At the same time, they are sure this technology may not help increase the access (read/write) speeds of these drives.
Backups are essential; that is now a fact, as they keep business continuity alive in any enterprise. If an enterprise loses access to the primary version of its data, a secondary version is available to keep data access alive for users.
Theoretically speaking, backup and disaster recovery (DR) are not interchangeable terms, but the latter is not possible without the former. In fact, disaster recovery means having the tested wherewithal to get systems restored and running as quickly as possible, including the associated data.
Adding to this, the increasing use of virtualization is slowly changing the way DR is carried out. In a virtual world, a system can be recovered by duplicating images of virtual machines and recreating them elsewhere. So VM replication, disaster recovery and the way the market has adapted to virtualization are becoming critical topics to consider.
To explain this better, let's walk through the traditional recovery process step by step:
- In the old days, if a server crashed, the enterprise IT team would go for a new server. Depending on the budget, the team might keep a spare appliance on hand, probably an out-of-date model, if it only had to serve for a brief period.
- Then they would either reinstall all the system and application software, attempting to restore all the settings as they were before, unless of course that had been done in advance. That would not have been possible if only one or two redundant standby servers had been provisioned for many more live ones, with no way of knowing which would fail.
- Or, for a really critical app, a "hot" standby server might be kept always ready to go on demand. However, that would have doubled the cost of application ownership, with all hardware and software costs paid twice.
- Then they would restore the most recent data backup. A database might be almost up to date, but for a file server an overnight backup may be all that is available, taking you back only as far as the end of the last working day. Anything in memory at the time of the failure is likely to have been lost. How far back you aim to go is defined in a backup plan as the recovery point objective (RPO).
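The recovery point objective mentioned in the steps above can be reasoned about mechanically: the worst-case data loss is simply the gap between the moment of failure and the most recent successful backup. The sketch below illustrates this with a hypothetical nightly backup schedule; the function names and dates are our own, chosen for illustration.

```python
# Check an RPO against a backup schedule: worst-case data loss is the
# time elapsed between the last good backup and the failure.
from datetime import datetime, timedelta

def worst_case_data_loss(last_backup, failure_time):
    """Everything written after the last backup can be lost."""
    return failure_time - last_backup

def meets_rpo(last_backup, failure_time, rpo):
    """True if the worst-case loss window fits within the stated RPO."""
    return worst_case_data_loss(last_backup, failure_time) <= rpo

# Hypothetical nightly file-server backup at 23:00, failure next day at 16:30:
backup = datetime(2014, 10, 20, 23, 0)
failure = datetime(2014, 10, 21, 16, 30)
print(worst_case_data_loss(backup, failure))            # 17:30:00 of changes lost
print(meets_rpo(backup, failure, timedelta(hours=24)))  # True
print(meets_rpo(backup, failure, timedelta(hours=4)))   # False
```

This makes the trade-off explicit: an overnight backup can never satisfy an RPO of a few hours, which is exactly the gap that VM-image replication (discussed next) is meant to close.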
In today's world, backup and disaster recovery procedures have completely changed, and all the credit goes to virtualization. This technology has made everything simpler. First and foremost, data can be easily backed up as part of an image of a given virtual machine, including application software, local data, settings and memory. Second, there is no need for a physical server rebuild, as the VM can be recreated in any other compatible virtualized environment, whether spare in-house capacity or capacity acquired from a third-party cloud service provider. This means the costs of redundancy gradually diminish toward zero.
As a result, disaster recovery is getting cheaper, quicker, simpler and more complete in a virtual world, and backup is backed up by faster recovery time objectives that are easier to achieve.
Following this trend, a few traditional backup storage suppliers have adapted their products to the shift toward virtualization.
For example, in 2014 StoneFly, Inc. released its DR365, which it believes matches the capability and performance of the newer arrivals. The StoneFly DR365 is a purpose-built hyper-converged infrastructure solution that can consolidate all server, storage and backup needs into one easy-to-manage appliance.
The StoneFly DR365 Backup & Disaster Recovery appliance automatically creates backup images of physical servers based on flexible, user-defined policies. These images can be restored (bare-metal recovery) to the same hardware, to dissimilar hardware to build a new server, or mounted as a drive to retrieve an earlier copy of a specific file or folder.
Every backup is automatically converted into a virtual machine, and these virtual machines can be quickly spun up and hosted on a DR365 appliance. That means DR365 users can manage all their backup operations for a datacenter or office from a single central management console.
The StoneFly DR365 appliance includes a virtual SAN appliance, a virtual enterprise backup engine and the ability to create additional virtual storage or servers as needed. So DR365's flexibility replaces the "fixed hardware model" of the past with software-defined, on-demand resource allocation (CPU, memory, storage, etc.) based on your application needs.
As virtualization technology evolves and industry adoption increases, organizations are recognizing benefits that reach far beyond the most popular justifications for virtualization: reduced infrastructure costs and increased IT agility. Data storage vendors offering backup solutions are therefore using virtualization as the next frontier to enable and enhance disaster recovery strategies.