Researchers from the University of Southern California's Viterbi School of Engineering have found that, in some parts of the world, the internet "sleeps" much like a living creature. The researchers, who study the size of the internet, say the finding will help scientists and policymakers develop better systems to measure and track internet outages, such as those that struck New York during Hurricane Sandy.
“The internet is important in our lives and businesses, from streaming movies to buying online. Measuring network outages is a first step to improve Internet reliability,” said John Heidemann, research professor at the USC Viterbi School of Engineering Information Sciences Institute.
The research also revealed that in regions such as the United States and Western Europe, broadband is almost always active. In parts of Asia, South America, and Eastern Europe, however, internet access is often available for only 15-16 hours a day. The rest of the time, it either takes a power nap or dozes off entirely, only to come alive the next day.
Heidemann conducted the study in collaboration with USC's Lin Quan and Yuri Pradkin. The results will be presented at the 2014 ACM Internet Measurement Conference on November 5, 2014.
The study notes that there are around 4 billion IPv4 internet addresses. Heidemann and his team pinged about 3.7 million address blocks (representing over 750 million addresses) every 11 minutes over the span of two months, looking for daily patterns.
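The core idea of the measurement can be sketched in a few lines. The snippet below is purely illustrative (it is not the USC team's code, and the "awake" window and thresholds are assumptions): given hourly responsiveness for an address block, it compares daytime and nighttime availability to flag diurnal, "sleeping" blocks.

```python
# Illustrative sketch: detect a diurnal ("sleeping") address block from
# hourly availability data. The 8:00-20:00 "awake" window and the 0.5
# threshold are hypothetical choices, not values from the study.

def diurnal_score(hourly_up: list) -> float:
    """hourly_up: 24 values in [0, 1], the fraction of addresses in a
    block that answered probes during that hour of the day.
    Returns the gap between daytime and nighttime availability."""
    day = hourly_up[8:20]                      # assumed local "awake" hours
    night = hourly_up[0:8] + hourly_up[20:24]  # remaining 12 hours
    return sum(day) / len(day) - sum(night) / len(night)

# A block that is up ~12-16 hours a day and mostly dark at night:
sleepy = [0.1] * 8 + [0.9] * 12 + [0.1] * 4
# A block with always-on broadband:
steady = [0.95] * 24

print(diurnal_score(sleepy) > 0.5)    # True: strongly diurnal
print(diurnal_score(steady) < 0.05)   # True: effectively always on
```

Repeating such a comparison across millions of blocks, probed every few minutes for weeks, is what lets researchers separate always-on regions from those where access switches off overnight.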
The research was funded by the Department of Homeland Security Science and Technology Directorate, HSARPA Cyber Security Division, via the Air Force Research Laboratory Information Directorate and SPAWAR.
Google, the internet juggernaut known for its innovative web services, is set to launch a new email service called "Inbox". The new application, coming out of the Googleplex, will have a different layout designed to focus on what really matters.
Google's Android and apps head Sundar Pichai revealed some details about Inbox, stating: "We get too much email, inboxes are time-consuming to manage, and truly important information often gets overlooked in the clutter, particularly when accessing email from smartphones. Inbox will be a single answer to all these concerns."
Though Google is coming to the rescue, whether it will succeed is now the billion-dollar question (not million).
Pichai calls the new "Inbox" email service incredibly innovative. He added that the service builds on an email-categorizing feature introduced in Gmail last year. For example, a user's purchase receipts or bank statements will be neatly grouped together in Inbox.
Pichai added that the current Gmail service lacks automated email categorization, so users miss important information in the clutter, particularly when accessing email from smartphones.
Google probably plans to target Inbox specifically at smartphone users in the future, though its blog post doesn't say so.
Another big Inbox feature will be "Highlights", which surfaces important email such as flight or event information by priority.
Many technologists feel the Highlights feature looks convincing on paper, but they believe Google may face challenges getting the user interface right so that it doesn't become a confusing mess of information snippets grabbed from multiple messages.
The other highlight of Inbox will be "Assist", which lets people set reminders that are triggered at predetermined times. For instance, if a user needs to call a store manager at a particular time, Assist can remind them about the call. If the user cannot act on the reminder right away, they can snooze it for a specific amount of time or until they finish what they are doing. This resembles some apps on Google Play where birthday and anniversary reminders can be prioritized.
So far, Google has picked only 5,000 users for its Inbox testing. Although it is offering the service as a separate product, testers can use their Gmail username, and all their messages, labels, and contacts from Gmail will populate the Inbox service.
Google plans to unveil Inbox in mid-2015.
More details are awaited!
Software-Defined Storage (SDS) is the latest trend in the storage world. At the same time, it is creating confusion among storage buyers, since the concept is still evolving. In practice, the term is fluid, and every company in the data storage field offers its own definition, one that happens to align with its product line.
But in simple terms, SDS is the separation of control-plane operations from the storage array or appliance. Separating control operations from data operations changes how new features and complex functions are added to the storage pool. Control operations reside in virtual machine instances on servers, which means they can be distributed throughout a server farm and created or removed to match the needs and workloads of an enterprise IT environment.
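The control-plane/data-plane split can be illustrated with a toy model. The sketch below is not any vendor's API; the class and method names are invented for illustration. The point is that the control plane decides *where* a volume lives, as ordinary software running anywhere, while data reads and writes then go straight to the chosen node without consulting the controller again.

```python
# Toy sketch of the control-plane / data-plane separation described
# above. All names here are hypothetical, invented for illustration.

class StorageNode:
    """Dumb storage device: only moves data (the data plane)."""
    def __init__(self, name):
        self.name = name
        self.volumes = {}

    def write(self, volume, data):      # data-plane operation
        self.volumes[volume] = data

    def read(self, volume):             # data-plane operation
        return self.volumes[volume]

class ControlPlane:
    """Placement logic running as plain software on any server,
    not inside the storage array itself."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.placement = {}             # volume -> node mapping

    def provision(self, volume):        # control-plane operation
        node = self.nodes[hash(volume) % len(self.nodes)]
        self.placement[volume] = node
        return node

nodes = [StorageNode("n1"), StorageNode("n2")]
ctl = ControlPlane(nodes)
vol_node = ctl.provision("vol-a")       # controller picks a node once
vol_node.write("vol-a", b"payload")     # data path bypasses the controller
print(vol_node.read("vol-a") == b"payload")  # True
```

Because the `ControlPlane` object is just software, more instances of it can be spun up or torn down as workloads change, which is the flexibility the paragraph above describes.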
As most IT professionals see it, software-defined storage is a platform in which the hardware in the storage appliance is driven by intelligent, automated software. This approach reduces the appliances to simple storage devices driven by smart software.
Some storage appliance vendors also use the phrase "Software-Defined Storage" when talking about hardware products to which they have added storage virtualization technology in the storage controllers or storage servers.
Now, for those who are not aware of the benefits of a Software-Defined Storage platform, here are some points:
- SDS tunes the storage system so that the user makes the best possible use of the available storage media, such as rotating disks, flash cache, and flash storage modules. In most cases this happens transparently: applications are unaware of which media supports their work.
- With an SDS platform, organizations can combine storage devices in whatever way makes sense for their workloads, budget, and performance requirements.
- SDS can provide highly available storage without requiring the purchase of expensive, special-purpose storage devices.
- With software intelligence, available storage resources can be assigned and reassigned as an organization's workloads change over time.
- Storage can be shared among Linux, Windows, and UNIX workloads, with each operating system seeing the centralized storage as its own storage resource.
- With SDS, new storage technology can be added and used alongside older technology without requiring application changes.
Now, for those IT pros interested in having a Software-Defined Storage platform in their enterprise IT environment, here's a recommendation.
StoneFly, Inc. is offering enterprise-class solid-state storage through its USS Hyper-Converged appliance to produce the ultimate software-defined virtual computing solution. Its virtualized operating system allows complete hardware utilization and a considerable reduction in power and cooling costs.
The StoneFly Hyper-Converged Unified Storage and Server is a hybrid appliance incorporating both flash and rotating disk as storage media. The flash storage delivers the massive IOPS needed for high-performance environments, while the enterprise hard drives provide low-cost, high-capacity storage in keeping with software-defined principles.
For more details call 510.265.1616 or click on StoneFly USS
Apple has issued a new security warning to its iCloud users amid reports that hacking groups are trying to intrude into its network to steal passwords and other data from people who use the popular service in China.
Apple also stated that, contrary to certain media reports, its servers have not been compromised and its network remains secure and intact. Apple posted these details on its support website on Tuesday, but did not mention China or provide any details on the attacks.
But a popular online resource added that some Asian users, and many Chinese internet users in particular, have begun seeing warnings indicating they were diverted to an unauthorized website when they attempted to sign in to their iCloud accounts. The diversion points to a "man-in-the-middle" attack, which could allow a third party to copy and steal the passwords users enter when they think they are signing in to Apple's service. Hackers could then use the passwords to collect other data from the users' accounts. There is a possibility of a repeat of last month's iCloud celebrity photo leak in this scenario.
According to some sources, this series of network attacks on Apple's iCloud from China is being carried out by an activist group that opposes the sale of the iPhone 6 and iPhone 6 Plus in China. The group wants to show the world that Apple's new iPhones can be hacked, even though the company claims its latest smartphones run software with enhanced encryption features to protect user data.
However, California-based Apple Inc. said in its post that the attacks have not affected users who sign in to iCloud from their iPhones, iPads, or Mac computers running the latest Mac operating system and Apple's Safari browser. Apple also suggested that users verify they are connecting to a legitimate iCloud server by using the security features built into Safari and other browsers such as Firefox and Google's Chrome: the browser will warn users when they connect to a site that lacks a digital certificate verifying its authenticity.
If users get a warning from their browser that they are visiting a website with an invalid certificate, they should not proceed and should exit immediately.
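The check the browser automates here can also be performed programmatically. The sketch below uses Python's standard `ssl` module; a default context refuses any connection whose certificate chain is invalid or whose hostname does not match, which is exactly the failure mode a man-in-the-middle proxy produces.

```python
import socket
import ssl

# A default SSL context performs the same validation a browser does:
# it checks the certificate chain, its expiry, and the hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: invalid certs are rejected
print(ctx.check_hostname)                    # True: name mismatches are rejected

def fetch_peer_cert(host: str, port: int = 443):
    """Connect with full validation; raises ssl.SSLCertVerificationError
    if the server presents an invalid or mismatched certificate."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

A script built this way fails loudly on an interception attempt instead of silently handing over credentials, which is the behavior Apple is recommending users rely on.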
Apple suggested that the fresh round of network attacks appears unrelated to last month's episode in which hackers stole nude photos of several celebrities who had saved them on Apple's cloud storage. As the leak spread, social media buzz damaged iCloud's reputation to some extent, and Apple asked the FBI to investigate.
The FBI and Apple later stated that hackers had obtained the user passwords through so-called "phishing attacks" or by guessing the security questions that allowed access. The company reiterated that its servers were not breached in any way.
Now, to avoid a similar embarrassment, Apple has warned all its iCloud users to access its services only through trusted sources.
StoneFly, the pioneer of iSCSI storage and a subsidiary of Dynamic Network Factory, has come up with a hyper-converged, flash-based appliance: a virtual computing platform capable of producing a software-defined data center.
The StoneFly USS Hyper-Converged Appliance takes a different, much simpler approach to converged architecture by incorporating local direct-attached storage for faster performance and greater flexibility. Each node in a StoneFly cluster includes flash storage to deliver massive IOPS for high performance, as well as enterprise hard disk drives for low-cost, high-capacity storage in keeping with software-defined principles.
The basic principle of the StoneFly USS solution is to radically simplify traditional data center infrastructure. Each volume can be provisioned as iSCSI, Fibre Channel, or NAS. Its hypervisors allow multiple virtual machines (VMs) to run on a single physical host and mediate all I/O operations, including read and write requests to the centralized NAS and SAN storage arrays typically used to provide shared storage for all of the VMs.
StoneFly implements all control logic as a software-based service running on enterprise-class solid-state storage. Virtual storage controllers run on each cluster node, improving scalability and resilience while preventing performance bottlenecks, since storage and control logic are now local to the guest virtual machines.
“We are bringing out the next generation of hyper converged software defined virtual infrastructure into the marketplace at the lowest IOPS per dollar,” said Mo Tahmasebi, CEO and President of StoneFly, Inc.
He added that the StoneFly USS can help replace a data center by migrating existing Windows and Linux physical servers into virtual machines hosted on the appliance.
If needed, data migration will be handled by experts from StoneFly, Inc.
Here's what you can do with a StoneFly USS:
- Your enterprise IT team can quickly spin up new virtual machines on the StoneFly USS.
- The appliance can serve as iSCSI storage for your physical and virtual machines, and as global shared back-end object/block/image storage for OpenStack.
- The StoneFusion operating system, which makes the storage software-defined, enables creation of numerous delta-based snapshots to back up iSCSI volumes. The same patented, award-winning network operating system can then recover mountable read-write snapshot volumes.
- With the StoneFly USS running the StoneFusion operating system, all virtual machines and storage can be synchronously replicated to a second on-premises USS appliance for business continuity.
- Optimize your data with StoneFly’s optional enterprise-level features which include data deduplication, encryption, thin provisioning and more.
- Storage and virtual machines can be scaled out across multiple USS nodes.
- High-performance hardware RAID protects data and increases system uptime.
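The delta-based snapshots mentioned in the list above are a general technique, not unique to any vendor. The sketch below is a generic illustration (it is not StoneFusion's actual implementation, and every name in it is invented): each snapshot records only the blocks changed since the previous one, and reading resolves a block by walking the snapshot chain backwards.

```python
# Generic sketch of delta-based snapshots. Each snapshot stores only
# the blocks written since the previous snapshot; reading a block as
# of some snapshot walks the chain backwards to the newest delta that
# contains it. Names here are illustrative only.

class DeltaSnapshots:
    def __init__(self):
        self.chain = []            # list of {block_no: data} deltas

    def snapshot(self, changed_blocks: dict):
        """Record only the blocks written since the last snapshot."""
        self.chain.append(dict(changed_blocks))

    def read(self, block_no, upto=None):
        """Resolve a block as of snapshot index `upto` (default: latest)."""
        upto = len(self.chain) if upto is None else upto
        for delta in reversed(self.chain[:upto]):
            if block_no in delta:
                return delta[block_no]
        return None                # block never written

vol = DeltaSnapshots()
vol.snapshot({0: b"base", 1: b"data"})   # full initial state
vol.snapshot({1: b"data-v2"})            # delta: only block 1 changed
print(vol.read(0))          # b'base'   : unchanged block, found in snapshot 1
print(vol.read(1))          # b'data-v2': latest delta wins
print(vol.read(1, upto=1))  # b'data'   : a mountable view of the older snapshot
```

Because each snapshot stores only what changed, many point-in-time copies can be kept cheaply, which is why delta snapshots are a common backup primitive.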
So, for anyone who wants a storage appliance that can serve a dual purpose as a server and as scalable storage, the StoneFly USS is a strong choice. This software-defined storage appliance can be used as a hyper-converged infrastructure solution that consolidates all server and storage needs into one easy-to-manage appliance. Its virtualized operating system enables complete hardware utilization and a considerable reduction in power and cooling costs.
Nowadays the trend is hybrid storage, which satisfies high-performance needs with SSDs and basic high-capacity needs with disk.
So, for enterprises that need tiered storage, the StoneFly Unified Storage & Server is a wise choice: it offers automated tiering across SSD and disk and runs on an intelligent, software-centric architecture.
For more details call 510.265.1616 or click on StoneFly USS Unified Storage & Server Hyper Converged Appliance
The data storage market is already flooded with a myriad of hard drive classes and models. IDC recently estimated that at least one new company enters this business every couple of years. Selecting the right hard drive for a NAS solution can therefore be a daunting task.
Although this selection is a pain even for NAS appliance manufacturers, this article explains some of the major differences between the HDD classes on the market and the considerations a NAS solution provider weighs when building a network-attached storage solution. It will also help those interested in building their own NAS for home or business use.
Currently, there are four major hard drive classes on the market, each designed for different applications, workloads, MTBF ratings, and power-on hours.
Desktop Hard Drives- Desktop drives are designed for notebook PCs and desktops, where usually a single drive is installed. Most desktop drives are affordable but seldom come with vibration protection, making them more vulnerable in multi-drive RAID environments, where vibration from other drives and the system chassis can affect both drive health and system performance. In NAS systems, desktop drives suit situations where data is not accessed often: serving a small group of users who occasionally save or access documents, or acting as a backup destination that needs only a few hours of activity each day.
Enterprise Drives- These drives are built with enterprise application needs in mind, using more advanced technology and superior components to provide better performance, POH ratings, MTBF, vibration protection, and error correction. In NAS systems, enterprise drives suit environments that require high data availability and consistent throughput even when moving large amounts of data, making them more appropriate for businesses with numerous employees accessing files simultaneously from databases, servers, or virtual storage systems.
NAS Drives- For users who find desktop drives insufficiently durable and enterprise drives hard to afford, NAS drives can be a good option. They are designed specifically for NAS usage and are offered by only a few companies. They often feature better durability, more balanced performance, and lower power consumption than desktop drives. Note: some NAS drives lack vibration sensors and may not be suitable for multi-bay and rack systems, so check the manufacturer's specifications and intended usage before making a purchase.
Surveillance Drives- Two of the big hard drive makers offer surveillance drives to accommodate the 24/7 demands of long video recordings. These drives are optimized for sequential write operations but offer lower random-access performance. For surveillance users who rely on NAS as backup and storage for NVRs, video management systems, and video servers, NAS solutions fitted with these drives are an excellent match. But some surveillance drives lack vibration sensors and may not be suitable for multi-bay and rack systems.
- Note 1- MTBF stands for mean time between failures, a statistic manufacturers use to state hard drive reliability. Generally, the higher the MTBF, the lower the chance of failure.
- Note 2- POH stands for power-on hours, the length of time in hours that electrical power is applied to a device. For hard drives, two ratings are common: 8/5 means 8 hours per day, 5 days a week, and 24/7 means 24 hours a day, all year round.
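The two notes above can be combined into a useful back-of-envelope calculation: given a drive's MTBF rating and its power-on schedule, you can estimate an annualized failure rate (AFR) under the standard exponential-failure assumption. The MTBF figures below are illustrative round numbers, not any specific vendor's ratings.

```python
import math

# Estimate annualized failure rate (AFR) from MTBF and power-on hours,
# assuming exponentially distributed failures: AFR = 1 - exp(-POH/MTBF).
# The MTBF values used below are illustrative, not vendor specs.

def afr(mtbf_hours: float, poh_per_year: float) -> float:
    return 1.0 - math.exp(-poh_per_year / mtbf_hours)

HOURS_24_7 = 24 * 365          # 8760 power-on hours per year
HOURS_8_5 = 8 * 5 * 52         # 2080 power-on hours per year

# A hypothetical enterprise drive rated for continuous operation:
print(f"enterprise, 24/7: {afr(1_200_000, HOURS_24_7):.2%}")
# A hypothetical desktop drive used only during office hours:
print(f"desktop, 8/5:     {afr(600_000, HOURS_8_5):.2%}")
```

The comparison shows why the schedule matters as much as the rating: a desktop drive pressed into 24/7 NAS duty accumulates roughly four times the power-on hours its 8/5 rating anticipates, with a correspondingly higher annual failure probability.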
Therefore, keeping the four drive classes above in mind, a NAS solution can be smartly designed.
However, for enterprise IT professionals who want a NAS solution for enterprise needs, here's some advice.
It is always better to go for a NAS solution built by a well-known provider such as StoneFly, Inc. If you want high scalability and redundancy in your solution, rely on an established network-attached storage vendor.
An enterprise-class NAS solution such as the StoneFly Twin Scale-Out Appliance (TSO) delivers unprecedented performance, redundancy, and scalability. It can be a cost-effective disk-based solution offering accessibility and data integrity advantages over tape. Performance that scales with capacity enables high-volume throughput, and multi-protocol support maximizes interoperability and flexibility. High-performance NAS access enables easy consolidation of archives across multiple application and compute environments, and RAID protection improves availability, resiliency, and drive-rebuild times in enterprise NAS solutions.
Therefore, for enterprise needs, it is always wise to rely on a purpose-built enterprise NAS system.
Barack Obama, the President of the United States, issued an executive order last Friday to have secure chip-and-PIN technology embedded in government-issued credit and debit cards, part of a broader move aimed at stemming payment data breaches.
Under the new order, government-issued cards that transmit federal benefits such as Social Security will have embedded microchips instead of the usual magnetic strips. Additionally, cardholders will have to enter a PIN when making a transaction, similar to a debit card.
On Friday, the US president revealed that he recently became a victim of credit card fraud when he discovered that his credit had been exhausted by fraudulent transactions.
Explaining further, Mr. Obama recounted a recent experience with plastic money at a New York restaurant. Last month he went out with his family, and when he tried to pay a modest bill with his credit card, he discovered that his credit had been used up: he was a victim of credit card fraud. Obama uses a JPMorgan credit card, and it was this card that was caught up in the fraudulent transactions.
Initially, the restaurant manager thought the card had been rejected for non-payment. But when Obama made clear that he manages his finances responsibly, a detailed probe was launched, and the world learned that the US president's credit card had been caught up in fraudulent transactions.
Obama filed a complaint with the federal Consumer Financial Protection Bureau, and a detailed probe into the issue is underway.
After experiencing firsthand what a credit card fraud victim goes through, Obama moved quickly to reform credit card rules. He ordered a card replacement program set to begin on January 1, 2015 and finish by November 2015.
The order follows a spate of breaches of payment systems at major retailers, including Home Depot, Kmart, and Target, which have affected more than 100 million Americans over the past year, according to White House reports.
Under the new order, a chip-and-PIN system, already standard in Europe, will be introduced in the United States to make it harder for cyber thieves to steal payment card information. The technology requires special card readers to scan the chip, plus mandatory entry of a PIN that only the cardholder knows.
Although it isn't foolproof, chip-and-PIN is considered superior to the current magnetic strip and signature system.
Currently, the order endorsed by President Obama covers only government cards and the readers at federal agency facilities, but the move could inspire more retailers and banks to get on board. Stores including Home Depot, Target, and Walmart plan to have chip-and-PIN readers installed by the middle of next year.
Obama also said on Friday that he would support the US Federal Trade Commission in developing a new resource for victims of data theft at www.identityTheft.gov to streamline the process of reporting thefts to credit bureaus.