Myths about cloud technology

The cloud is just a passing fad.

The roots of cloud computing go back to high-performance computing. In the past, to deploy an application you had to buy and configure your own physical servers. This approach had plenty of drawbacks: for example, if an application needed only one and a half servers to run normally, you still had to pay for two, and the cost of maintaining and servicing the infrastructure was high.

Today there are services that let you configure a virtual server and data storage to fit your own needs. By turning to cloud computing, an organization can shape its infrastructure at its own discretion while spending less money and effort. In other words, this model aims to increase the availability of computing resources and combines on-demand self-service, resource pooling, and the ability to scale quickly.

Cloud technologies come in many forms: some solve specific business tasks, while others provide the infrastructure for solving them. The virtualization technologies used here are the same as in conventional data centers. One could say that cloud computing not only builds on proven virtualization technologies but also addresses some of their shortcomings.

Nowadays it is impossible to imagine a large bank or telecom operator that does not need to store and process huge amounts of data. Today's data centers consist of many thousands of servers that process users' confidential information. To accommodate this amount of equipment, companies are taking unexpected steps, such as building data centers underground or even placing sealed server capsules under water.

The teams developing such solutions are confident that mass adoption of these technologies will shorten the time it takes to deploy new data centers. Moreover, such capsules can be placed close to cities that sit next to bodies of water, which will speed up web services by reducing the latency that causes users so much inconvenience.

Designing and testing such solutions takes a lot of money and time, and it is done precisely to push cloud technology forward. If the cloud were just a passing fad, the world's leading IT companies would not be putting so much effort into developing their data centers.

The hardware is hosted just anywhere.

This myth stems from the general belief that cloud technology is unreliable, usually voiced when management worries about the safety of its data. In reality, if you entrust your files and applications to a reputable provider, the data ends up in modern data centers, which can fairly be called the most resource-intensive buildings of our time. Each of them is a “fortress” packed with servers that sometimes consumes as much energy as a small city.

All these technologies are used to give customers maximum comfort and to keep the risk of data center failures, including those caused by human error, as low as possible. What equipment 1cloud places in its data center racks is a topic in its own right.

A decent hosting provider guarantees 100% equipment availability.

There is nothing supernatural about cloud technology: in essence, the term refers to hosting on shared physical infrastructure. Virtual machines run on physical equipment that is powered by electricity and maintained by people who, it is no sin to admit, sometimes make mistakes. Unforeseen events, such as a hacker attack or a natural disaster, can also take the equipment down and make your data inaccessible.

The biggest obstacle to achieving 100% equipment availability is the very definition of the “availability level”. 100% “uptime” is physically impossible regardless of the hosting platform, but most providers offer a figure of 99.9% or higher.
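
To put those extra nines into perspective, here is a small, purely illustrative Python sketch that converts an advertised availability percentage into the maximum downtime it allows, assuming a 365-day year and a 30-day month:

    def allowed_downtime(availability_percent):
        # Fraction of time the service may be unavailable
        unavailable = 1 - availability_percent / 100
        year_minutes = 365 * 24 * 60
        month_minutes = 30 * 24 * 60
        return unavailable * year_minutes / 60, unavailable * month_minutes

    for sla in (99.0, 99.9, 99.99, 99.999):
        hours_per_year, minutes_per_month = allowed_downtime(sla)
        print(f"{sla}% uptime -> {hours_per_year:.2f} h/year, {minutes_per_month:.2f} min/month")

For example, 99.9% availability still permits roughly 8.8 hours of downtime per year, while 99.99% brings that down to under an hour.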

It should be noted that hosting providers and network equipment manufacturers are doing everything they can to get closer to this ideal, adding more and more nines after the decimal point and investing in infrastructure and maintenance. For example, to avoid unwanted downtime, standby machines in a cluster are configured to take over the load if an active node fails (a simplified sketch of this idea follows below). Redundancy is also applied at the hardware level.
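
As an illustration of that failover idea, here is a minimal, purely hypothetical Python sketch of a monitor that polls a primary node's health endpoint and redirects the load to a standby after several consecutive failures. The endpoint URLs, threshold, and check interval are assumptions, and real platforms delegate this job to load balancers or cluster managers rather than a hand-rolled loop:

    import time
    import urllib.request

    PRIMARY = "http://primary.example.internal/health"   # hypothetical health endpoint
    STANDBY = "http://standby.example.internal/health"   # hypothetical standby node
    FAILURES_BEFORE_SWITCH = 3                            # consecutive failed checks

    def is_healthy(url, timeout=2.0):
        # A node is considered healthy if its health endpoint answers 200 OK
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def monitor():
        failures = 0
        active = PRIMARY
        while True:
            if is_healthy(active):
                failures = 0
            else:
                failures += 1
                if failures >= FAILURES_BEFORE_SWITCH and active == PRIMARY:
                    active = STANDBY      # redirect the load to the standby machine
                    print("Primary unreachable, switching load to standby")
            time.sleep(5)                 # check interval in seconds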

Each controller has an integrated service processor, isolated from the main controller logic, which remains operational even if the rest of the board fails.

So there is no need to hunt for providers with 100% “uptime”. Instead, when choosing the platform on which you are going to host your virtual servers, pay attention to other things: how often failures occur, why they happen (staff errors or force majeure), how they are handled, how many data centers host the equipment, and so on.
