What is hyperscale computing?

Big data and cloud computing are on everyone's lips. Current buzzwords include Industry 4.0, the Internet of Things and autonomous driving. These technologies require networking a very large number of sensors, devices and machines. The result is huge amounts of data that have to be processed in real time and immediately translated into actions on site. These amounts of data, whether industrial or private, in science or in research, are growing at an exponential rate. Every minute, we currently produce around 220,000 Instagram posts and 280,000 tweets as well as 205 million emails.

It is not always possible to estimate which server capacities will be required, and when. In order to react to such rapidly changing requirements, server capacities should be scalable. In this guide to hyperscale computing, you will find out which physical structures are necessary for this and how they are best connected to one another. With this knowledge, you can choose the server solution that best suits your needs.

What is Hyperscale?

The term "hyperscale" can be translated into German as "excessively large scalability". The term is used in the computer world for a certain form of organization of servers.

Hyperscale describes scalable cloud computing systems in which a very large number of servers are connected in a network. The number of servers in use can be increased or reduced as required. Such a network can handle a very large number of accesses, but can also provide lower capacities when the load is low.

Scalability here means that the network adapts to changing performance requirements. Hyperscale servers are small, simple systems that are precisely tailored to a specific purpose. To achieve scalability, they are networked horizontally: additional server capacity is added in order to increase the performance of the overall IT system. Internationally, this is referred to as scale-out.

The opposite approach, vertical scaling (scale-up), describes the expansion of an existing local system. Here, an existing computer system is upgraded with better hardware, i.e. more main memory, a faster CPU, more powerful hard drives or faster graphics cards. In practice, the on-site technology is often upgraded first, up to the limits of what is technically feasible or of acceptable hardware costs, before horizontal scaling begins. At that point, the step to the hyperscaler is usually unavoidable.
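The difference between the two approaches can be illustrated with a short sketch. The following Python snippet is purely illustrative; the Server and Cluster classes and all capacity figures are assumptions for this guide, not part of any real cloud API.

```python
# Illustrative sketch: scale-up (upgrade one machine) vs. scale-out (add machines).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Server:
    cpu_cores: int
    ram_gb: int


@dataclass
class Cluster:
    nodes: List[Server] = field(default_factory=list)

    def scale_up(self, node_index: int, extra_cores: int, extra_ram_gb: int) -> None:
        """Vertical scaling: upgrade the hardware of one existing machine."""
        node = self.nodes[node_index]
        node.cpu_cores += extra_cores
        node.ram_gb += extra_ram_gb

    def scale_out(self, count: int) -> None:
        """Horizontal scaling: add further small, identical servers to the network."""
        self.nodes.extend(Server(cpu_cores=4, ram_gb=16) for _ in range(count))

    def total_cores(self) -> int:
        return sum(node.cpu_cores for node in self.nodes)


cluster = Cluster(nodes=[Server(cpu_cores=4, ram_gb=16)])
cluster.scale_up(0, extra_cores=4, extra_ram_gb=16)  # limited by what one chassis can hold
cluster.scale_out(3)                                 # practically unlimited in a hyperscale setup
print(cluster.total_cores())                         # 20 CPU cores spread across 4 nodes
```

The point of the sketch: scale-up hits the ceiling of a single chassis, while scale-out simply keeps adding nodes, which is exactly the property hyperscale architectures rely on.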

How does Hyperscale work?

In hyperscale computing, simply designed servers are networked horizontally. "Simple" here does not mean "primitive", but "easily put together": only a few basic conventions exist, for example network protocols. This makes communication between the servers easy to manage.

The server that is currently required is addressed via a computer that manages incoming requests and distributes them to free capacity: the so-called load balancer. The system constantly checks how busy the servers in use are with the data volumes to be processed; when demand rises, further servers are switched on, and when demand falls, they are switched off again.
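Put in very simplified terms, the load balancer and the autoscaling logic behave roughly like the following Python sketch. The names, thresholds and the least-loaded dispatch strategy are illustrative assumptions, not the implementation of any specific product.

```python
# Toy model of a load balancer with autoscaling; finished requests are not modelled.
import random


class ServerPool:
    def __init__(self, initial_servers: int = 2, max_load_per_server: int = 100):
        self.servers = [0] * initial_servers   # current load per server
        self.max_load = max_load_per_server

    def least_loaded(self) -> int:
        """The load balancer picks the server with the most free capacity."""
        return min(range(len(self.servers)), key=lambda i: self.servers[i])

    def dispatch(self, request_cost: int) -> int:
        target = self.least_loaded()
        self.servers[target] += request_cost
        self.autoscale()
        return target

    def autoscale(self) -> None:
        """Switch servers on when utilisation is high, off again when it drops."""
        utilisation = sum(self.servers) / (len(self.servers) * self.max_load)
        if utilisation > 0.8:
            self.servers.append(0)                    # scale out: add a server
        elif utilisation < 0.2 and len(self.servers) > 1:
            self.servers.pop(self.least_loaded())     # scale in: remove an idle server


pool = ServerPool()
for _ in range(20):
    pool.dispatch(request_cost=random.randint(10, 40))
print(f"{len(pool.servers)} servers active")
```

In a real hyperscale environment this decision loop runs continuously across thousands of machines, but the principle is the same: distribute requests to free capacity and adjust the number of active servers to the current load.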

Various analyses have shown that companies actively use only 25 to 30 percent of their available data. The unused data includes, for example, backup copies, customer data and recovery data. Without a strict organizational system, this data is difficult to find when needed, and restoring backups can take days. All of this is simplified with hyperscale computing: the complete hardware for computing, storage and networking then offers just one point of contact for data backups, operating systems and other required software. The combination of hardware and supporting facilities makes it possible to expand the computing environment currently required to several thousand servers.

This approach limits excessive copying of data and simplifies the application of corporate policies and security controls, which ultimately reduces staff and administrative costs.

Advantages and Disadvantages of Hyperscale Computing

The described possibility of simply expanding or reducing server capacities has both a bright and a dark side.

The advantages

  • There are practically no limits to scaling, so companies remain flexibly equipped for future data volumes and can adapt to the market quickly and inexpensively.
  • Companies do not have to develop long-term strategies for expanding their own IT.
  • The providers of hyperscale computing guarantee a high level of reliability through redundant solutions.
  • Avoidance of dependencies through the simultaneous use of several providers.
  • Clearly calculable costs and high cost efficiency optimally support the realization of corporate goals.

The disadvantages

  • Data is handed over to an external party and leaves the company's direct control.
  • Newly added storage / server capacities can also be new sources of error.
  • Greater demands are placed on internal management and on the responsibility of employees, although in the long term this can be an advantage.
  • The users become dependent on the price model of the hyperscale provider.
  • Each provider has its own user interface.

To be able to weigh up the advantages and disadvantages, companies can take a hybrid path and store large backups or rarely required data in a cloud. This data then does not take up storage capacity in the in-house data center. Examples include personal data of online-shop users, which must be disclosed or deleted at the user's request, and company data that is subject to retention obligations.
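For the object-storage part of such a hybrid setup, offloading a backup can be as simple as the following sketch, which uses the boto3 library against an S3-compatible endpoint. The endpoint URL, credentials, bucket names and object keys are placeholders, not values of any specific provider.

```python
# Minimal sketch: push a backup to S3-compatible object storage and delete
# personal data on request, assuming boto3 is installed and credentials exist.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # provider-specific endpoint (placeholder)
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a nightly database dump so it does not occupy the in-house data center.
s3.upload_file("backup-2024-01-15.sql.gz", "company-backups", "db/backup-2024-01-15.sql.gz")

# Delete personal data at the user's request (e.g. a GDPR erasure) by removing the object.
s3.delete_object(Bucket="customer-exports", Key="user-4711/export.json")
```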

What is a hyperscaler?

A hyperscaler is the operator of a data center that offers scalable cloud computing services. Amazon was the first company to enter this market in 2006 with Amazon Web Services (AWS), a subsidiary created to make better use of Amazon's own data centers worldwide. AWS now offers a large number of specific services and holds a market share of around 40 percent. The other two big players in this market are Microsoft with Azure (2010) and the Google Cloud Platform (2010). IBM is also considered a major provider of hyperscale computing. These technical possibilities are also offered by authorized partners in data centers in Germany, an important aspect for many companies, especially since the General Data Protection Regulation came into force.

With IONOS Cloud, IONOS presents an alternative to the big US hyperscalers. The focus is on Infrastructure as a Service (IaaS), with offerings such as Compute Engine, Managed Kubernetes, S3 Object Storage and a private cloud.
