HPE Remote Work Solutions

I will tell you a story today: the history of computing technology and the emergence of remote work, from ancient times to the present day.

IT development

The main thing that can be learned from the history of IT is…


Of course, that IT develops in a spiral. Solutions and concepts discarded decades ago acquire new meaning and successfully start working again under new conditions, with new tasks and new capacities. In this respect, IT is no different from any other area of human knowledge, or from the history of the Earth as a whole.

A long time ago when computers were big

“I think there is a market for about five computers in the world.” - Thomas Watson, CEO of IBM, 1943.

Early computing technology was big. No, that's the wrong word: early computing machinery was monstrous, cyclopean. A complete computer occupied an area comparable to a gym and cost absolutely unreal money. Here is an example of its components: a ferrite-core RAM module (1964).


This module measures 11 cm × 11 cm and holds 512 bytes (4096 bits). A cabinet completely stuffed with these modules would barely match the capacity of an ancient 3.5″ floppy disk (1.44 MB ≈ 2950 modules), while consuming very noticeable electrical power and heating up like a steam locomotive.
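
A quick sanity check of these figures, assuming the common "1.44 MB" convention of 1.44 × 1024 × 1024 bytes:

```python
# Sanity check of the ferrite-core module figures quoted above.
bits_per_module = 4096
bytes_per_module = bits_per_module // 8          # 512 bytes per module

# "1.44 MB" floppy in the common 1.44 * 1024 * 1024 bytes convention
floppy_bytes = int(1.44 * 1024 * 1024)

modules_per_floppy = floppy_bytes / bytes_per_module
print(bytes_per_module)             # 512
print(round(modules_per_floppy))    # about 2950 modules per floppy
```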

It is with this huge physical size that the English term for fixing program code, "debugging", is associated. One of the first programmers in history, Grace Hopper (yes, a woman), a US Navy officer, made a famous entry in the operations log in 1947 after investigating a problem with a program.


Since a moth is a bug (an insect), the staff reported all subsequent problems, and the work of resolving them, to their superiors as "debugging" - literally, removing bugs. The name "bug" thus stuck to program crashes and errors in code, and the process of removing them became known as debugging.

As electronics, and semiconductor electronics in particular, developed, the physical size of machines began to shrink while computing power, on the contrary, grew. But even then it was impossible to give every person their own computer.

“There is no reason anyone would want to keep a computer in their home” - Ken Olsen, founder of DEC, 1977.

In the 70s, the term minicomputer appeared. I remember that when I first read this term many years ago, I imagined something like a netbook, almost a handheld. I couldn't have been further from the truth.


"Mini" is only in comparison with the huge machine rooms; it was still several cabinets of equipment costing hundreds of thousands or millions of dollars. However, computing power had already grown so much that it was not always 100% loaded, and at the same time computers became accessible to university students and teachers.

And then HE came!


Few people think about the Latin roots of English words, yet it is Latin that gave us remote access as we know it. Terminus (Latin): end, border, goal. The purpose of the T-800 Terminator was to end the life of John Connor. We also call the transport stations where passengers board and disembark, and cargo is loaded and unloaded, terminals: the end points of routes.

Accordingly, the concept of terminal access was born. Below you can see the most famous terminal in the world, still alive in our hearts.


The DEC VT100 is called a terminal because it terminates the data line. It has virtually zero computing power; its only tasks are to display information received from the big machine and to send keyboard input back to it. And although VT100s are physically dead, we still use them to the fullest.


Our days

I would start counting "our days" from the early 80s, when the first processors with any significant computing power became available to a wide range of people. Traditionally, the Intel 8088 (x86 family) is considered the defining processor of the era, as the founder of the winning architecture. What is the fundamental difference from the concept of the 70s?

For the first time, there was a tendency to move information processing from the center to the periphery. Not every task requires the insane (compared to a weak x86) power of a mainframe or even a minicomputer. Intel did not stand still: in the 90s it released the Pentium family, which became the first truly mass-market home processor in Russia. These processors were already capable of a lot: not just writing letters, but multimedia and work with small databases. In fact, a small business needed no servers at all; everything could be done at the periphery, on client machines. Every year processors grew more powerful, and the gap in processing power between servers and personal computers kept shrinking, often remaining only in redundant power supplies, hot-swap support, and special rack-mount cases.

If we compare modern Intel client processors, "ridiculous" in the eyes of 90s administrators of heavy servers, with the supercomputers of the past, things get a little uncomfortable.

Let's take a look at an old-timer, almost exactly my age: the Cray X-MP/24, 1984.


This machine was one of the top supercomputers of 1984, with two 105 MHz processors and a peak performance of 400 MFlops (millions of floating-point operations per second). The specific machine in the photo sat in the NSA cryptography laboratory and broke ciphers. Converted to 2020 dollars, its 15 million 1984 dollars become 37.4 million, or $93,500 per MFlops.


The machine on which I am writing these lines has a Core i5-7400 from 2017. These are far from new, and even in their release year they were the entry-level quad-core among mid-range desktop processors. Four cores at 3.0 GHz base frequency (3.5 GHz with Turbo Boost) deliver from 19 to 47 GFlops depending on the benchmark, at a price of 16 thousand rubles per processor. A complete machine can be priced at about $750 (at prices and exchange rates of March 1, 2020).

The bottom line: a quite average desktop processor of our days outperforms a top-10 supercomputer of the quite foreseeable past by a factor of 50-120, and the fall in the unit cost of an MFlops is absolutely monstrous: from $93,500 down to a few cents, a drop of millions of times.
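
The cost arithmetic above, spelled out (taking 30 GFlops as the midpoint of the 19-47 GFlops range is my own assumption):

```python
# The Cray-vs-desktop cost arithmetic from the text, spelled out.
cray_cost_2020_usd = 37_400_000
cray_mflops = 400
cray_usd_per_mflops = cray_cost_2020_usd / cray_mflops       # 93,500

desktop_cost_usd = 750
desktop_mflops = 30_000   # assumed midpoint of the 19-47 GFlops range
desktop_usd_per_mflops = desktop_cost_usd / desktop_mflops   # 0.025

print(cray_usd_per_mflops)                            # 93500.0
print(cray_usd_per_mflops / desktop_usd_per_mflops)   # ~3.7 million times cheaper
```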

Why we still need servers and centralized computing, with such capacities at the periphery, is absolutely incomprehensible!

Reverse jump - the spiral has made a turn

Diskless Stations

The first signal that the move of computing to the periphery would not be final was the emergence of diskless workstation technology. With workstations spread across an enterprise, and especially in dirty industrial premises, managing and supporting them becomes a very tough problem.


The concept of "corridor time" appeared: the share of time a technical support employee spends in corridors, walking to an employee with a problem. This time is paid but completely unproductive. Hard drive failures, especially in dusty rooms, also played no small role. So: remove the disk from the workstation and do everything over the network, including booting. Besides an address, the network adapter receives extra information from the DHCP server, namely the address of a TFTP server (a simplified file service) and the name of a boot image; it loads the image into RAM and starts the machine.


Besides fewer breakdowns and less corridor time, you no longer have to debug a machine on the spot: simply bring in a new one and take the old one for diagnostics at a properly equipped workbench. But that's not all!

A diskless station is also much safer: if someone breaks into the office and carries off all the computers, it is merely a loss of equipment. No data is stored on diskless stations.

Let's remember this moment: information security begins to play an ever greater role after the "carefree childhood" of information technology. And three scary and important letters invade IT more and more: GRC (Governance, Risk, Compliance).


Terminal servers

The spread of ever more powerful personal computers at the periphery far outstripped the development of public networks. Classic client-server applications of the 90s and early 00s did not work well over a thin channel if the data exchange was at all significant. It was especially hard for remote offices connecting over a modem and a telephone line, which, moreover, periodically dropped or broke off. And…

The spiral made another turn, and we landed in terminal mode again, with the concept of terminal servers.


In effect, we returned to the 70s with their zero clients and centralized computing power. It quickly became clear that, beyond the purely economic rationale of channels, terminal access opens huge opportunities for organizing secure access from outside: work from home for employees, or extremely limited and controlled access for contractors from untrusted networks and untrusted or unmanaged devices.

However, terminal servers, for all their advantages and progressiveness, had a number of drawbacks: low flexibility, the noisy-neighbor problem, strictly server editions of Windows, and so on.

Birth of proto VDI


Meanwhile, by the early-to-mid 00s, industrial virtualization of the x86 platform was already in full swing. And someone voiced an idea that was simply in the air: instead of consolidating all clients on terminal server farms, let's give everyone a personal VM with client Windows, and even administrator access.

Rejection of fat clients

In parallel with session and OS virtualization, another approach developed: lightening the client side at the application level.

The logic was quite simple: not everyone yet had a personal laptop, not everyone had equal Internet access, and many could connect only from Internet cafes with, to put it mildly, very limited rights. Often all you could launch was a browser. The browser became an indispensable attribute of any OS, and the Internet firmly entered our lives.

In other words, in parallel there was a trend of moving logic from the client back to the center in the form of web applications, which require only the simplest client: an Internet connection and a browser.
And so we ended up not merely in the same place we started, with zero clients and central servers; we arrived there by several independent paths.


Virtual Desktop Infrastructure

Broker

In 2007, VMware, the leader of the industrial virtualization market, released the first version of its VDM (Virtual Desktop Manager) product, which effectively opened the nascent virtual desktop market. The answer from Citrix, the leader in terminal servers, did not take long: following the acquisition of XenSource, XenDesktop appeared in 2008. Of course there were other vendors with their own offerings, but let's not dig too deep into history at the expense of the concept.

Still, the concept endures. The key component of VDI is the connection broker: the heart of the virtual desktop infrastructure.

The broker is responsible for the most important VDI work processes:

  • Determines the resources available to the connected client (machines/sessions);
  • Balances, if necessary, clients across machine/session pools;
  • Forwards the client to the selected resource.
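
These three responsibilities can be sketched in a toy connection broker (all names and structure here are invented for illustration, not any vendor's actual API; real brokers also handle authentication, entitlements and session reuse):

```python
# Toy connection broker illustrating the three responsibilities above.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    resources: dict = field(default_factory=dict)  # resource -> active sessions

class Broker:
    def __init__(self):
        self.pools = {}         # pool name -> Pool
        self.entitlements = {}  # user -> list of pool names

    def available_pools(self, user):
        # 1. Determine the resources available to the connected client.
        return [self.pools[n] for n in self.entitlements.get(user, [])]

    def pick_resource(self, pool):
        # 2. Balance: choose the least-loaded machine/session host.
        return min(pool.resources, key=pool.resources.get)

    def connect(self, user):
        # 3. Forward the client to the selected resource.
        pools = self.available_pools(user)
        if not pools:
            raise PermissionError(f"{user} is not entitled to any pool")
        target = self.pick_resource(pools[0])
        pools[0].resources[target] += 1
        return target

broker = Broker()
broker.pools["office"] = Pool("office", {"vm-01": 3, "vm-02": 1})
broker.entitlements["alice"] = ["office"]
print(broker.connect("alice"))  # vm-02: the least-loaded machine
```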

Today, the client (terminal) for VDI can be virtually anything with a screen: a laptop, smartphone, tablet, kiosk, thin or zero client. The other side, the one actually carrying the productive load, can be a terminal server session, a physical machine, or a virtual machine. Mature VDI products are tightly integrated with the virtual infrastructure and manage it automatically, deploying virtual machines or, conversely, deleting those no longer needed.

A bit of an aside, but for some customers a very important VDI capability is support for hardware-accelerated 3D graphics for the work of engineers and designers.

Protocol

The second extremely important part of a mature VDI solution is the protocol for accessing virtual resources. If we are talking about working inside a corporate LAN with an excellent, reliable 1 Gbps network to each workplace and 1 ms latency, you can take virtually any protocol and not think about it at all.

You have to think when the connection goes through an uncontrolled network whose quality can be absolutely anything, down to speeds of tens of kilobits and unpredictable latency. And that is precisely the scenario of real remote work: from country houses, from home, from airports and cafes.

Terminal Servers vs Client VMs

When VDI appeared, it seemed time to say goodbye to terminal servers. Why keep them if everyone can have a personal VM?

However, from a purely economic point of view, it turned out that for typical mass workplaces, identical to the point of nausea, nothing beats terminal servers in price per session. For all its merits, the "1 user = 1 VM" approach spends far more resources on virtual hardware and a full OS per user, which ruins the economics of typical workplaces.

For top managers, non-standard and heavily loaded workplaces, and users who need elevated rights (up to administrator), a dedicated VM per user has the advantage. Within that VM you can allocate resources individually, grant rights at any level, and rebalance VMs between virtualization hosts under high load.

VDI and economics

For years I have been hearing the same question: how is VDI cheaper than simply handing everyone a laptop? And for years I have had to answer the same thing: for ordinary office workers, VDI is not cheaper, if you count only the net cost of providing equipment. Like it or not, laptops keep getting cheaper, while servers, storage and system software cost quite a lot. If it's time to refresh your fleet and you hope to save money with VDI: no, don't.

I mentioned the scary three letters GRC above, and VDI is exactly about GRC. It is about risk management, about the security and convenience of controlled access to data. All of that usually costs a lot of money to implement on a pile of heterogeneous equipment. With VDI, control is simplified, security is improved, and your hair becomes soft and silky.


Remote and cloud management

iLO

HPE is far from new to remote management of server infrastructure; no joke, in March the legendary iLO (Integrated Lights-Out) turned 18. Remembering my admin days in the 00s, I personally couldn't get enough of it. Initial rack mounting and cabling was all that had to be done in the noisy, cold data center; everything else, including installing the OS, could be done from my desk, with two monitors and a mug of hot coffee. And that was 13 years ago!


Today, HPE servers are not without reason the undisputed long-term quality standard, and far from the last role in this is played by the gold standard of remote management: iLO.


Separately, I would like to note HPE's actions in helping humanity cope with the coronavirus. HPE announced that until the end of 2020 (at least) the iLO Advanced license is available to everyone for free.

InfoSight

If you have more than 10 servers in your infrastructure and the administrator never gets bored, the cloud-based, AI-driven HPE InfoSight system will be a great addition to standard monitoring tools. It not only monitors status and draws graphs, but also independently recommends further actions based on the current situation and trends.


Be smart, be like Otkritie Bank: try InfoSight!

OneView

Last in line, but not least, I want to note HPE OneView: a whole product portfolio with huge capabilities for monitoring and managing the entire infrastructure. And all this without getting up from your desk, which in the current situation in the country you probably cannot leave anyway.


Storage systems are no slouch either!

Of course, all storage systems can be managed and monitored remotely; that was true many years ago. So today I want to talk about something else: metro clusters.

Metro clusters are not a market novelty at all, but that is exactly why they are still not very popular: inertia of thinking and first impressions take their toll. Yes, 10 years ago they already existed, and back then they cost a fortune. The years since the first metro clusters have changed the industry and made the technology accessible to a broad audience.

I remember projects where storage was deliberately split: a metro cluster only for supercritical services, and synchronous replication (many times cheaper) for everything else.

In fact, in 2020, a metro cluster costs you almost nothing, provided you can organize two sites and the channels between them. And synchronous replication requires exactly the same channels as a metro cluster. Software licensing has long been bundled: synchronous replication comes together with the metro cluster, and the only thing still keeping one-way replication alive is the need for a stretched L2 network. Even that is fading, as L2-over-L3 is already sweeping the country.


So what, from the standpoint of remote work, is the fundamental difference between synchronous replication and a metro cluster?

Everything is very simple: a metro cluster works by itself - automatically, always, almost instantly.

What does failover to the replica look like with synchronous replication, on an infrastructure of at least a few hundred VMs?

  1. An alarm comes in.
  2. The duty shift analyzes the situation: you can safely budget 10 to 30 minutes just for receiving the signal and making a decision.
  3. If the duty engineers are not authorized to start the failover themselves, add another 30 minutes to reach the authorized person and formally confirm the failover.
  4. Press the Big Red Button.
  5. 10-15 minutes for timeouts, remounting volumes and re-registering VMs.
  6. 30 minutes to change IP addressing, and that is an optimistic estimate.
  7. And finally, starting the VMs and bringing up productive services.

The total RTO (time to restore business processes) can safely be estimated at 4 hours.

Compare with the situation at the metro cluster.

  1. The storage system realizes that the other leg of the metro cluster is lost: 15-30 seconds.
  2. The virtualization hosts realize that the first data center is lost: 15-30 seconds (in parallel with step 1).
  3. Automatic restart of a third to a half of the VMs in the second data center: 10-15 minutes until services are up.
  4. Around this time, the duty shift figures out what happened.

Total: RTO = 0 for some services, 10-15 minutes overall.
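
The two timelines above can be reduced to a back-of-the-envelope sum (step estimates are taken from the lists above; the 60-minute service start-up figure is my own rough assumption):

```python
# Back-of-the-envelope comparison of the two failover timelines (minutes).
sync_replication = {
    "receive alarm, analyze": 30,
    "escalate for formal approval": 30,
    "press the Big Red Button": 0,
    "remount volumes, re-register VMs": 15,
    "change IP addressing": 30,
    "boot VMs and services": 60,    # rough assumption for a few hundred VMs
}
metro_cluster = {
    "storage detects the lost leg": 0.5,
    "hosts detect the lost site": 0.0,   # runs in parallel with storage
    "HA auto-restarts affected VMs": 15,
}
print(sum(sync_replication.values()))   # 165 min before real-world buffers
print(sum(metro_cluster.values()))      # 15.5 min
```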

Why restart only a third to a half of the VMs? Here is why:

  1. If you do everything wisely and enable automatic VM balancing, on average only half of the VMs run in either data center. After all, the whole point of a metro cluster is to minimize downtime, so it is in your interest to minimize the number of VMs at risk.
  2. Some services can be clustered at the application level, spread across different VMs. Such paired VMs are pinned, via affinity rules, to different data centers, so that in an accident the service does not have to wait for any VM to restart at all.

With a well-built infrastructure on stretched metro clusters, business users keep working with minimal interruption from anywhere, even after a data-center-level accident. In the worst case, the delay is one cup of coffee.

And, of course, metro clusters work great both on the HPE 3PAR, now sailing off to Valinor, and on the brand-new Primera!


Remote Workplace Infrastructure

Terminal servers

Terminal servers need nothing new: for many years HPE has been supplying some of the best servers in the world for them. Timeless classics: the DL360 (1U) and DL380 (2U), or, for AMD fans, the DL385. Of course there are also blade servers, both the classic c7000 and the new composable Synergy platform.


For every taste, for every color, maximum sessions per server!

“Classic” VDI + HPE SimpliVity

Here, by "classic VDI" I mean the "1 user = 1 client Windows VM" concept. And of course, no workload is nearer and dearer to hyperconverged systems than VDI, especially with deduplication and compression.


HPE can offer both its own SimpliVity hyperconverged platform and servers or certified nodes for partner solutions, such as vSAN Ready Nodes for building VDI on VMware vSAN.

Let's dwell a little on HPE's own solution, SimpliVity. At the forefront, as the name gently hints, is simplicity. Simple to deploy, simple to manage, simple to scale.

Hyperconverged systems are one of the hottest topics in IT today, with around 40 vendors of various sizes. According to Gartner's Magic Quadrant, HPE is in the global Top 5 and sits among the Leaders: it both understands where the industry is heading and knows how to turn that understanding into iron.

Architecturally, SimpliVity is a classic hyperconverged system with controller virtual machines, which means it can support different hypervisors, unlike systems built into the hypervisor. Indeed, as of April 2020, VMware vSphere and Microsoft Hyper-V are supported, and KVM support has been announced. A key feature of SimpliVity from its first day on the market has been hardware acceleration of compression and deduplication with a dedicated accelerator card.


Note that compression and deduplication are global and always on: they are not an optional feature but the architecture of the solution.


HPE is of course being somewhat sly when it claims 100:1 efficiency, counting in its own special way, but space efficiency really is very high; it's just that 100:1 is a painfully beautiful number. Let's see how SimpliVity is technically built to show such numbers.

Snapshots. Snapshots are implemented 100% correctly as RoW (Redirect-on-Write), so they are taken instantly and carry no performance penalty. How does that differ from certain other systems, and why do we need penalty-free local snapshots? Very simple: to cut the RPO from 24 hours (a typical backup RPO) down to tens or even units of minutes.
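
The RoW idea can be sketched in a few lines (a toy model of the general technique, not SimpliVity's actual implementation): a snapshot only freezes the current block map, while every new write goes to a fresh block.

```python
# Toy redirect-on-write volume: taking a snapshot copies only the block
# map (cheap), and writes never overwrite blocks a snapshot points to.
class RoWVolume:
    def __init__(self):
        self.blocks = {}      # physical block id -> data
        self.map = {}         # logical block -> physical block id
        self.next_id = 0
        self.snapshots = {}   # snapshot name -> frozen block map

    def write(self, lba, data):
        self.blocks[self.next_id] = data   # redirect: always a new block
        self.map[lba] = self.next_id
        self.next_id += 1

    def snapshot(self, name):
        self.snapshots[name] = dict(self.map)  # freeze the map, copy no data

    def read(self, lba, snap=None):
        m = self.snapshots[snap] if snap else self.map
        return self.blocks[m[lba]]

vol = RoWVolume()
vol.write(0, "v1")
vol.snapshot("before")
vol.write(0, "v2")                  # does not touch the snapshot's block
print(vol.read(0))                  # v2
print(vol.read(0, snap="before"))   # v1
```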

Backup. A snapshot differs from a backup only in how the virtual machine management system perceives it. If deleting the machine deletes everything else, it was a snapshot; if something remains, it was a backup. Thus any snapshot can be considered a full backup, as long as it is marked as such in the system and not deleted.

Of course, many will object: what kind of backup is it if it is stored on the same system? There is a very simple counter-question: tell me, do you have a formal threat model that sets the rules for storing backups? This is a perfectly honest backup against deleting a file inside the VM, or against deleting the VM itself. If a backup must be stored on a fully standalone system, there is a choice: replicate the snapshot to a second SimpliVity cluster or to HPE StoreOnce.


And here it turns out that such an architecture is simply perfect for any kind of VDI. After all, VDI is hundreds or even thousands of extremely similar machines with the same OS and the same applications. Global deduplication will chew it all up and compress it not even 100:1, but far better. Deploy 1000 VMs from a single template? Not a problem at all: registering those machines in vCenter will take longer than cloning them.

Especially for users with particular performance requirements, and for those who need 3D accelerators, the SimpliVity G line was created.


This series has no hardware deduplication accelerator, so the number of disks per node is reduced to what the controller can handle in software. This frees PCIe slots for any other accelerators. The maximum memory per node has also been doubled, to 3 TB, for the most demanding workloads.


SimpliVity is ideal for geographically distributed VDI infrastructures with data replication to a central data center.


Such a VDI architecture (and not only VDI, by the way) is especially interesting in Russian realities: huge distances (and therefore latency) and far-from-ideal channels. Regional hubs are created (or even just 1-2 SimpliVity nodes in a fully remote office) where local users connect over fast channels; full control and management from the center is retained; and only the small volume of genuinely valuable data, not junk, is replicated to the central site.

Of course, SimpliVity fully integrates with OneView and InfoSight.

Thin and Zero Clients

Thin clients are specialized devices used exclusively as terminals. Since the client carries almost no load beyond maintaining the channel and decoding video, there is almost always a passively cooled processor, a small boot disk just large enough to start a special embedded OS, and that's about it. There is practically nothing in it to break, and nothing worth stealing: the cost is low and no data is stored on it.

There is a special category of thin clients, so-called zero clients. Their main difference is the absence of even a general-purpose embedded OS: they run straight from firmware on a chip. They often carry dedicated hardware accelerators for decoding the video streams of terminal protocols such as PCoIP or HDX.

Despite the split of the big Hewlett-Packard into separate HPE and HP, one cannot fail to mention the thin clients manufactured by HP.

The choice is wide, for every taste and need - up to multi-monitor workstations with hardware accelerated video streaming.


HPE service for your remote work

And last on the list, but not least, I want to mention HPE services. Listing all HPE service levels and features would take too long, but at least one offering is critical for remote work: a field engineer from HPE or an authorized service center. You keep working remotely from your beloved dacha, listening to the bumblebees, while the HPE bee flies to the data center and replaces the disks or the failed power supply in your servers.

HPE CallHome

In today's conditions of movement restrictions, the Call Home function is more relevant than ever. Any HPE system with this feature can report a hardware or software failure to the HPE support center on its own. Quite likely, a replacement part and/or a field engineer will arrive before you even notice any problems with productive services.

Personally, I highly recommend this feature.

Source: habr.com
