We assemble a server for graphics and CAD/CAM applications for remote work over RDP, based on a Cisco UCS C220 M3 v2

Almost every company now has a department or group working with CAD/CAM or other heavy design software. These users share serious hardware requirements: plenty of memory (64GB or more), a professional video card, a fast SSD, and the whole thing has to be reliable. Companies often buy several powerful PCs (or graphics stations) for some users in such departments and less powerful machines for the rest, depending on needs and the company's budget. That is the standard approach to the problem, and it works fine. But in a pandemic, with remote work - and in general - this approach is not optimal: it is highly redundant and extremely inconvenient to administer and manage. Why is that so, and what solution would ideally satisfy many companies' need for graphics stations? Read on under the cut, where I describe how to assemble a working and inexpensive solution that kills several birds with one stone, and what small nuances need to be taken into account to implement it successfully.

In December last year, a company opened a new office for a small design bureau, and my task was to organize its entire computer infrastructure, given that the company already had laptops for the users and a couple of servers. The laptops were a couple of years old, mostly gaming configurations with 8-16GB of RAM, and basically could not handle the load of CAD/CAM applications. The users need to be mobile, since they often work away from the office; in the office, each laptop gets an extra monitor (that is how they work with graphics). With these inputs, the optimal - though, for me, risky - solution was to build a powerful terminal server with a powerful professional video card and an NVMe SSD.

Benefits of a graphical terminal server and RDP

  • On most powerful PCs or graphics stations, hardware resources sit more than two-thirds idle most of the time, running at 35-100% of capacity only in short bursts. Overall utilization is around 5-20 percent.
  • Moreover, the hardware is often far from the most expensive component: basic graphics or CAD/CAM licenses often start at $5,000, and versions with advanced options at $10,000 or more. Usually these programs run in an RDP session without problems, but sometimes you have to buy an extra RDP option for the license, or dig through forums for what to put in the configs or the registry to make the software run in an RDP session. Checking that the software you need works over RDP should be done at the very start, and it is easy: log in via RDP, and if the program starts and all its basic functions work, there will most likely be no problems with licenses. If it throws an error, then before implementing a graphical terminal server project, look for a solution you can live with. (See the sketch after this list for how software can detect an RDP session.)
  • Another big plus is maintaining a single configuration with identical settings, components and templates, which is often hard to achieve across individual PCs. Management, administration and software updates also go off without a hitch.
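
As a quick aside (my illustration, not part of the original list): one reason software behaves differently over RDP is that Windows lets a program detect that it is running in a remote session, and some licensed applications use exactly this check. A minimal Python sketch of that detection, assuming a Windows host:

    # Minimal sketch: detect whether the current process is running inside
    # an RDP session - the SM_REMOTESESSION metric is the kind of check
    # some licensed software uses to change behaviour or refuse to start.
    # Windows-only; uses nothing beyond the standard library.
    import ctypes
    import os

    SM_REMOTESESSION = 0x1000  # GetSystemMetrics index for "remote session"

    is_remote = bool(ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION))
    print("Running over RDP:", is_remote)
    # The SESSIONNAME variable gives the same hint: "Console" locally,
    # something like "RDP-Tcp#0" in a remote session.
    print("SESSIONNAME:", os.environ.get("SESSIONNAME"))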

In general, there are many pluses - let's see how this almost-ideal solution performs in practice.

We assemble a server based on a Cisco UCS C220 M3 v2

Initially the plan was to buy a newer and more powerful server with 256GB of DDR3 ECC memory and 10Gb Ethernet, but I was told we needed to save a little and fit a $1600 budget for the terminal server. Well, okay - the client is always right (if a bit tight-fisted), so for that amount we pick:

Used Cisco UCS C220 M3 v2 (2 × six-core 2.10GHz E5-2620 v2), 128GB DDR3 ECC - $625
3.5" 3TB SAS 7200rpm drives - 2 × $65 = $130
Samsung 970 PRO M.2 2280 512GB SSD, PCI-E 3.0 x4 - $200
Quadro P2200 5GB video card - $470
Ewell PCI-E 3.0 to M.2 SSD adapter (EW239) - $10
Total per server: $1435

The plan also included a 1TB SSD and a 10Gb Ethernet adapter ($40), but it turned out there was no UPS for the company's two servers, so I had to trim things a little and buy a PowerWalker VI 2200 RLE UPS for $350.

Why a server and not a powerful PC? Justification of the chosen configuration.

Many short-sighted admins (I have run into this many times) buy a powerful, often gaming, PC, stick 2-4 disks in it, create a RAID 1, proudly call it a server and put it in a corner of the office. The resulting contraption is, naturally, a "hodgepodge" of dubious quality. So let me spell out in detail why this configuration was chosen for this budget.

  1. Reliability!!! All server components are designed and tested to run for 5-10 years or more, while gaming hardware lasts 3-5 years at most, and for some components the failure rate within the warranty period exceeds 5%. Our server comes from the super-reliable Cisco brand, so no particular problems are expected, and their probability is an order of magnitude lower than with a desktop PC
  2. Critical components such as the power supplies are duplicated; ideally you feed them from two different power lines, and if one unit fails, the server keeps running
  3. ECC memory - few people now remember that ECC memory was originally introduced to correct single-bit errors, which arise mainly from cosmic rays; with 128GB of memory, such an error can occur several times a year. On a desktop PC the result is a program crash or a freeze, which is not critical, but on a server the cost of an error is sometimes very high (for example, a bad write to a database); in our case a serious glitch means a reboot, which can cost several people a day's work
  4. Scalability - a company's need for resources often grows severalfold within a couple of years, and on a server it is easy to add disks and memory or swap the processors (in our case, the six-core E5-2620 for ten-core Xeon E5-2690 v2 chips); a regular PC offers almost no such scalability
  5. 1U server format - servers belong in server rooms and compact racks, not radiating up to 1kW of heat and making noise in a corner of the office! Conveniently, the company's new office had a little rack space (3-6 units) set aside in the server room, and one unit suited our server perfectly.
  6. Remote management and remote console - without these, normal maintenance of a server meant for remote work is extremely difficult!
  7. 128GB of RAM - the terms of reference said 8-10 users, but in reality there will be 5-6 simultaneous sessions. Given the typical maximum memory consumption in this company - 2 users at 30-40GB each (about 70GB) plus 4 users at 3-15GB each (about 36GB), plus up to 10GB for the operating system - we get 116GB, leaving about 10% in reserve (and that is for the rare cases of maximum use). If it ever proves insufficient, the server can be expanded to 256GB at any time
  8. Quadro P2200 5GB video card - in this company, per-user video memory consumption in a remote session ranged from 0.3GB to 1.5GB, so 5GB should be enough. The baseline data came from a similar but less powerful setup (i5/64GB/Quadro P620 2GB), which was enough for 3-4 users
  9. Samsung 970 PRO M.2 2280 512GB SSD, PCI-E 3.0 x4 - for 8-10 users working simultaneously, you need NVMe speed and Samsung SSD reliability. Functionally, this disk holds the OS and the applications
  10. 2 × 3TB SAS - combined into a RAID 1 and used for large or rarely accessed local user data, as well as for system backups and critical local data from the NVMe disk

The configuration is approved and purchased, and soon the moment of truth will come!

Assembly, configuration, installation and problem solving.

From the very beginning I was not sure this was a 100% workable solution: at any stage - assembly, installation, launch, correct operation of the applications - one could get stuck with no way forward. So I arranged for the server to be returnable within a couple of days, and the other components could be reused in an alternative solution.

Problem 1 (a somewhat far-fetched one): a professional, full-length video card, plus a couple of millimeters - what if it does not fit? It draws 75W - what if the PCIe slot cannot deliver that? And how do you remove those 75W of heat? But it fit and started up, and heat removal turned out fine (especially with the server fans running above average speed). True, while seating the card, making sure nothing shorted, I bent something in the server by a millimeter (I no longer remember what). And for better heat removal through the lid, after the final setup I peeled off the instruction film that covered the entire lid and could have impaired heat dissipation.

Problem 2, the real test: the NVMe disk might not be visible through the adapter, or the system might not install onto it, or might install but refuse to boot. Oddly enough, Windows installed onto the NVMe disk just fine but could not boot from it, which is logical: the BIOS (even after an update) refused to recognize NVMe as a boot device. I did not want a crutch, but I had to use one. Our favorite Habr came to the rescue with a post about booting from an NVMe disk on legacy systems: I downloaded Boot Disk Utility (BDUtility.exe), created a flash drive with Clover Boot Manager following the instructions from the post, put the flash drive first in the BIOS boot order, and from then on the bootloader loads from the flash drive; Clover sees our NVMe disk and boots from it automatically within a couple of seconds! I could have tinkered with installing Clover onto our 3TB RAID volume, but it was already Saturday evening with another day's work ahead, because by Monday I had to either hand over the server or give it back. I left the bootable flash drive inside the server - it had a spare internal USB port anyway.

Problem 3, a near failure. I installed Windows Server 2019 Standard plus Remote Desktop Services, installed the main application this whole project was started for, and everything worked wonderfully - it literally flew.

Amazing! I go home, connect via RDP, the application starts - but it lags badly, and the program shows the message "soft mode is enabled". What?! I hunt for newer, extra-professional drivers for the video card - zero result; older drivers, the ones for the P1000, also give nothing. All the while my inner voice keeps mocking me: "I told you - don't experiment with the brand-new card, take the P1000." By then it was deep into the night, and with a heavy heart I went to bed. On Sunday I went to the office and put a Quadro P620 into the server - it also did not work via RDP. Microsoft, what is going on? I searched the forums for "Server 2019 and RDP" - and found the answer almost immediately.

It turns out that since most users now have high-resolution monitors, and the integrated graphics adapter in most servers does not support those resolutions, hardware acceleration for RDP sessions is disabled by default (sessions fall back to a software renderer) and has to be enabled through group policy. I quote the instructions for enabling it:

  • Open the Local Group Policy Editor from Control Panel or via the Run dialog (Windows Key + R, then type gpedit.msc)
  • Browse to: Local Computer Policy \ Computer Configuration \ Administrative Templates \ Windows Components \ Remote Desktop Services \ Remote Desktop Session Host \ Remote Session Environment
  • Then enable "Use hardware graphics adapters for all Remote Desktop Services sessions"
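
For unattended setups, the same policy can be applied by writing its backing registry value directly. A minimal sketch (my addition; the value name bEnumerateHWBeforeSW is what this policy is commonly documented to set - verify against gpedit.msc on your build):

    # Minimal sketch: enable "Use hardware graphics adapters for all
    # Remote Desktop Services sessions" by writing the registry value
    # behind the group policy. Run from an elevated (Administrator)
    # Python; reboot or run "gpupdate /force" and reconnect afterwards.
    import winreg

    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 1 = try hardware GPUs before the software (WARP) renderer
        winreg.SetValueEx(key, "bEnumerateHWBeforeSW", 0,
                          winreg.REG_DWORD, 1)

    print("Policy value written; reconnect RDP sessions after a reboot.")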

We reboot - everything works fine via RDP. We swap the video card back to the P2200 - it works too! Now that we are sure the solution fully works, we polish all the server settings, join it to the domain, configure user access and so on, and move the server to the server room. The whole team tests it for a couple of days - everything works perfectly, server resources cover all tasks with room to spare, and the minimal lag inherent in RDP is invisible to every user. Excellent - 100% done.

A few points on which the success of a graphical server implementation depends

Since pitfalls can arise at any stage of introducing a graphical server into an organization - creating a situation like the one in the picture with the fish that got away - you should take a few simple steps at the planning stage:

  1. Target audience and tasks - users who work intensively with graphics and need hardware video acceleration. The success of this solution rests on the fact that the performance demands of graphics and CAD/CAM users were already being met more than 10 years ago, and today's hardware has a performance reserve exceeding their needs tenfold or more. For example, the GPU power of the Quadro P2200 is more than enough for 10 users, and even when video memory runs short, the card borrows from RAM - for a typical 3D designer, that small drop in memory speed goes unnoticed. But if user tasks include heavy computation (rendering, simulations, etc.) that regularly consumes 100% of resources, this solution is not suitable, since other users would be unable to work normally during those periods. So analyze users' tasks and current resource load carefully (at least approximately). Also pay attention to the volume written to disk per day; if it is large, choose server-grade SSDs or Optane disks rated for that volume.
  2. Based on the number of users, select a server, video card and disks with adequate resources (see the sizing sketch after this list):
    • processors by the formula of one core per user plus 2-3 for the OS; in practice, each user occupies only one, or at most two, cores at a time (if models are loaded infrequently);
    • video card - look at the average video memory and GPU consumption per user in an RDP session and choose a professional (!) card;
    • do the same for the RAM and the disk subsystem (nowadays you can even put together an NVMe RAID inexpensively).
  3. Study the server documentation carefully (fortunately, all brand-name servers are fully documented) for compatible connectors, speeds, power delivery and supported technologies, as well as the physical dimensions and heat-dissipation requirements of any additional components.
  4. Check that the software runs normally in several RDP sessions, that there are no license restrictions, and carefully verify that the necessary licenses are available. Settle this question before taking the first implementation steps. As the respected malefix put it in the comments:
    "- Licenses can be tied to the number of users - then you are violating the license.
    - The software may not work correctly with several running instances: if it writes temporary data or settings even in one place outside the user profile or %temp%, into something shared, then you will have great fun hunting down the problem."
  5. Think through where the graphics server will live; do not forget the UPS, the availability of high-speed Ethernet ports and Internet access (if needed), and the server's climate requirements.
  6. Allow at least 2.5-3 weeks for the rollout: even small but necessary components can take up to two weeks to arrive, and assembly and configuration take several days - a server can need more than 5 minutes just to boot into the OS.
  7. Agree with management and suppliers in advance that if the project stalls or goes wrong at any stage, components can be refunded or replaced.
  8. It was also kindly suggested in the comments: after all the experiments with settings, tear everything down and install from scratch. Like this:
    - during the experiments, document all critical settings;
    - during the installation from scratch, re-apply only the minimum necessary settings (the ones you documented in the previous step).
  9. Install the operating system first (preferably Windows Server 2019 - its RDP stack is high quality) in trial mode, but by no means the Evaluation edition (you would have to reinstall it from scratch later). Only after a successful launch do we sort out the licenses and activate the OS.
  10. Also, before the rollout, pick an initiative group to test the setup, and explain the advantages of the graphical server to its future users. Doing this only afterwards raises the risk of complaints, sabotage and unreasoned negative reviews.
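
As a rough illustration of point 2, here is a minimal sizing sketch. The per-user figures are assumptions drawn from this article's measurements, not universal constants - substitute your own before ordering hardware:

    # Minimal capacity-planning sketch for a graphical terminal server.
    # Per-user figures are assumptions taken from this article's numbers;
    # measure your own users before buying anything.

    def size_server(heavy_users: int, light_users: int):
        # one core per user plus 2-3 for the OS (a session rarely keeps
        # more than 1-2 cores busy at once)
        cores = heavy_users + light_users + 3
        # typical peaks here: heavy ~35GB, light ~9GB, plus ~10GB for the OS
        ram_gb = heavy_users * 35 + light_users * 9 + 10
        # measured 0.3-1.5GB of video memory per RDP session, ~0.8GB average
        vram_gb = (heavy_users + light_users) * 0.8
        return cores, ram_gb, vram_gb

    cores, ram, vram = size_server(heavy_users=2, light_users=4)
    print(f"~{cores} cores, ~{ram}GB RAM, ~{vram:.1f}GB video memory")
    # -> ~9 cores, ~116GB RAM, ~4.8GB video memory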

In practice, working over RDP feels no different from working in a local session. You often even forget you are connected over RDP: even video, and sometimes video calls, work in an RDP session without noticeable delays, since most people now have high-speed Internet. In terms of RDP speed and functionality, Microsoft keeps pleasantly surprising - 3D hardware acceleration, multi-monitor support - everything users of graphics, 3D and CAD/CAM programs need for remote work!

So in many cases, deploying one graphical server is preferable to, and more flexible than, 10 graphics stations or PCs.

PS: How to connect over the Internet via RDP easily and securely, along with optimal settings for RDP clients, can be found in the article "Remote work in the office. RDP, Port Knocking, Mikrotik: simple and secure".

Source: habr.com
