Checklist for creating and publishing web applications

These days, being able to develop a web application is not enough to release one. Just as important is configuring the tools for deploying the application, monitoring it, and managing and administering the environment it runs in. The era of manual deployment is fading into oblivion; even for small projects, automation tools can bring tangible benefits. When deploying "by hand", we often forget to copy something over, overlook some nuance, skip a forgotten test, and the list goes on.

This article may help those who are just learning the basics of building web applications and want to get a grasp of the basic terms and conventions.

Building an application can be divided into two parts: everything that relates to the application code, and everything that relates to the environment in which that code runs. The application code, in turn, is also divided into server-side code (the code that runs on the server: typically business logic, authorization, data storage, and so on) and client-side code (the code that runs on the user's machine: typically the interface and the logic tied to it).

Let's start with the environment.

The operating system is the foundation on which any code, system, or software runs, so below we will look at the most popular systems on the hosting market and give a brief description of each:

Windows Server - the same Windows, but in a server edition. Some functionality available in the client (regular) version of Windows is absent here, for example certain statistics-collection services and similar software, but there is a set of utilities for network administration and basic software for deploying servers (web, FTP, and so on). In short, Windows Server looks like regular Windows and quacks like regular Windows, but it costs about twice as much as its regular counterpart. That said, since you will most likely deploy the application on a dedicated or virtual server, the final cost for you, although higher, is not critical. Because Windows holds an overwhelming share of the consumer OS market, its server edition will be the most familiar option for most users.

Unix-like systems. Traditionally, working with these systems does not assume a familiar graphical interface: the user is offered only the console as a control surface. For an inexperienced user this format can be difficult; consider that merely exiting Vim, a text editor popular on these systems, is famously non-obvious, and the question of how to do it has gained more than a million views. The main distributions (editions) of this family are:

  • Debian - a popular distribution whose packages are oriented mainly toward LTS (Long Term Support) versions, which translates into high reliability and stability of the system and its packages;
  • Ubuntu - ships all packages in their latest versions, which may affect stability but lets you use the functionality that comes with new releases;
  • Red Hat Enterprise Linux - an OS positioned for commercial use; it is paid, but includes support from software vendors, some proprietary packages, and driver packages;
  • CentOS - an open-source variant of Red Hat Enterprise Linux, notable for the absence of the proprietary packages and support.
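Since exiting Vim trips up so many newcomers, here is the sequence for the record (Vim's own `"` comment syntax is used for the annotations):

```
Esc          " return to normal mode first
:q!<Enter>   " quit without saving changes
:wq<Enter>   " or: save the file and quit
```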

For those just getting started in this area, my recommendation would be Windows Server or Ubuntu. Windows offers, first of all, familiarity; Ubuntu offers greater tolerance for updates and, in turn, fewer problems when launching projects built on technologies that require newer versions.

So, having decided on the OS, let's move on to the set of tools that let you deploy (install), update, and monitor the state of the application or its parts on the server.

The next important decision is where to host your application and its server. At the moment, there are three common approaches:

  • Hosting (keeping) the server yourself - the most budget-friendly option, but you will have to order a static IP address from your provider so that your resource does not change its address over time.
  • Renting a dedicated/virtual server (VDS) - and managing its administration and load scaling yourself.
  • Paying a subscription to a cloud hosting provider (a free trial of the platform's functionality is often included), where pay-per-use billing for consumed resources is quite common. The most prominent representatives of this direction: Amazon AWS (a free year of services, with monthly limits), Google Cloud ($300 of credit that can be spent on cloud services during the first year), Yandex.Cloud (4,000 rubles for 2 months), Microsoft Azure (free access to popular services for a year, plus $200 of credit for any services during the first month). Thus, you can try any of these providers without spending a penny and form a rough opinion about the quality and level of the services provided.

Depending on the chosen path, what changes in the future is mainly who is responsible for which area of administration. If you host yourself, you should understand that any interruptions in electricity, in internet connectivity, in the server itself, or in the software deployed on it rest entirely on your shoulders. For learning and testing, however, this is more than enough.

If you do not have a spare machine that can play the role of a server, you will want the second or third option. The second case is identical to the first, except that you shift responsibility for server availability and capacity onto the host. Administration of the server and its software is still in your hands.

And finally, there is the option of renting capacity from cloud providers. Here you can set up automated control of almost anything without going too deep into technical nuances. In addition, instead of one machine you can have several instances running in parallel, each responsible, for example, for a different part of the application, without differing much in cost from owning a dedicated server. And there are also tools for orchestration, containerization, automatic deployment, continuous integration, and much more! We will look at some of these things below.

In general terms, the server infrastructure looks like this: there is a so-called "orchestrator" ("orchestration" is the process of managing multiple server instances) that manages environment changes on a server instance; a virtualization container (optional, but used quite often) that allows you to split the application into isolated logical layers; and Continuous Integration software that lets you update the hosted code via "scripts".

So, orchestration lets you see server statuses, roll server environment updates forward or back, and so on. At first this aspect is unlikely to concern you: to orchestrate something you need several servers (one is possible, but then why bother?), and to have several servers you must actually need them. The best-known tool in this area is Kubernetes, developed at Google.
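As a rough illustration of what "managing multiple server instances" looks like in practice, here is a minimal Kubernetes Deployment manifest that asks the orchestrator to keep three identical copies of a containerized web application running (the application name, image path, and port are all hypothetical):

```yaml
# deployment.yaml - a minimal sketch, not a production configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app            # illustrative name
spec:
  replicas: 3                 # keep three instances running at all times
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: registry.example.com/my-web-app:1.0  # hypothetical image
          ports:
            - containerPort: 8000
```

Applied with `kubectl apply -f deployment.yaml`, this tells the cluster the desired state; if an instance dies, Kubernetes starts a replacement automatically.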

The next step is virtualization at the OS level. The term "dockerization" is now widespread; it comes from the tool Docker, which provides containers that are isolated from each other but run in the context of a single operating system. What this means: in each such container you can run an application, or even a set of applications, that believe they are alone in the entire OS, not even suspecting that anything else exists on the machine. This is very useful both for running different versions of the same application (or simply conflicting applications) side by side, and for separating pieces of an application into layers. Such a set of layers can then be written to an image that can be used, for example, to deploy the application: by pulling this image and starting the containers it describes, you get a ready-made environment for running your application.

In your first steps you can use this tool both for educational purposes and for very real benefit, by splitting the application logic into different layers. That said, not everyone needs dockerization, and not always: it is justified when the application is "fragmented" into small parts, each responsible for its own task, the so-called "microservice architecture".
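For illustration, a minimal Dockerfile for a small Python web application might look something like this (the base image version, `requirements.txt`, and the `app.py` entry point are assumptions for the sketch, not prescriptions):

```dockerfile
# A minimal sketch of a Dockerfile; adapt names and versions to your project
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
# as long as requirements.txt does not change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# Hypothetical entry point of the application
CMD ["python", "app.py"]
```

Built with `docker build -t my-web-app .` and started with `docker run -p 8000:8000 my-web-app`, this produces exactly the "ready-made environment" described above: the same image runs identically on your laptop and on the server.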

Besides providing an environment, we also need to provide a competent deployment of the application, which includes all kinds of code transformations, installation of the libraries and packages the application depends on, test runs, notifications about these operations, and so on. This is where the concept of Continuous Integration (CI) comes in. The main tools in this area at the moment are Jenkins (CI software written in Java; it can seem a bit complicated at the start), Travis CI (written in Ruby; subjectively somewhat simpler than Jenkins, though some knowledge of deployment configuration is still required), and GitLab CI (written in Ruby and Go).
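As a sketch of what such a pipeline looks like, here is a minimal `.gitlab-ci.yml` for GitLab CI with one test stage and one deploy stage (the job names, the image tag, and the `deploy.sh` script are hypothetical placeholders):

```yaml
# .gitlab-ci.yml - a minimal two-stage pipeline sketch
stages:
  - test
  - deploy

run-tests:
  stage: test
  image: python:3.10        # illustrative; use your project's runtime
  script:
    - pip install -r requirements.txt
    - pytest                # run the test suite on every push

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh           # hypothetical deployment script
  only:
    - main                  # deploy only from the main branch
```

The key idea is the one from the paragraph above: every push triggers the same scripted sequence of dependency installation, tests, and deployment, so nothing is forgotten "by hand".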

So, having talked about the environment in which your application will run, it is finally time to see what tools the modern world offers for creating these very applications.

Let's start with the basics: the backend, the server side. The choice of language, core feature set, and predefined structure (framework) here is determined mostly by personal preference, but the following are worth considering (the author's opinion about the languages is quite subjective, albeit with an attempt at an unbiased description):

  • Python - a language friendly enough for an inexperienced user; it forgives some mistakes, but it can also be strict with the developer so that they do not do anything bad. A mature and established language that appeared in 1991.
  • Go - a language from Google; it is also quite friendly and convenient, and makes it easy to compile an executable for any platform. It can be simple and pleasant, or complex and serious. Fresh and young: it appeared relatively recently, in 2009.
  • Rust - a little older than its previous colleague (work on it began in 2006), but still quite young relative to its brethren. It is aimed at more experienced developers, although it still tries to solve many low-level tasks for the programmer.
  • Java - a veteran of commercial development dating back to 1995 and one of the most widely used languages in enterprise application development today. With its core concepts and the heavyweight setup of its runtime environment, it can be quite complex for a beginner.
  • ASP.NET - an application development platform from Microsoft. Functionality is mostly written in C# (pronounced "C sharp"), a language that appeared in 2000. Its complexity sits somewhere between Java and Rust.
  • PHP - originally used for HTML preprocessing. Although it still holds outright leadership in the server-side language market, its use shows a downward trend. It is distinguished by a low entry threshold and ease of writing code, but when developing sufficiently large applications, the language's functionality may not be enough.
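To make the "server-side code" mentioned above a little more concrete, here is a minimal sketch of a backend endpoint in Python using only the standard library's WSGI interface (the greeting text, port, and module layout are illustrative, not taken from any particular framework):

```python
# A minimal WSGI application: the server side in its simplest form.

def app(environ, start_response):
    """Answer every request with a plain-text greeting."""
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    path = environ.get("PATH_INFO", "/")
    return [f"Hello from the backend, you requested {path}".encode("utf-8")]

# To serve it locally, one could run:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8000, app).serve_forever()
```

Real projects would reach for a framework (Flask, Django, and the like) rather than raw WSGI, but the contract is the same: a request comes in, server-side logic runs, a response goes out.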

And now the final part of our application, the one most tangible for the user: the frontend, the face of your application. This is the part the user interacts with directly.

Without going into too much detail, the modern frontend stands on three pillars: frameworks (and not-quite-frameworks) for building user interfaces. The three most popular are:

  • ReactJS - not a framework but a library. In fact, it differs from the proud title of framework only in lacking some functions "out of the box" and requiring you to install them manually. Thus, there are several variations of "cooking" this library that together form a kind of framework. For a beginner it can be a little tricky, due to some of its core principles and the fairly involved setup of the build environment; for a quick start, however, you can use the "create-react-app" package.
  • VueJS - a framework for building user interfaces. Of this trinity, it rightfully takes the title of the most beginner-friendly framework: the entry threshold for developing in Vue is lower than for the other brethren listed. It is also the youngest of them.
  • Angular - considered the most complex of these frameworks, and the only one that requires TypeScript (a superset of the JavaScript language). It is often used to build large enterprise applications.

Summing up, we can conclude that deploying an application today is fundamentally different from how this process used to go. However, no one stops you from "deploying" the old-fashioned way. But is the little time saved at the start worth the huge number of rakes that a developer who chooses this path will have to step on? I believe the answer is "no". By spending a bit more time getting to know these tools (and no more than that, because you first need to understand whether the current project needs them at all), you can win it back many times over, significantly reducing, for example, ghost errors that depend on the environment and appear only on the production server, nightly investigations into what crashed the server and why it will not start, and much more.

Source: habr.com
