FAQ on VKontakte architecture and operation

The history of how VKontakte was created is on Wikipedia; Pavel told it himself, and it seems everyone already knows it. Pavel also talked about the internals, architecture and structure of the site at HighLoad++ back in 2010. A lot of servers have flowed under the bridge since then, so let's bring the information up to date: we'll dissect VK, pull out its insides, weigh it, and look at how it is built from a technical point of view.


Alexey Akulovich (AterCattus) is a backend developer on the VKontakte team. This transcript of his talk is a collective answer to frequently asked questions about how the platform works: the infrastructure, the servers, and the interaction between them - but not about development; rather, about the hardware. Separately, it covers databases and what VK uses instead of them, collecting logs, and monitoring the whole project. Details under the cut.



For more than four years I have been doing all sorts of tasks related to the backend.

  • Loading, storage, processing, distribution of media: video, live streaming, audio, photos, documents.
  • Infrastructure, platform, developer-side monitoring, logs, regional caches, CDN, proprietary RPC protocol.
  • Integration with external services: push notifications, parsing external links, RSS feed.
  • Helping colleagues with various questions whose answers require diving into unfamiliar code.

During this time, I had a hand in many components of the site. This experience is what I want to share.

General architecture

Everything, as usual, begins with a server or group of servers that accept requests.

Front server

The front server accepts requests via HTTPS, RTMP and WSS.

HTTPS requests come from the main and mobile web versions of the site - vk.com and m.vk.com - and from other official and unofficial clients of our API: mobile clients and messengers. We receive RTMP traffic for live broadcasts on separate front servers, and WSS connections for the Streaming API.

For HTTPS and WSS, the servers run nginx. For RTMP broadcasts, we recently switched to our own solution, but it is outside the scope of this talk. For fault tolerance, these servers advertise shared IP addresses and act in groups, so that if one of them has a problem, user requests are not lost. For HTTPS and WSS, the same servers terminate the encrypted traffic, taking part of the CPU load onto themselves.

We will not talk about WSS and RTMP further, but only about the standard HTTPS requests that are usually associated with a web project.

Backend

Behind the front are usually backend servers. They process requests that the front server receives from clients.

These are kPHP servers running the HTTP daemon, because HTTPS is already terminated at the front. kPHP is a server that works on the prefork model: it starts a master process and a bunch of child processes, hands them listening sockets, and they process requests. The processes do not restart between user requests; they simply reset their state to the initial zero-value state - request after request, instead of restarting.
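
To illustrate the prefork model itself (a minimal sketch in Go, not kPHP's actual code): the master opens the listening socket once, hands it to spawned worker processes as an inherited file descriptor, and each worker accepts and serves requests from that shared socket, keeping only per-request state.

package main

import (
	"fmt"
	"net"
	"net/http"
	"os"
	"os/exec"
)

const workers = 4

func main() {
	if os.Getenv("PREFORK_WORKER") == "1" {
		runWorker()
		return
	}
	// Master: open the listening socket once...
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	lf, err := ln.(*net.TCPListener).File()
	if err != nil {
		panic(err)
	}
	// ...and hand it to each child as inherited fd 3.
	for i := 0; i < workers; i++ {
		cmd := exec.Command(os.Args[0])
		cmd.Env = append(os.Environ(), "PREFORK_WORKER=1")
		cmd.ExtraFiles = []*os.File{lf}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Start(); err != nil {
			panic(err)
		}
	}
	select {} // a real master would also supervise and restart its children
}

func runWorker() {
	// fd 3 is the listener inherited from the master.
	ln, err := net.FileListener(os.NewFile(3, "listener"))
	if err != nil {
		panic(err)
	}
	pid := os.Getpid()
	http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Per-request state lives only inside the handler and is discarded
		// afterwards - the "reset to zero-values instead of restarting" idea.
		fmt.Fprintf(w, "served by worker %d\n", pid)
	}))
}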

Load distribution

Our backends are not one huge pool of machines able to process any request. We divide them into separate groups: general, mobile, api, video, staging… A problem on a separate group of machines will not affect all the others: in case of problems with video, a user listening to music will not even notice them. Which backend to send a request to is decided by nginx on the front according to its config.

Collecting metrics and rebalancing

To understand how many machines we need in each group, we do not rely on QPS. The backends are different, the requests they serve are different, and each request has a different computational cost, so QPS alone says little. That's why we operate with the notion of the load on the server as a whole - CPU and perf.

We have thousands of such servers. Each physical server runs a group of kPHP processes to utilize all the cores (because kPHP is single-threaded).

ContentServer

CS, or Content Server, is storage. CS is a server that stores files and also processes uploaded files and all kinds of background synchronous tasks that the main web frontend assigns to it.

We have tens of thousands of physical servers that store files. Users love uploading files, and we love storing and serving them. Some of these servers are hidden behind special pu/pp servers.

pu/pp

If you opened the network tab in VK, you saw pu/pp.


What are pu/pp? If we hide one server behind another, then there are two options for uploading and downloading a file to or from the hidden server: directly, via http://cs100500.userapi.com/path, or through an intermediate server, http://pu.vk.com/c100500/path.

Pu is the historical name for photo upload, and pp is photo proxy. That is, one server is for uploading photos and the other for serving them. Now it is not only photos that go through them, but the names have stuck.

These servers terminate HTTPS sessions to take the CPU load off the storage. Also, since user files are processed on those machines, the less sensitive information - for example, HTTPS encryption keys - is stored on them, the better.

Since these machines are hidden behind our other machines, we can afford not to give them "white" external IPs and give them "gray" ones instead. This way we saved on the IP pool and are guaranteed to protect the machines from outside access - there is simply no IP to reach them by.

Failover via shared IP. In terms of fault tolerance the scheme works the same way: several physical servers share a common physical IP, and the hardware in front of them chooses where to send a request. Later I will talk about other options.

A debatable point is that in this case the client keeps fewer connections: if several machines share one IP - and one host, pu.vk.com or pp.vk.com - the client browser limits the number of simultaneous requests to a single host. But with HTTP/2 now ubiquitous, I believe this is no longer so relevant.

An obvious minus of the scheme is that all traffic going to the storage has to be pumped through another server. Since we pump traffic through these machines, we cannot yet pump heavy traffic, such as video, the same way - we transfer it directly, via a separate direct connection to separate storages dedicated to video. Lighter content we transfer through the proxy.

Not so long ago we got an improved version of these proxies. Now I will tell you how they differ from the ordinary ones and why this is needed.

Sun

In September 2017, Oracle, which had previously acquired Sun, laid off a huge number of Sun employees. We can say that at that moment the company ceased to exist. When choosing a name for the new system, our admins decided to pay tribute to the memory of that company and called the new system Sun. Among ourselves we call them simply "suns".


pp had several problems. One IP per group means an inefficient cache. Several physical servers share a common IP address, and there is no way to control which server a request will go to. So if different users come for the same file, then, given there is a cache on these servers, the file settles in the cache of every server. This is a very inefficient scheme, but nothing could be done about it.

As a consequence, we cannot shard content, because we cannot pick a specific server in the group - they share a common IP. Also, for some internal reasons we could not install such servers in the regions - they stood only in St. Petersburg.

With the suns, we changed the selection system. Now we have anycast routing: dynamic routing, anycast, a self-check daemon. Each server has its own individual IP, but a common subnet. Everything is configured so that if one server fails, traffic is automatically spread across the remaining servers of the same group. Now it is possible to address a specific server, there is no redundant caching, and reliability is not affected.

Scale support. Now we can install machines of different capacities as needed, and, in case of temporary problems, change the weights of the working "suns" to reduce the load on them, so that they can "rest" and then go back to work.

Sharding by content id. A curious point about sharding: we usually shard content so that different users go for the same file through the same "sun", so that they share a common cache.

We recently launched the Clover app. This is a live online quiz where a host asks questions and users answer in real time by choosing options. The app has a chat where users can flood. More than 100 thousand people can be connected to the broadcast at the same time. They all write messages that are sent to all participants, and an avatar comes along with each message. If 100 thousand people come for one avatar to one "sun", it may occasionally duck behind a cloud.

To withstand bursts of requests for the same file, for certain types of content we enable a dumb scheme that spreads files across all the available "suns" in the region.

Sun from within

A reverse proxy built on nginx, with a cache either in RAM or on fast Optane/NVMe drives. Example: http://sun4-2.userapi.com/c100500/path is a link to a "sun" located in the fourth region, second server group. It fronts the path file, which physically lives on server 100500.
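
Assuming the naming convention from that example - sun<region>-<group>.userapi.com/c<server>/<path>, which is an assumption for illustration, the real rules may differ - such a link can be picked apart like this:

package main

import (
	"fmt"
	"regexp"
)

// A hypothetical parser for links of the form
// http://sun<region>-<group>.userapi.com/c<server>/<path>.
var sunURL = regexp.MustCompile(`^http://sun(\d+)-(\d+)\.userapi\.com/c(\d+)/(.+)$`)

func main() {
	m := sunURL.FindStringSubmatch("http://sun4-2.userapi.com/c100500/path")
	if m == nil {
		panic("not a sun URL")
	}
	// region=4, group=2, the file lives on storage server cs100500
	fmt.Printf("region=%s group=%s storage=cs%s file=%s\n", m[1], m[2], m[3], m[4])
}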

Cache

We add one more node to our architectural scheme - the caching environment.


Below is the map of the regional caches; there are about 20 of them. These are the places where caches and "suns" stand - the ones that can cache traffic passing through them.


This is caching of multimedia content, user data is not stored here - just music, video, photos.

To determine a user's region, we collect the BGP network prefixes announced in the regions. As a fallback, we also parse a geoip database if we could not match the IP by prefixes. So we determine the region by the user's IP. In code, we can look at one or several of the user's regions - the points geographically closest to them.
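
A minimal sketch of such a lookup (the prefix lists and the geoip call below are made-up placeholders, not our real data): first match the client IP against the prefixes announced in the regions, then fall back to the geoip database.

package main

import (
	"fmt"
	"net"
)

// regionPrefixes maps a region id to the BGP prefixes announced there.
// The prefixes below are documentation ranges used purely as placeholders.
var regionPrefixes = map[int][]*net.IPNet{}

func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

func init() {
	regionPrefixes[1] = []*net.IPNet{mustCIDR("198.51.100.0/24")}
	regionPrefixes[4] = []*net.IPNet{mustCIDR("203.0.113.0/24")}
}

// geoipRegion stands in for a lookup in a geoip database.
func geoipRegion(ip net.IP) (int, bool) { return 0, false }

func regionOf(ip net.IP) (int, bool) {
	// First, the prefixes announced in the regions...
	for region, nets := range regionPrefixes {
		for _, n := range nets {
			if n.Contains(ip) {
				return region, true
			}
		}
	}
	// ...then the geoip fallback.
	return geoipRegion(ip)
}

func main() {
	if r, ok := regionOf(net.ParseIP("203.0.113.7")); ok {
		fmt.Println("region:", r) // region: 4
	}
}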

How does it work?

We count file popularity per region. There is the number of the regional cache where the user is located and a file identifier; we take this pair and increment its rating on every download.

At the same time, daemons - services in the regions - come to the API from time to time and say: "I am such-and-such cache, give me a list of the most popular files of my region that I don't have yet." The API returns a bunch of files sorted by rating, the daemon downloads them, takes them to its region and serves the files from there. This is the fundamental difference between pu/pp and the suns on one hand and the regional caches on the other: the former serve a file through themselves immediately, even if it is not cached, while the regional cache first downloads the file to itself and only then starts serving it.
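
As a toy sketch of that rating (the structure and names are invented for illustration, the real engine is different): increment a counter per (region, file) pair on every download, and let the regional daemon ask for the top files it does not have yet.

package main

import (
	"fmt"
	"sort"
	"sync"
)

// A toy popularity counter keyed by (region, file) - a stand-in for the
// rating that is incremented on every download.
type key struct {
	region int
	file   string
}

type Popularity struct {
	mu     sync.Mutex
	rating map[key]int
}

func NewPopularity() *Popularity { return &Popularity{rating: map[key]int{}} }

// Hit is called every time a user from a region downloads a file.
func (p *Popularity) Hit(region int, file string) {
	p.mu.Lock()
	p.rating[key{region, file}]++
	p.mu.Unlock()
}

// Top returns the most popular files of a region that the cache does not
// have yet - roughly what the regional daemon asks the API for.
func (p *Popularity) Top(region, n int, have map[string]bool) []string {
	p.mu.Lock()
	defer p.mu.Unlock()
	type entry struct {
		file string
		hits int
	}
	var all []entry
	for k, v := range p.rating {
		if k.region == region && !have[k.file] {
			all = append(all, entry{k.file, v})
		}
	}
	sort.Slice(all, func(i, j int) bool { return all[i].hits > all[j].hits })
	if len(all) > n {
		all = all[:n]
	}
	files := make([]string, len(all))
	for i, e := range all {
		files[i] = e.file
	}
	return files
}

func main() {
	p := NewPopularity()
	p.Hit(4, "photo100_500")
	p.Hit(4, "photo100_500")
	p.Hit(4, "doc7_1")
	fmt.Println(p.Top(4, 10, map[string]bool{})) // [photo100_500 doc7_1]
}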

With this scheme we bring content closer to users and smear the network load. For example, from the Moscow cache alone we distribute more than 1 Tbit/s during peak hours.

But there are problems: cache servers are not made of rubber. For super-popular content, a single server sometimes does not have enough network capacity. Our cache servers are 40-50 Gbit/s, but there is content that completely saturates such a channel. We are moving towards storing more than one copy of popular files in a region; I hope we will implement this by the end of the year.

We've looked at the overall architecture.

  • Front servers that accept requests.
  • Backends that process requests.
  • Storages, hidden behind two types of proxies.
  • Regional caches.

What is missing in this scheme? Of course, the databases in which we store data.

Databases or engines

We call them not databases but engines, because we have practically no databases in the generally accepted sense.


This was a necessary measure. It happened because in 2008-2009, when VK was growing explosively in popularity, the project ran entirely on MySQL and Memcache, and there were problems. MySQL liked to crash and corrupt its files, after which it would not come back up, and Memcache gradually degraded in performance and had to be restarted.

So a project that was gaining popularity had a persistent storage that corrupts data and a cache that slows down. In such conditions it is hard to develop a growing project. It was decided to try rewriting the critical things the project rested on as our own homegrown solutions.

The solution was successful. There was an opportunity to do this, and also a pressing need, because there were no other ways to scale at that time. There weren't many databases around, NoSQL didn't exist yet; there were only MySQL, Memcache, PostgreSQL - and that's about it.

Universal operation. The development was led by our C team and everything was done in a consistent way. Regardless of the engine, everywhere there was roughly the same format of files written to disk, the same launch options, the same signal handling, and roughly the same behaviour in edge cases and under problems. As the number of engines grows, it is convenient for admins to operate the system - there is no zoo to maintain and no need to re-learn how to operate every new third-party database - which made it possible to grow their number quickly and conveniently.

Engine types

The team has written quite a few engines. Here are just a few of them: friend, hints, image, ipdb, letters, lists, logs, memcached, meowdb, news, nostradamus, photo, playlists, pmemcached, sandbox, search, storage, likes, tasks, …

For each task that requires a specific data structure or processes atypical requests, the C-team writes a new engine. Why not.

We have a separate Memcached engine, which is similar to the regular one but with a bunch of goodies - and it does not slow down. It's not ClickHouse, but it works too. pmemcached is available separately - a persistent memcached that can also store data on disk, more than fits into RAM, so that data is not lost on restart. There are various engines for individual tasks: queues, lists, sets - everything our project needs.

Clusters

From the code's point of view, there is no need to think of engines or databases as processes, entities or instances. The code works with clusters, with groups of engines - one type per cluster. Say there is a memcached cluster: it is just a group of machines.

The code does not need to know the physical location, size or number of servers at all. It goes to the cluster by a certain identifier.

For this to work, one more entity has to be added between the code and the engines - a proxy.

RPC proxy

The proxy is the connecting bus through which almost the entire site runs. At the same time, we have no service discovery: instead there is the config of this proxy, which knows the location of all clusters and all shards of each cluster. That is what the admins maintain.

Programmers don't care at all how many servers there are, where they stand and what runs on them - they just go to the cluster. This allows us a lot. When a request arrives, the proxy redirects it, knowing where to - it determines this itself.


The proxy is also a point of protection against denial of service. If some engine slows down or crashes, the proxy understands this and responds accordingly to the client side. This allows the timeout to be removed - the code does not wait for the engine's response, but understands that it is down and needs to behave differently. The code must be prepared for the fact that the databases do not always work.

Specific Implementations

Sometimes we still really want to have some kind of non-standard solution as an engine. In such cases it was decided not to reuse our ready-made rpc-proxy, created specifically for our engines, but to make a separate proxy for the task.

For MySQL, which we still have here and there, we use db-proxy, and for ClickHouse - KittenHouse.

In general it works like this. There is a certain server running kPHP, Go, Python - in general, any code that can speak our RPC protocol. The code goes locally to an RPC-proxy - each server with code has its own local proxy - and the proxy knows where the request should go.


If one engine wants to go to another, even a neighbouring one, it goes through a proxy, because the neighbour may be in another data center. An engine should not be tied to knowing the location of anything other than itself - that is our standard solution. But of course there are exceptions 🙂

An example of the TL scheme according to which all our engines work:

memcache.not_found                                = memcache.Value;
memcache.strvalue	value:string flags:int = memcache.Value;
memcache.addOrIncr key:string flags:int delay:int value:long = memcache.Value;

tasks.task
    fields_mask:#
    flags:int
    tag:%(Vector int)
    data:string
    id:fields_mask.0?long
    retries:fields_mask.1?int
    scheduled_time:fields_mask.2?int
    deadline:fields_mask.3?int
    = tasks.Task;
 
tasks.addTask type_name:string queue_id:%(Vector int) task:%tasks.Task = Long;

This is a binary protocol whose closest analogue is protobuf. The scheme describes in advance optional fields, complex types - extensions of the built-in scalars - and the queries. Everything works according to this protocol.
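
The conditional fields (id:fields_mask.0?long and so on) mean a field is present on the wire only when the corresponding bit of fields_mask is set. A rough Go sketch of serializing a tasks.task body under that convention - the wire layout here is a simplified assumption, not the exact TL encoding:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Task mirrors the tasks.task constructor from the TL scheme above.
type Task struct {
	FieldsMask    uint32
	Flags         int32
	Tag           []int32
	Data          string
	ID            int64 // present only if bit 0 of FieldsMask is set
	Retries       int32 // bit 1
	ScheduledTime int32 // bit 2
	Deadline      int32 // bit 3
}

// serialize writes the body in a simplified little-endian layout;
// the real TL encoding also prefixes constructor ids and pads strings.
func (t *Task) serialize() []byte {
	var buf bytes.Buffer
	w := func(v interface{}) { binary.Write(&buf, binary.LittleEndian, v) }

	w(t.FieldsMask)
	w(t.Flags)
	w(int32(len(t.Tag)))
	for _, x := range t.Tag {
		w(x)
	}
	w(int32(len(t.Data)))
	buf.WriteString(t.Data)

	// Optional fields go on the wire only when their bit is set.
	if t.FieldsMask&(1<<0) != 0 {
		w(t.ID)
	}
	if t.FieldsMask&(1<<1) != 0 {
		w(t.Retries)
	}
	if t.FieldsMask&(1<<2) != 0 {
		w(t.ScheduledTime)
	}
	if t.FieldsMask&(1<<3) != 0 {
		w(t.Deadline)
	}
	return buf.Bytes()
}

func main() {
	t := &Task{
		FieldsMask: 1<<0 | 1<<1, // id and retries present, the rest omitted
		Tag:        []int32{1, 2},
		Data:       "payload",
		ID:         42,
		Retries:    3,
	}
	fmt.Printf("%d bytes on the wire\n", len(t.serialize()))
}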

RPC over TL over TCP/UDP…UDP?

We have an RPC protocol for executing engine requests that runs on top of the TL scheme. It all works over TCP/UDP. TCP is understandable, but why do we so often use UDP?

UDP helps avoid the problem of a huge number of connections between servers. If each server has an RPC-proxy and it can, in general, go to any engine, then we end up with tens of thousands of TCP connections per server. That is a load, and a useless one. With UDP this problem does not exist.

No redundant TCP handshake. This is a typical problem: when a new engine or a new server is brought up, many TCP connections are established at once. For small lightweight requests that fit in a UDP payload, all the communication between the code and the engine is two UDP packets: one flies one way, the second flies back. One round trip - and the code has received a response from the engine without any handshake.

Yes, this only works with very low packet loss. The protocol has support for retransmits and timeouts, but if we lose a lot, we end up with almost-TCP, which is not profitable. We do not drive UDP across the oceans.
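
The transport pattern itself - one datagram there, one back, with a deadline and a few retransmits - can be sketched like this (this is only the pattern, not our RPC framing; the address is hypothetical):

package main

import (
	"fmt"
	"net"
	"time"
)

// udpRequest shows the "one packet there, one packet back" pattern:
// send a datagram, wait for the reply with a deadline, retransmit a few
// times on timeout.
func udpRequest(addr string, payload []byte) ([]byte, error) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	buf := make([]byte, 64*1024)
	for attempt := 0; attempt < 3; attempt++ {
		if _, err := conn.Write(payload); err != nil {
			return nil, err
		}
		conn.SetReadDeadline(time.Now().Add(200 * time.Millisecond))
		n, err := conn.Read(buf)
		if err == nil {
			return buf[:n], nil // one round trip, no handshake
		}
		if ne, ok := err.(net.Error); !ok || !ne.Timeout() {
			return nil, err
		}
		// timed out: retransmit
	}
	return nil, fmt.Errorf("no response from %s after retries", addr)
}

func main() {
	// A hypothetical local engine address.
	if resp, err := udpRequest("127.0.0.1:2442", []byte("ping")); err == nil {
		fmt.Printf("got %d bytes\n", len(resp))
	} else {
		fmt.Println(err)
	}
}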

We have thousands of such servers, and the scheme is the same everywhere: a pack of engines is installed on each physical server. They are mostly single-threaded, to run as fast as possible without locks, and are sharded as single-threaded solutions. At the same time we have nothing more reliable than these engines, and a great deal of attention is paid to persistent data storage.

Persistent data storage

Engines write binlogs. A binlog is a file to whose end an event describing a change of state or data is appended. In different solutions it is called differently - binary log, WAL, AOF - but the principle is the same.

So that an engine does not have to reread the entire binlog accumulated over many years when it restarts, engines write snapshots - the current state. If necessary, they first read the snapshot and then finish reading from the binlog. All binlogs are written in the same binary format - according to the TL scheme - so that admins can administer them with the same tools. For snapshots this is not required: there is a common header that indicates whose snapshot it is - the engine's magic int - and the body is nobody else's concern; it is the business of the engine that took the snapshot.

I will briefly describe the principle of operation. There is a server on which the engine is running. It opens a new empty binlog for writing and writes change events to it.


At some point the engine either decides to take a snapshot itself or receives a signal. It creates a new file, writes its entire state into it, appends the current binlog size - the offset - to the end of that file, and continues writing to the binlog. A new binlog is not created.


At some point, when the engine restarts, there will be both a binlog and a snapshot on disk. The engine reads the full snapshot and restores its state as of a certain moment.


Then it reads out the binlog position recorded at the moment the snapshot was created - the size of the binlog at that time.


Finally it reads the tail of the binlog from that position to reach the current state, and continues writing new events. This is a simple scheme; all our engines work by it.
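
Here is a toy sketch of those mechanics: a counter "engine" with a JSON snapshot instead of our TL-based formats, purely to show the cycle of appending events, snapshotting the state together with the binlog offset, and replaying only the tail on restart.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// A toy engine whose whole state is one counter. Real engines write
// TL-encoded events; JSON and text lines are used here to keep it short.
type engine struct {
	state  int64
	binlog *os.File
}

type snapshot struct {
	State  int64 `json:"state"`
	Offset int64 `json:"offset"` // binlog size at the moment of the snapshot
}

func (e *engine) apply(delta int64) {
	// Append the change event to the binlog, then apply it to the state.
	fmt.Fprintf(e.binlog, "%d\n", delta)
	e.state += delta
}

func (e *engine) writeSnapshot(path string) error {
	off, err := e.binlog.Seek(0, io.SeekEnd)
	if err != nil {
		return err
	}
	data, _ := json.Marshal(snapshot{State: e.state, Offset: off})
	return os.WriteFile(path, data, 0o644)
}

// recoverEngine loads the snapshot, then replays only the binlog tail
// written after the recorded offset.
func recoverEngine(binlogPath, snapPath string) (*engine, error) {
	e := &engine{}
	var snap snapshot
	if data, err := os.ReadFile(snapPath); err == nil {
		json.Unmarshal(data, &snap)
		e.state = snap.State
	}
	f, err := os.OpenFile(binlogPath, os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		return nil, err
	}
	f.Seek(snap.Offset, io.SeekStart)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var delta int64
		fmt.Sscan(sc.Text(), &delta)
		e.state += delta
	}
	f.Seek(0, io.SeekEnd)
	e.binlog = f
	return e, nil
}

func main() {
	e, _ := recoverEngine("counter.binlog", "counter.snap")
	e.apply(5)
	e.writeSnapshot("counter.snap")
	fmt.Println("state:", e.state)
}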

Data replication

As a result, we get statement-based data replication: we write to the binlog not page changes but change requests. It is very similar to what comes over the network, only slightly modified.

The same scheme is used not only for replication but also for creating backups. We have an engine - a writing master - that writes to the binlog. In any other place the admins set up, this binlog is copied, and that's it - we have a backup.


If a read replica is needed to reduce the CPU read load, a reading engine is simply brought up that reads the tail of the binlog and executes the same commands locally.

The lag here is very small, and it is possible to find out how much the replica lags behind the master.

Data sharding in RPC-proxy

How does sharding work? How does the proxy understand which cluster shard to send a request to? The code does not say: "Send it to shard 15!" - no, the proxy does this.

The simplest scheme is firstint - the first number in the query.

get(photo100_500) => 100 % N.

This is an example for the simple memcached text protocol, but of course queries can be complex and structured. The example takes the first number in the query and the remainder of dividing it by the cluster size.

This is useful when we want to have data locality of a single entity. Let's say 100 is a user or group ID, and we want all the data of one entity to be on the same shard for complex queries.

If we don't care how requests are spread across the cluster, there is another option - hashing the entire key.

hash(photo100_500) => 3539886280 % N

We also get the hash, the remainder of the division, and the shard number.

Both of these options work only if we are prepared for the fact that, when the cluster grows, we will split it or increase it by a multiple. For example, we had 16 shards, they are not enough, we want more - we can painlessly get 32 without downtime. If we want to grow by a non-multiple, there will be downtime, because it will not be possible to carefully re-split everything without loss. These options are useful, but not always.
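
A minimal sketch of both variants (the key format and the hash function are assumptions for illustration - the real proxy works on structured TL queries, not strings):

package main

import (
	"fmt"
	"hash/fnv"
	"regexp"
	"strconv"
)

var firstNumber = regexp.MustCompile(`\d+`)

// shardByFirstInt: first number in the key modulo the cluster size, so
// everything belonging to one entity (user 100, say) lands on one shard.
func shardByFirstInt(key string, n int) int {
	id, _ := strconv.Atoi(firstNumber.FindString(key))
	return id % n
}

// shardByHash: hash of the whole key, no locality guarantees.
func shardByHash(key string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(n))
}

func main() {
	const n = 16 // cluster size
	fmt.Println(shardByFirstInt("photo100_500", n)) // 100 % 16 = 4
	fmt.Println(shardByHash("photo100_500", n))
}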

If we need to add or remove an arbitrary number of servers, we use consistent hashing on a ring, a la Ketama. But then we completely lose data locality: we have to make a merge request to the cluster, so that each piece returns its own small response, and then merge the responses on the proxy.
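
A compact sketch of such a ring (an illustration, not our proxy's code): each server gets many points on the ring, a key goes to the first point clockwise from its hash, and adding or removing a server only moves the keys adjacent to its points.

package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type Ring struct {
	points []uint32
	owner  map[uint32]string
}

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewRing places every server on the ring at `replicas` points.
func NewRing(servers []string, replicas int) *Ring {
	r := &Ring{owner: map[uint32]string{}}
	for _, s := range servers {
		for i := 0; i < replicas; i++ {
			p := hash32(fmt.Sprintf("%s#%d", s, i))
			r.points = append(r.points, p)
			r.owner[p] = s
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Server returns the owner of the first point clockwise from the key's hash.
func (r *Ring) Server(key string) string {
	h := hash32(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.points[i]]
}

func main() {
	ring := NewRing([]string{"engine1", "engine2", "engine3"}, 100)
	fmt.Println(ring.Server("photo100_500"))
}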

There are also more specific cases. Overall it looks like this: the RPC-proxy receives a request and determines which cluster and which shard to go to. Then it either goes to the writing master or, if the cluster has replica support, sends the request to a replica when asked to. All of this is done by the proxy.


Logs

We write logs in several ways. The most obvious and simple is writing logs to memcache.

ring-buffer: prefix.idx = line

There is a key prefix - the name of the log - and there is the size of this log - the number of lines. We take a random number from 0 to the number of lines minus one. The key in memcache is the prefix concatenated with this random number. In the value we store the log line and the current time.

When we need to read the logs, we do a Multi Get of all the keys, sort by time, and thus get a real-time production log. The scheme is used when you need to debug something in production in real time, without breaking anything, without stopping or redirecting traffic to other machines - but this log does not live long.
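
A rough sketch of that ring buffer on top of any memcache-like key-value store (the map below stands in for the memcache client; the names are made up):

package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// KV is a stand-in for a memcache client: Set/Get by string key.
type KV map[string]string

// logWrite overwrites a random slot out of `size`, so roughly the newest
// `size` lines are kept and old ones get overwritten - a ring buffer.
func logWrite(kv KV, prefix string, size int, line string) {
	idx := rand.Intn(size)
	key := fmt.Sprintf("%s.%d", prefix, idx)
	kv[key] = fmt.Sprintf("%d %s", time.Now().UnixNano(), line)
}

// logRead multi-gets every slot and sorts the values by the stored time.
func logRead(kv KV, prefix string, size int) []string {
	var lines []string
	for i := 0; i < size; i++ {
		if v, ok := kv[fmt.Sprintf("%s.%d", prefix, i)]; ok {
			lines = append(lines, v)
		}
	}
	sort.Strings(lines) // timestamps prefix the lines, so this sorts by time
	return lines
}

func main() {
	kv := KV{}
	for i := 0; i < 5; i++ {
		logWrite(kv, "feed_debug", 100, fmt.Sprintf("request %d handled", i))
	}
	for _, l := range logRead(kv, "feed_debug", 100) {
		fmt.Println(l)
	}
}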

For reliable log storage we have the logs engine. That is what it was created for, and it is widely used in a huge number of clusters. The largest cluster I know of stores 600 TB of packed logs.

The engine is very old, there are clusters that are already 6-7 years old. There are problems with it that we are trying to solve, for example, we began to actively use ClickHouse to store logs.

Collection of logs in ClickHouse

This diagram shows how we access our engines.


There is code that goes locally via RPC to the RPC-proxy, and the proxy understands which engine to go to. If we want to write logs to ClickHouse, we need to change two parts of this scheme:

  • replace some engine with ClickHouse;
  • replace the RPC-proxy, which cannot talk to ClickHouse, with some solution that can - and via RPC at that.

It's easy with the engine - we replace it with a server or a cluster of servers with ClickHouse.

And to talk to ClickHouse, we made KittenHouse. If every KittenHouse went directly to ClickHouse, it would not cope: even without the requests themselves, the HTTP connections from a huge number of machines add up. For the scheme to work, a local reverse proxy is brought up on the server with ClickHouse, written so that it can withstand the required volume of connections. It can also buffer data in itself relatively reliably.


Sometimes we don't want to implement the RPC scheme in non-standard solutions, for example, in nginx. Therefore, KittenHouse has the ability to receive logs via UDP.


If the sender and receiver of the logs are running on the same machine, then the chance of losing a UDP packet within the local host is quite low. As a compromise between the need to implement RPC in a third-party solution and reliability, we use just sending over UDP. We will return to this scheme.

Monitoring

We have two types of logs: those that administrators collect on their servers and those that developers write from the code. They correspond to two types of metrics: system and product.

System Metrics

Netdata runs on all servers: it collects statistics and sends them to Graphite Carbon. ClickHouse is used as the storage backend rather than, say, Whisper. If necessary, you can read directly from ClickHouse or use Grafana for metrics, graphs and reports. As developers, Netdata and Grafana are enough for us.

Product metrics

For convenience, we have written a lot of things. For example, there is a set of regular functions that write Counts and UniqueCounts values into the statistics, which are then shipped somewhere further.

statlogsCountEvent  ('stat_name',         $key1, $key2, …)
statlogsUniqueCount ('stat_name', $uid,   $key1, $key2, …)
statlogsValuetEvent ('stat_name', $value, $key1, $key2, …)

$stats = statlogsStatData($params)

Subsequently, we can apply sorting and grouping filters and do whatever we want with the statistics - build graphs, configure Watchdogs.

We write a very large number of metrics: from 600 billion to 1 trillion events per day. At the same time we want to keep them for at least a couple of years to understand trends in the metrics. Gluing it all together is a big problem we haven't solved yet. I will tell you how it has worked for the past few years.

We have functions that write these metrics to a local memcache to reduce the number of writes. Once in a short period of time, a locally launched stats-daemon collects all the records. Next, the daemon merges the metrics into two layers of logs-collectors servers, which aggregate statistics from a bunch of our machines so that the layer behind them does not die.
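
The idea of "aggregate locally, flush rarely" can be sketched like this - an in-process map plays the role of the local memcache plus stats-daemon, and the names are invented:

package main

import (
	"fmt"
	"sync"
	"time"
)

// localStats aggregates counter increments in memory and flushes them
// once per interval, so a million calls become one line per key per flush.
type localStats struct {
	mu     sync.Mutex
	counts map[string]int64
}

func newLocalStats(interval time.Duration, flush func(map[string]int64)) *localStats {
	s := &localStats{counts: map[string]int64{}}
	go func() {
		for range time.Tick(interval) {
			s.mu.Lock()
			batch := s.counts
			s.counts = map[string]int64{}
			s.mu.Unlock()
			if len(batch) > 0 {
				flush(batch) // ship the aggregated batch onward
			}
		}
	}()
	return s
}

// CountEvent is a stand-in for a statlogs-style call: it only bumps a
// local counter keyed by the stat name plus its keys.
func (s *localStats) CountEvent(name string, keys ...string) {
	k := name
	for _, key := range keys {
		k += "," + key
	}
	s.mu.Lock()
	s.counts[k]++
	s.mu.Unlock()
}

func main() {
	stats := newLocalStats(1*time.Second, func(batch map[string]int64) {
		fmt.Println("flush to logs-collectors:", batch)
	})
	for i := 0; i < 1000; i++ {
		stats.CountEvent("page_view", "web")
	}
	time.Sleep(1500 * time.Millisecond)
}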


If necessary, we can write directly to logs-collectors.


But writing from code directly to the collectors, bypassing the stats-daemon, is a poorly scalable solution, because it increases the load on the collector. The solution is suitable only if, for some reason, we cannot bring up the memcache stats-daemon on the machine, or it has crashed and we went directly.

Next, the logs-collectors merge the statistics into meowDB - our own database, which also knows how to store metrics.


Then from the code we can use binary “near-SQL” to make selections.


Experiment

In the summer of 2018, we had an internal hackathon, and the idea came up to try replacing the red part of the diagram with something that can store metrics in ClickHouse. We have logs on ClickHouse - why not give it a try?


We had a scheme that wrote logs through KittenHouse.


We decided to add another "*House" to the scheme, which would receive via UDP exactly the metrics in the format our code writes them. Then this *House turns them into inserts - like logs - which KittenHouse understands. It can then deliver these logs to ClickHouse perfectly well, and ClickHouse should be able to read them.




The scheme with memcache, stats-daemon and logs-collectors is replaced with this one:

  • The code sends metrics, which are written locally to StatsHouse.
  • StatsHouse writes the UDP metrics, already converted into SQL inserts, to KittenHouse in batches.
  • KittenHouse sends them to ClickHouse.
  • If we want to read them, we do it bypassing StatsHouse - directly from ClickHouse using regular SQL.

It is still an experiment, but we like how it is turning out. If we fix the problems with the scheme, then perhaps we will switch to it completely. Personally, I hope so.

The scheme does not save on hardware. You need fewer servers and no local stats-daemons or logs-collectors, but ClickHouse requires beefier servers than those in the current scheme. Fewer servers are needed, but they have to be more expensive and more powerful.

Deploy

First, let's look at the PHP deployment. We develop in git and use GitLab and TeamCity for deployment. Developer branches are merged into the master branch, master is merged into staging for testing, and staging goes into production.

Before deployment, the current production branch and the previous one are taken and the diff between them is computed - the changed files: created, deleted, modified. This change is recorded in the binlog of a special copyfast engine, which can quickly replicate changes across our entire server fleet. This is not direct copying but gossip replication: one server sends the changes to its nearest neighbours, those send them to their neighbours, and so on. This allows the code to be updated across the whole fleet in tens of seconds, or even single seconds. When a change reaches a local replica, it applies the patches to its local file system. Rollback is performed according to the same scheme.
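
The gossip part can be pictured with a toy model (not the real copyfast, just the propagation idea): a server that receives a new version passes it on to its neighbours, and the wave covers the fleet without the master having to talk to every machine.

package main

import "fmt"

// A toy model of gossip propagation: every server that receives a new
// change passes it on to its nearest neighbours, which pass it further.
type server struct {
	name      string
	neighbors []*server
	version   int
}

func (s *server) receive(version int) {
	if version <= s.version {
		return // already have it, stop spreading
	}
	s.version = version
	fmt.Println(s.name, "updated to version", version)
	for _, n := range s.neighbors {
		n.receive(version)
	}
}

func main() {
	// A tiny chain of servers; the real topology is "nearest neighbours".
	a, b, c, d := &server{name: "a"}, &server{name: "b"}, &server{name: "c"}, &server{name: "d"}
	a.neighbors = []*server{b}
	b.neighbors = []*server{a, c}
	c.neighbors = []*server{b, d}
	d.neighbors = []*server{c}

	a.receive(1) // push a new version only to the first server
}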

We also deploy kPHP a lot, and it has its own development in git according to the diagram above. Since it is an HTTP server binary, we cannot produce a diff - a release binary weighs hundreds of MB. So here there is another option: the version is written to the copyfast binlog. With each build it increments, and with a rollback it also increases. The version is replicated to the servers. Local copyfasts see that a new version has appeared in the binlog and, by the same gossip replication, fetch the fresh binary for themselves, without exhausting our master server, but gently spreading the load over the network. Then comes a graceful restart to the new version.

For our engines, which are also essentially binaries, the scheme is very similar:

  • git master branch
  • binary in .deb;
  • the version is written to binlog copyfast;
  • replicated to servers;
  • the server pulls down the fresh .deb;
  • dpkg -i;
  • graceful restart to the new version.

The difference is that the binary is packed into a .deb archive, and when it is pulled down it is installed on the system with dpkg -i. Why is kPHP deployed as a bare binary while the engines go through dpkg? It just happened that way. It works - don't touch it.

Useful links:

Aleksey Akulovich is one of those who, as a member of the Program Committee, is helping PHP Russia, happening on May 17, become the largest recent event for PHP developers. Look at what a cool PC we have and what speakers (two of them develop the PHP core!) - it seems that if you write PHP, this is not to be missed.

Source: habr.com
