Where to go: upcoming free events for IT professionals in Moscow (January 14–18)

Events with open registration:


AI & Mobile

January 14, 19:00-22:00, Tuesday

We invite you to a meetup on artificial intelligence, its applications on mobile devices, and the most important technological and business trends of the new decade. The program includes interesting talks, discussions, pizza, and a good mood.

One of the speakers is a pioneer in introducing the latest technologies in Hollywood and the White House; his book “Augmented: Life in the Smart Lane” was mentioned as a favorite reference book by the President of China in his New Year’s address.

NeurIPS New Year Afterparty

January 15, starting at 18:00, Wednesday

  • 18:00 Registration
  • 19:00 Opening - Mikhail Bilenko, Yandex
  • 19:05 Reinforcement learning at NeurIPS 2019: how it was - Sergey Kolesnikov, Tinkoff
    Every year reinforcement learning (RL) becomes a hotter and more hyped topic, and every year DeepMind and OpenAI add fuel to the fire by releasing a new bot with superhuman performance. Is there something really worthwhile behind this? And what are the latest trends across all of RL's diversity? Let's find out!
  • 19:25 Review of NLP work at NeurIPS 2019 - Mikhail Burtsev, MIPT
    Today, the most breakthrough trends in natural language processing involve architectures built on language models and knowledge graphs. The talk will survey works in which these methods are used to build dialogue systems implementing various functions: for example, chatting on general topics, increasing empathy, and conducting goal-oriented dialogue.
  • 19:45 Ways to understand the shape of the loss function surface - Dmitry Vetrov, Faculty of Computer Science, National Research University Higher School of Economics
    I will discuss several papers that explore unusual effects in deep learning. These effects shed light on the shape of the loss function surface in weight space and suggest a number of hypotheses. If confirmed, they would make it possible to tune the step size in optimization methods more effectively, and to predict the achievable loss on the test set long before training ends.
  • 20:05 Review of works on computer vision at NeurIPS 2019 - Sergey Ovcharenko, Konstantin Lakhman, Yandex
    We will look at the main research directions and works in computer vision. We'll try to understand whether all the problems are already solved from an academic standpoint, whether the triumphant march of GANs continues in all areas, who is resisting it, and when the unsupervised revolution will take place.
  • 20:25 Coffee break
  • 20:40 Modeling sequences with unlimited order of generation - Dmitry Emelianenko, Yandex
    We propose a model that can insert words at arbitrary positions in the generated sentence. The model implicitly learns a convenient decoding order from the data and achieves the best quality on several datasets: machine translation, LaTeX, and image description. The talk is based on a paper in which we show that the learned decoding order actually makes sense and is specific to the problem being solved.
  • 20:55 Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness - Andrey Malinin, Yandex
    Ensemble approaches for uncertainty estimation have recently been applied to the tasks of misclassification detection, out-of-distribution input detection and adversarial attack detection. Prior Networks have been proposed as an approach to efficiently emulate an ensemble of models for classification by parameterizing a Dirichlet prior distribution over output distributions. These models have been shown to outperform alternative ensemble approaches, such as Monte-Carlo Dropout, on the task of out-of-distribution input detection. However, scaling Prior Networks to complex datasets with many classes is difficult using the training criteria originally proposed. This paper makes two contributions. First, we show that the appropriate training criterion for Prior Networks is the reverse KL-divergence between Dirichlet distributions. This addresses issues in the nature of the training data target distributions, enabling Prior Networks to be successfully trained on classification tasks with arbitrarily many classes, as well as improving out-of-distribution detection performance. Second, taking advantage of this new training criterion, this paper investigates using Prior Networks to detect adversarial attacks and proposes a generalized form of adversarial training. It is shown that the construction of successful adaptive whitebox attacks, which affect the prediction and evade detection, against Prior Networks trained on CIFAR-10 and CIFAR-100 using the proposed approach requires a greater amount of computational effort than against networks defended using standard adversarial training or MC-dropout.
  • 21:10 Panel discussion: “NeurIPS, which has grown too big: who is to blame and what to do?” — Alexander Krainov, Yandex
  • 21:40 Afterparty
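Andrey Malinin's talk above centers on the reverse KL-divergence between Dirichlet distributions as the training criterion for Prior Networks. As a hedged illustration (a minimal sketch, not the paper's actual training code), the closed-form KL between two Dirichlet distributions can be computed like this; the "reverse" criterion simply evaluates it with the target distribution as the first argument:

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha, beta):
    """Closed-form KL( Dir(alpha) || Dir(beta) ) for concentration vectors alpha, beta."""
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    a0, b0 = alpha.sum(), beta.sum()
    # Standard closed-form expression via log-gamma and digamma functions.
    return (gammaln(a0) - gammaln(alpha).sum()
            - gammaln(b0) + gammaln(beta).sum()
            + ((alpha - beta) * (digamma(alpha) - digamma(a0))).sum())

# Hypothetical example values: a sharp target Dirichlet concentrated on class 0,
# and a flat model prediction over 3 classes.
target = np.array([100.0, 1.0, 1.0])
pred = np.array([5.0, 5.0, 5.0])

forward = dirichlet_kl(pred, target)   # forward criterion: KL(pred || target)
reverse = dirichlet_kl(target, pred)   # reverse criterion: KL(target || pred)
```

The asymmetry of the KL-divergence is the point: the two orderings penalize the mismatch between a model's Dirichlet and the target Dirichlet differently, and the paper argues the reverse ordering behaves better for training.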

R Moscow Meetup #5

January 16, 18:30-21:30, Thursday

  • 19:00-19:30 “Solving operational problems using R for dummies” - Konstantin Firsov (Netris JSC, Chief Implementation Engineer).
  • 19:30-20:00 “Optimization of inventory in retail” - Genrikh Ananyev (PJSC Beluga Group, Head of reporting automation).
  • 20:00-20:30 “BMS in X5: how to do business process mining on unstructured POS logs using R” - Evgeniy Roldugin (X5 Retail Group, Head of Service Quality Control Tools Department), Ilya Shutov (Media Tel, Head of Data Science Department).

Frontend Meetup in Moscow (Gastromarket Balchug)

January 18, 12:00-18:00, Saturday

  • “When is it worth rewriting an application from scratch, and how to convince the business of it” - Alexey Pyzhyanov, developer, Sibur
    The real story of how we dealt with technical debt in the most radical way. I'll cover:
    1. Why a good application turned into a terrible legacy.
    2. How we made the difficult decision to rewrite everything.
    3. How we sold this idea to the product owner.
    4. What came out of this idea in the end, and why we don’t regret the decision we made.

  • “Vuejs API mocks” — Vladislav Prusov, Frontend developer, AGIMA

Machine learning training in Avito 2.0

January 18, 12:00-15:00, Saturday

  • 12:00 “Zindi Sendy Logistics Challenge (rus)” - Roman Pyankov
  • 12:30 “Data Souls Wildfire AI (rus)” - Ilya Plotnikov
  • 13:00 Coffee break
  • 13:20 “Topcoder SpaceNet 5 Challenge & Signate The 3rd Tellus Satellite Challenge (eng)” - Ilya Kibardin
  • 14:00 Coffee break
  • 14:10 “Codalab Automated Time Series Regression (eng)” — Denis Vorotyntsev

Source: habr.com
