Release of the TensorFlow 2.0 machine learning system

A significant release of the TensorFlow 2.0 machine learning platform has been announced. It provides ready-made implementations of various deep learning algorithms, a simple programming interface for building models in Python, and a low-level C++ interface for controlling the construction and execution of computational graphs. The system's code is written in C++ and Python and is distributed under the Apache 2.0 license.

The platform was originally developed by the Google Brain team and is used in Google services for speech recognition, face detection in photos, determining the similarity of images, spam filtering in Gmail, news selection in Google News, and meaning-based translation. Distributed machine learning systems can be built on commodity hardware thanks to TensorFlow's built-in support for spreading computations across multiple CPUs and GPUs.

TensorFlow provides a library of ready-made numerical computation algorithms implemented through data flow graphs. The nodes of such graphs implement mathematical operations or input/output points, while the edges of the graph represent the multidimensional data arrays (tensors) that flow between nodes.
Nodes can be assigned to computing devices and execute asynchronously, each processing at once all the tensors that arrive at it, which makes it possible to run the nodes of a neural network simultaneously, by analogy with the simultaneous activation of neurons in the brain.
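
As an illustration (a minimal sketch, not part of the announcement), TensorFlow 2.x can trace Python code into such a data flow graph with tf.function; the MatMul and Add operations become the graph's nodes, and the tensors passed between them are its edges:

    import tensorflow as tf

    @tf.function
    def dense_layer(x, w, b):
        # Two operation nodes (MatMul and Add); x, w, b and the
        # intermediate result are the tensors flowing along the edges.
        return tf.matmul(x, w) + b

    x = tf.random.normal([1, 3])
    w = tf.random.normal([3, 2])
    b = tf.zeros([2])
    print(dense_layer(x, w, b))

    # The traced graph's operation nodes can be inspected directly:
    concrete = dense_layer.get_concrete_function(x, w, b)
    print([op.type for op in concrete.graph.get_operations()])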

The focus of the new version is on simplicity and ease of use. Notable changes:

  • A new high-level API for building and training models, Keras, has been proposed; it offers several interface options for constructing models (Sequential, Functional, Subclassing), supports eager execution (without pre-compiling a graph), and comes with a simple debugging mechanism (a Keras sketch follows the list);
  • Added the tf.distribute.Strategy API for organizing distributed training of models with minimal changes to existing code. In addition to spreading computations across multiple GPUs, experimental support is available for splitting the training process across several independent workers and for using cloud TPUs (Tensor Processing Units); a MirroredStrategy sketch follows the list;
  • Instead of the declarative model of building a graph and executing it through tf.Session, it is now possible to write ordinary Python functions that, via a call to tf.function, are converted into graphs that can then be executed remotely, serialized, or optimized for performance (sketch after the list);
  • Added the AutoGraph translator, which converts a stream of Python commands into TensorFlow expressions, allowing ordinary Python code to be used inside tf.function-decorated functions, as well as in tf.data, tf.distribute, and tf.keras (a control-flow sketch follows the list);
  • SavedModel unifies the model exchange format and adds support for saving and restoring model state. Models built with TensorFlow can now be used in TensorFlow Lite (on mobile devices), TensorFlow.js (in the browser or Node.js), TensorFlow Serving, and TensorFlow Hub (an export/restore sketch follows the list);
  • Unified the tf.train.Optimizers and tf.keras.Optimizers APIs; instead of compute_gradients, the new GradientTape class is proposed for computing gradients (sketch after the list);
  • Significantly increased performance when using a GPU: the speed of training models on systems with NVIDIA Volta and Turing GPUs has increased by up to three times;

  • A major API cleanup has been carried out: many calls were renamed or removed, and support for global variables in helper methods was dropped. Instead of tf.app, tf.flags, and tf.logging, the new absl-py API is proposed. To continue using the old API, the compat.v1 module has been prepared (a legacy-code sketch follows).
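
The Keras sketch referenced above is a minimal illustration of the three model-building styles; the layer sizes and shapes are arbitrary assumptions, not taken from the announcement:

    import tensorflow as tf

    # Sequential: a linear stack of layers.
    sequential = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])

    # Functional: an explicit graph of layer calls.
    inputs = tf.keras.Input(shape=(10,))
    hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
    outputs = tf.keras.layers.Dense(1)(hidden)
    functional = tf.keras.Model(inputs, outputs)

    # Subclassing: full control over layers and the forward pass.
    class MyModel(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.hidden = tf.keras.layers.Dense(64, activation="relu")
            self.out = tf.keras.layers.Dense(1)

        def call(self, x):
            return self.out(self.hidden(x))

    subclassed = MyModel()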
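
The MirroredStrategy sketch: synchronous training across all GPUs visible on one machine; the toy model and random data are assumptions for illustration:

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    # Variables created inside the scope are mirrored on every replica;
    # the training code itself is unchanged.
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
        model.compile(optimizer="sgd", loss="mse")

    x = tf.random.normal([64, 4])
    y = tf.random.normal([64, 1])
    model.fit(x, y, epochs=1, batch_size=16)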
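
A minimal tf.function sketch: an ordinary Python function is traced into a graph on the first call, and the graph is reused on subsequent calls:

    import tensorflow as tf

    @tf.function
    def scaled_sum(a, b):
        # Traced into a graph; no tf.Session is needed to run it.
        return 2.0 * (a + b)

    print(scaled_sum(tf.constant(1.0), tf.constant(3.0)))  # tf.Tensor(8.0, ...)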
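
The control-flow sketch for AutoGraph: plain Python loops and conditionals inside a tf.function are rewritten into graph operations (tf.while_loop, tf.cond):

    import tensorflow as tf

    @tf.function
    def collatz_steps(n):
        steps = tf.constant(0)
        while n > 1:              # converted to tf.while_loop
            if n % 2 == 0:        # converted to tf.cond
                n = n // 2
            else:
                n = 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(tf.constant(27)))  # 111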
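
The export/restore sketch for SavedModel; the Scaler module and the /tmp path are illustrative assumptions:

    import tensorflow as tf

    class Scaler(tf.Module):
        def __init__(self, factor):
            super().__init__()
            self.factor = tf.Variable(factor)

        @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
        def __call__(self, x):
            return self.factor * x

    tf.saved_model.save(Scaler(3.0), "/tmp/scaler")

    # Restoring brings back both the graph and the variable state.
    restored = tf.saved_model.load("/tmp/scaler")
    print(restored(tf.constant([1.0, 2.0])))  # [3.0, 6.0]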
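
The GradientTape sketch: operations are recorded on a "tape" during the forward pass, and the resulting gradients are applied through a Keras optimizer:

    import tensorflow as tf

    w = tf.Variable(2.0)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

    with tf.GradientTape() as tape:
        loss = w * w                      # d(loss)/dw = 2w = 4.0

    grads = tape.gradient(loss, [w])
    optimizer.apply_gradients(zip(grads, [w]))
    print(w.numpy())                      # 2.0 - 0.1 * 4.0 = 1.6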
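
The legacy-code sketch: 1.x-style graph code continues to work through the compat.v1 module:

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    a = tf.compat.v1.placeholder(tf.float32, shape=[])
    b = a * 2.0

    # The old declarative model: build the graph, then run it in a Session.
    with tf.compat.v1.Session() as sess:
        print(sess.run(b, feed_dict={a: 21.0}))  # 42.0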

Source: opennet.ru
