Your first neural network on a graphics processing unit (GPU). A beginner's guide

In this article, I will show you how to set up a machine learning environment in 30 minutes, build a neural network for image recognition, and then run the same network on a graphics processor (GPU).

First, let's define what a neural network is.

For our purposes, it is a mathematical model, together with its software or hardware implementation, built on the principle of the organization and functioning of biological neural networks, that is, the networks of nerve cells in a living organism. The concept arose from studying the processes that take place in the brain and attempting to model them.

Neural networks are not programmed in the usual sense of the word; they are trained. The ability to learn is one of the main advantages of neural networks over traditional algorithms. Technically, learning consists of finding the coefficients of the connections between neurons. During training, a neural network is able to identify complex dependencies between the input and output data and to generalize.
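
As a minimal illustration of what "finding the connection coefficients" means (a toy sketch added here for clarity, not part of the original guide), here is a single weight fitted by gradient descent:

# Toy example: learn a single "connection coefficient" w so that w * x ≈ y
x, y_true = 2.0, 6.0        # one training example; the ideal weight is 3
w, lr = 0.0, 0.1            # initial weight and learning rate
for _ in range(50):
    y_pred = w * x
    grad = 2 * (y_pred - y_true) * x    # gradient of the squared error w.r.t. w
    w -= lr * grad                      # gradient descent update
print(round(w, 3))   # converges towards 3.0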

From the standpoint of machine learning, a neural network is a special case of pattern recognition methods, discriminant analysis, clustering methods and other techniques.

Hardware

First, let's look at the hardware. We need a server with a Linux operating system installed. The hardware required for machine learning is quite powerful and, as a result, expensive. For those who don't have a good machine at hand, I recommend looking at the offerings of cloud providers. You can rent the required server quickly and pay only for the time you use it.

In projects that involve building neural networks, I use the servers of one of the Russian cloud providers. The company offers cloud servers for rent specifically for machine learning, with powerful Tesla V100 graphics processors (GPUs) from NVIDIA. In short: using a server with a GPU can be around ten times more efficient (faster) than a server of similar cost that uses a CPU (the familiar central processing unit) for the computations. This is possible thanks to the particulars of the GPU architecture, which handles these computations faster.

To run the examples described below, we rented the following server for a few days:

  • SSD disk 150 GB
  • RAM 32 GB
  • Tesla V100 16 GB processor with 4 cores

We installed Ubuntu 18.04 on our machine.

Setting up the environment

Now let's install everything we need to work on the server. Since our article is aimed primarily at beginners, I will mention a few points that may be useful to them.

Much of the environment setup is done through the command line. Most users run Windows as their working OS. The standard console in this OS leaves a lot to be desired, so we will use the convenient Cmder tool. Download the mini version and run Cmder.exe. Next, you need to connect to the server over SSH:

ssh root@server-ip-or-hostname

Instead of server-ip-or-hostname, specify the IP address or DNS name of your server. Then enter the password and, if the connection is successful, you should receive a message similar to this one.

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

The main language for developing ML models is Python, and the most popular platform for using it on Linux is Anaconda.

Let's install it on our server.

We start by updating the local package manager:

sudo apt-get update

Install curl (a command line utility):

sudo apt-get install curl

Download the latest version of the Anaconda Distribution:

cd /tmp
curl -O https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh

Let's start the installation:

bash Anaconda3-2019.10-Linux-x86_64.sh

During the installation you will be asked to confirm the license agreement. Upon successful installation you should see this:

Thank you for installing Anaconda3!
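
As a quick sanity check (assuming the installer added conda to your shell profile; you may need to reconnect over SSH first), you can verify the installation:

conda --version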

A great many frameworks have been created for developing ML models; we will work with the most popular ones: PyTorch and TensorFlow.

Using a framework lets you increase development speed and use ready-made tools for standard tasks.

In this example we will work with PyTorch. Let's install it:

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

Now we need to launch Jupyter Notebook, a development tool popular among ML specialists. It lets you write code and immediately see the results of running it. Jupyter Notebook ships with Anaconda and is already installed on our server. We need to connect to it from our desktop machine.

To do this, we first launch Jupyter on the server, specifying port 8080:

jupyter notebook --no-browser --port=8080 --allow-root

Next, opening another tab in our Cmder console (top menu - New console dialog), we connect over SSH to the server through port 8080:

ssh -L 8080:localhost:8080 root@server-ip-or-hostname

After entering the first command, we are given links for opening Jupyter in our browser:

To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-18788-open.html
    Or copy and paste one of these URLs:
        http://localhost:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311
     or http://127.0.0.1:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311

Let's use the localhost:8080 link. Copy the full path and paste it into the address bar of your PC's local browser. Jupyter Notebook will open.

Let's create a new notebook: New - Notebook - Python 3.

Let's check that all the components we installed work correctly. Paste the example PyTorch code into Jupyter and run it (the Run button):

from __future__ import print_function
import torch
x = torch.rand(5, 3)  # create a 5x3 tensor of random values
print(x)

The result should look something like this:

[Screenshot: the printed random 5x3 tensor]

If you got a similar result, then everything is configured correctly and we can start building a neural network!

Creating the neural network

We will build a neural network for image recognition, taking this tutorial as a basis.

To train the network we will use the publicly available CIFAR10 dataset. It has the classes: "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck". The images in CIFAR10 are 3x32x32, that is, 3-channel color images of 32x32 pixels.

[Image: sample CIFAR10 images]
For this work we will use the package that PyTorch provides for working with images, torchvision.

We will perform the following steps in order:

  • Loading and normalizing the training and test datasets
  • Defining the neural network
  • Training the network on the training data
  • Testing the network on the test data
  • Repeating the training and testing on the GPU

We will run all of the code below in Jupyter Notebook.

Loading and normalizing CIFAR10

Copy and run the following code in Jupyter:


import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

The output should be:

Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified
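
As a quick sanity check of the 3x32x32 shape mentioned above (a small addition, assuming the loading cell has already been run):

# Each CIFAR10 sample is a (channels, height, width) tensor after ToTensor()
sample, label = trainset[0]
print(sample.shape)    # torch.Size([3, 32, 32])
print(classes[label])  # the human-readable class name of this sample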

Let's display a few of the training images:


import matplotlib.pyplot as plt
import numpy as np

# functions to show an image

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

[Image: a grid of random training images with their labels]

Defining the neural network

First, let's look at how a neural network for image recognition works. It is a simple feed-forward network: it takes the input data, passes it through several layers one by one, and finally produces the output.

[Diagram: a simple feed-forward convolutional network]

Let's create a similar network in our environment:


import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)        # 3 input channels, 6 output channels, 5x5 kernel
        self.pool = nn.MaxPool2d(2, 2)         # 2x2 max pooling
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # fully connected layers
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)           # 10 outputs, one per CIFAR10 class

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

We also define a loss function and an optimizer:


import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

Training the network on the training data

Let's start training our neural network. Please note that after you run this code, you will have to wait until it finishes. It took me five minutes. Training the network takes time.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

We get the following result:

[Screenshot: the training loss printed every 2000 mini-batches]

We save our trained model:

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
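
A possible variant (not part of the original guide): if you plan to resume training later, you can also keep the optimizer state in the same checkpoint file (the name cifar_checkpoint.pth is just an example):

# Save model and optimizer state together so training can be resumed later
torch.save({'model': net.state_dict(),
            'optimizer': optimizer.state_dict()},
           './cifar_checkpoint.pth')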

Testing the network on the test data

We trained the network on the training dataset. But we need to check whether it has learned anything at all.

We will check this by predicting the class label that the neural network outputs and testing whether it is correct. If the prediction is correct, we add the sample to the list of correct predictions.
Let's display an image from the test set:

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

[Image: test images with their ground-truth labels]

Now let's ask the neural network to tell us what is in these images:


net = Net()
net.load_state_dict(torch.load(PATH))

outputs = net(images)

_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))

[Screenshot: the labels predicted by the network]

The results look pretty good: the network correctly identified three of the four images.

Let's see how the network performs on the whole dataset.


correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

[Screenshot: the overall accuracy on the 10000 test images]

It looks like the network knows something and is working. If it were assigning classes at random, the accuracy would be 10% (one correct guess out of ten equally likely classes).

Now let's see which classes the network identifies best:

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))

[Screenshot: the per-class accuracy]

It appears the network is best at recognizing cars and ships: 71% accuracy.

So the network works. Now let's try to move its work to the graphics processor (GPU) and see what changes.

Training the neural network on the GPU

First, let me briefly explain what CUDA is. CUDA (Compute Unified Device Architecture) is a parallel computing platform developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can significantly speed up computing applications by harnessing the power of GPUs. The platform is already installed on the server we rented.
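
Before moving anything, it can help to confirm that PyTorch actually sees the GPU. These are standard torch.cuda calls (a small check added here, not part of the original walkthrough):

import torch

print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # True if a usable GPU is detected
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. "Tesla V100-PCIE-16GB"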

Let's define our GPU as the first visible cuda device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)

[Screenshot: the printed device, e.g. cuda:0]

Sending the network to the GPU:

net.to(device)
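
A quick check that the move succeeded (a small addition; it assumes device is the cuda:0 object defined above):

print(next(net.parameters()).device)   # should print: cuda:0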

We will also send the inputs and targets at every step to the GPU:

inputs, labels = data[0].to(device), data[1].to(device)

Let's retrain the network on the GPU:

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data[0].to(device), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

This time, training the network took about 3 minutes. Recall that the same stage on a conventional processor took 5 minutes. The difference is not significant; this is because our network is not very large. When training with large arrays of data, the gap between the speed of the GPU and a traditional processor grows.
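
If you would like to benchmark the difference yourself, here is a rough sketch (added here, not from the original guide) that times forward passes of the same Net on the CPU and on the GPU; exact numbers will vary with the batch size and the hardware. Note that it moves net between devices, so run net.to(device) again afterwards if you continue training.

import time
import torch

def time_forward(model, device, iters=100):
    # Roughly time `iters` forward passes of a random CIFAR-sized batch
    model = model.to(device)
    x = torch.rand(64, 3, 32, 32, device=device)
    if device.type == 'cuda':
        torch.cuda.synchronize()   # make sure previous GPU work has finished
    start = time.time()
    with torch.no_grad():
        for _ in range(iters):
            model(x)
    if device.type == 'cuda':
        torch.cuda.synchronize()   # wait for the GPU to finish its queue
    return time.time() - start

print('CPU: %.2f s' % time_forward(net, torch.device('cpu')))
if torch.cuda.is_available():
    print('GPU: %.2f s' % time_forward(net, torch.device('cuda:0')))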

That's all. What we managed to do:

  • We looked at what a GPU is and chose a server with one installed;
  • We set up the software environment for building a neural network;
  • We built a neural network for image recognition and trained it;
  • We repeated the network training on the GPU and got an increase in speed.

I will be happy to answer questions in the comments.

Source: www.habr.com
