Your first neural network on a graphics processing unit (GPU). A beginner's guide

In this article, I will show you how to set up a machine learning environment in 30 minutes, create a neural network for image recognition, and then run the same network on a graphics processing unit (GPU).

First, let's define what a neural network is.

In our case, it is a mathematical model, as well as its software or hardware embodiment, built on the principle of the organization and functioning of biological neural networks: the networks of nerve cells in living organisms. This concept arose while studying the processes occurring in the brain and trying to model them.

Neural networks are not programmed in the usual sense of the word: they are trained. The ability to learn is one of the main advantages of neural networks over traditional algorithms. Technically, learning consists of finding the coefficients of the connections between neurons. During training, a neural network is able to identify complex dependencies between the input data and the output data, and to generalize.

From the point of view of machine learning, a neural network is a special case of pattern-recognition methods, discriminant analysis, clustering methods, and other techniques.
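
To make "finding the coefficients of the connections" concrete, here is a minimal sketch of my own (not part of the setup below) of a single artificial neuron learning one weight by gradient descent:

# One "neuron" y = w * x learning the rule y = 2 * x from a single example.
w = 0.0
lr = 0.1  # learning rate

for step in range(20):
    x, target = 1.0, 2.0
    y = w * x                       # forward pass
    grad = 2 * (y - target) * x     # gradient of the squared error w.r.t. w
    w -= lr * grad                  # adjust the connection weight

print(w)  # converges towards 2.0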

Hardware

First, let's look at the hardware. We need a server with a Linux operating system installed. The hardware required to run machine learning systems is quite powerful and, as a consequence, expensive. For those who do not have a good machine at hand, I recommend paying attention to the offers of cloud providers. You can rent the required server quickly and pay only for the time you actually use it.

For projects that involve creating neural networks, I use the servers of one of the Russian cloud providers. The company offers cloud servers for rent specifically for machine learning, with powerful NVIDIA Tesla V100 graphics processors (GPUs). In short: using a server with a GPU can be tens of times more efficient (faster) for these computations than a server of a similar price with a CPU (the familiar central processing unit). This is due to the peculiarities of the GPU architecture, which copes with such computations faster.
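
If you later want to check this claim yourself (once PyTorch is installed, as described below), a minimal benchmark sketch could look like this; the matrix size of 4096 is an arbitrary choice:

import time
import torch

# Two large random matrices for a multiplication benchmark.
a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

start = time.time()
a @ b                                   # matrix multiplication on the CPU
print('CPU: %.3f s' % (time.time() - start))

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()   # copy both matrices to the GPU
    torch.cuda.synchronize()            # wait until the copy is finished
    start = time.time()
    a_gpu @ b_gpu                       # the same multiplication on the GPU
    torch.cuda.synchronize()            # GPU calls are asynchronous
    print('GPU: %.3f s' % (time.time() - start))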

To run the examples described below, we rented the following server for a few days:

  • 150 GB SSD disk
  • 32 GB RAM
  • Tesla V100 16 GB processor with 4 cores

We installed Ubuntu 18.04 on our machine.

Setting up the environment

Now let's install everything required for our work on the server. Since this article is aimed at beginners, I will mention a few points that will be useful to them.

A lot of the work of setting up an environment is done through the command line. Most users run Windows as their working OS. The standard console in this OS leaves a lot to be desired, so we will use the convenient tool Cmder. Download the mini version and run Cmder.exe. Next, you need to connect to the server via SSH:

ssh root@server-ip-or-hostname

Instead of server-ip-or-hostname, specify the IP address or DNS name of your server. Then enter the password; if the connection succeeds, we should see a message similar to this one:

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

The main language used for developing ML models is Python, and the most popular platform for it on Linux is Anaconda.

Let's install it on our server.

We start by updating the local package manager:

sudo apt-get update

Install curl (a command-line utility):

sudo apt-get install curl

Download the latest version of the Anaconda Distribution:

cd /tmp
curl -O https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh

Start the installation:

bash Anaconda3-2019.10-Linux-x86_64.sh

During the installation, you will be asked to confirm the license agreement. After a successful installation you should see this:

Thank you for installing Anaconda3!

Many frameworks have been created for developing ML models; we will work with the most popular ones: PyTorch and TensorFlow.

Using a framework increases the speed of development and provides ready-made tools for standard tasks.

In this example we will work with PyTorch. Let's install it:

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

Now we need to launch Jupyter Notebook, a development tool that is popular with ML specialists. It lets you write code and immediately see the results of running it. Jupyter Notebook ships with Anaconda and is already installed on our server. We need to connect to it from our desktop system.

To do this, we first start Jupyter on the server, specifying port 8080:

jupyter notebook --no-browser --port=8080 --allow-root

Next, opening another tab in our Cmder console (top menu - New console dialog), we connect over SSH to port 8080 on the server:

ssh -L 8080:localhost:8080 root@server-ip-or-hostname

After entering the first command, we are offered links to open Jupyter in our browser:

To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-18788-open.html
    Or copy and paste one of these URLs:
        http://localhost:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311
     or http://127.0.0.1:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311

Let's use the link for localhost:8080. Copy the full path and paste it into the address bar of your PC's local browser. Jupyter Notebook will open.

Let's create a new notebook: New - Notebook - Python 3.

Let's check that all the components we installed work correctly. Enter the example PyTorch code into Jupyter and run it (the Run button):

from __future__ import print_function
import torch
x = torch.rand(5, 3)
print(x)

The result should be something like this:

[Image: the printed 5x3 random tensor]

If you get a similar result, everything is configured correctly and we can start creating a neural network!

Creating a neural network

We will create a neural network for image recognition, taking this guide as a basis.

To train the network we will use the publicly available CIFAR10 dataset. It has the following classes: "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck". The images in CIFAR10 are 3x32x32, that is, 3-channel color images of 32x32 pixels.

[Image: sample CIFAR10 images]
For this task we will use the package that PyTorch provides for working with images: torchvision.

We will perform the following steps in order:

  • Loading and normalizing the training and test datasets
  • Defining the neural network
  • Training the network on the training data
  • Testing the network on the test data
  • Repeating the training and testing using the GPU

We will run all of the code below in Jupyter Notebook.

Loading and normalizing CIFAR10

Copy and run the following code in Jupyter:


import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

The response should be:

Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified
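
As a quick sanity check (my own addition, assuming the trainset and classes defined above), you can confirm the 3x32x32 layout and see what Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) does: it maps each channel from [0, 1] to [-1, 1] via x' = (x - 0.5) / 0.5:

sample, label = trainset[0]
print(sample.shape)       # torch.Size([3, 32, 32]): 3 channels, 32x32 pixels
print(classes[label])     # the class name of the first training image
print(sample.min().item(), sample.max().item())  # roughly -1.0 and 1.0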

Let's display a few of the training images to check:


import matplotlib.pyplot as plt
import numpy as np

# functions to show an image

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

[Image: a grid of four random training images with their printed labels]

Defining the neural network

First, let's consider how a neural network for image recognition works. It is a simple feed-forward network: it takes the input data, passes it through several layers one by one, and finally produces the output data.

[Image: diagram of a simple feed-forward network]

Let's create a similar network in our environment:


import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
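
The 16 * 5 * 5 in fc1 is not arbitrary: a 32x32 image shrinks to 28x28 after conv1 (a 5x5 kernel with no padding), to 14x14 after pooling, to 10x10 after conv2, and to 5x5 after the second pooling, with 16 channels. A minimal sketch to verify this, reusing the layers of the net we just created:

with torch.no_grad():
    dummy = torch.rand(1, 3, 32, 32)          # one fake CIFAR10 image
    x = net.pool(F.relu(net.conv1(dummy)))    # -> torch.Size([1, 6, 14, 14])
    x = net.pool(F.relu(net.conv2(x)))        # -> torch.Size([1, 16, 5, 5])
    print(x.shape)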

We also define a loss function and an optimizer. Note that nn.CrossEntropyLoss applies softmax internally, which is why the forward method above returns raw scores rather than probabilities:


import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

Training the network on the training data

Let's start training our neural network. Note that after you run this code, you will have to wait some time for the work to complete. It took me 5 minutes. Training the network takes time.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

We get the following result:

[Image: the training log with the loss printed every 2000 mini-batches]

Let's save our trained model:

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)

Testing the network on the test data

We trained the network on the training dataset. But we need to check whether the network has learned anything at all.

We will check this by predicting the class label that the neural network outputs and testing it against the ground truth. If the prediction is correct, we add the sample to the list of correct predictions.
Let's display an image from the test set:

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

[Image: four test images with their GroundTruth labels]

Now let's ask the neural network to tell us what is in these images:


net = Net()
net.load_state_dict(torch.load(PATH))

outputs = net(images)

_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))

[Image: the Predicted labels for the four test images]

The results look quite good: the network correctly identified three of the four images.

Let's see how the network performs on the whole dataset.


correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

[Image: the overall accuracy printed for the 10000 test images]

It looks like the network knows something and is working. If it chose the classes at random, the accuracy would be 10%.

Now let's see which classes the network identifies best:

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))

[Image: the per-class accuracy output]

It seems that the network is best at identifying cars and ships: 71% accuracy.

So the network works. Now let's try to transfer its work to the graphics processor (GPU) and see what changes.

Training the neural network on the GPU

First, I will briefly explain what CUDA is. CUDA (Compute Unified Device Architecture) is a parallel computing platform developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can significantly speed up computing applications by harnessing the power of GPUs. The platform is already installed on the server we rented.
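
You can check which CUDA version your PyTorch build uses and which GPU it sees (a quick check of my own, not part of the original walkthrough):

print(torch.version.cuda)                 # CUDA version PyTorch was built with
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # the name of the installed GPU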

First, let's define our GPU as the first visible cuda device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)

[Image: the printed device, cuda:0]

Send the network to the GPU:

net.to(device)

We will also have to send the inputs and labels to the GPU at every step:

inputs, labels = data[0].to(device), data[1].to(device)

Let's train the network on the GPU:

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data[0].to(device), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

This time, training the network took about 3 minutes. Recall that the same stage on a conventional processor took 5 minutes. The difference is not significant, because our network is not that big. When using larger arrays (and larger batches) for training, the gap between the GPU and a traditional processor grows; a sketch of such an experiment follows below.
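
Here is a minimal sketch of such an experiment (my own addition; batch_size=512 is an arbitrary value, and the trainset, net, criterion, optimizer and device from above are assumed):

import time

# A loader with much larger batches keeps the GPU busier per step.
big_trainloader = torch.utils.data.DataLoader(trainset, batch_size=512,
                                              shuffle=True, num_workers=2)

start = time.time()
for data in big_trainloader:
    inputs, labels = data[0].to(device), data[1].to(device)
    optimizer.zero_grad()
    loss = criterion(net(inputs), labels)
    loss.backward()
    optimizer.step()
print('One epoch with batch_size=512 took %.1f s' % (time.time() - start))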

That seems to be all. What we managed to do:

  • We looked at what a GPU is and chose a server with one installed;
  • We set up a software environment for creating a neural network;
  • We created a neural network for image recognition and trained it;
  • We repeated the training and testing using the GPU and got a speed increase.

I will be glad to answer questions in the comments.

Source: www.habr.com
