Your first neural network on a graphics processing unit (GPU). A beginner's guide

In this article, I will show you how to set up a machine learning environment in 30 minutes, create a neural network for image recognition, and then run the same network on a graphics processor (GPU).

First, let's define what a neural network is.

In our case, it is a mathematical model, together with its software or hardware embodiment, built on the principle of the organization and functioning of biological neural networks: the networks of nerve cells in a living organism. The concept arose from studying the processes that take place in the brain and attempting to model those processes.

Neural networks are not programmed in the usual sense of the word; they are trained. The ability to learn is one of the main advantages of neural networks over traditional algorithms. Technically, learning consists of finding the coefficients of the connections between neurons. During the training process, a neural network is able to identify complex dependencies between input data and output data, and to generalize.
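To make "finding the coefficients of the connections" concrete, here is a minimal sketch (plain Python, not part of the article's setup) that trains a single neuron with one weight to fit y = 2x by gradient descent; the data and learning rate are illustrative choices:

```python
# A single "neuron" with one weight w, trained to fit y = 2x.
# Learning here is literally the search for the connection coefficient w.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                      # initial connection coefficient
lr = 0.05                    # learning rate
for _ in range(200):
    for x, y in data:
        pred = w * x                 # forward pass
        grad = 2 * (pred - y) * x    # d/dw of the squared error
        w -= lr * grad               # gradient descent step

print(round(w, 3))  # converges very close to 2.0
```

Real networks do exactly this, only with millions of weights and automatically computed gradients.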

From a machine learning point of view, a neural network is a special case of pattern recognition methods, discriminant analysis, clustering methods, and other such techniques.

Resources

First, let's look at the hardware. We need a server with a Linux operating system installed. The resources required to run machine learning systems are quite powerful and, as a result, expensive. For those who do not have a good machine at hand, I recommend looking at the offerings of cloud providers. You can rent the server you need quickly and pay only for the time you use it.

On projects where neural networks need to be built, I use the servers of one of the Russian cloud providers. The company offers cloud servers for rent specifically for machine learning, with powerful Tesla V100 graphics processors (GPUs) from NVIDIA. In short: a server with a GPU can be tens of times more efficient (faster) than a server of similar cost that uses a CPU (the familiar central processing unit) for the calculations. This is due to the features of the GPU architecture, which handles these computations faster.
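The speedup described here is easy to check for yourself on heavy linear algebra, which is what neural network training mostly consists of. A rough benchmarking sketch (the exact speedup depends on your hardware; it falls back to CPU-only timing if no GPU is visible):

```python
import time
import torch

def bench(device, n=2048, repeats=5):
    """Time an n x n matrix multiplication on the given device."""
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()          # finish any pending GPU work first
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()          # wait for the GPU to finish
    return (time.perf_counter() - start) / repeats

cpu_time = bench(torch.device("cpu"))
print(f"CPU: {cpu_time:.4f} s per matmul")
if torch.cuda.is_available():
    gpu_time = bench(torch.device("cuda:0"))
    print(f"GPU: {gpu_time:.4f} s per matmul, ~{cpu_time / gpu_time:.0f}x faster")
```

The `torch.cuda.synchronize()` calls matter: CUDA kernels run asynchronously, so without them you would be timing only the kernel launch, not the computation.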

To run the examples described below, we purchased the following server for a few days:

  • 150 GB SSD disk
  • 32 GB RAM
  • Tesla V100 16 GB processor with 4 cores

We installed Ubuntu 18.04 on our machine.

Setting up the environment

Now let's install everything we need for the work on the server. Since this article is aimed primarily at beginners, I will point out a few things that will be useful to them.

Much of the work when setting up an environment is done on the command line. Most users run Windows as their working OS, and the standard console in that OS leaves much to be desired, so we will use the convenient tool Cmder. Download the mini version and run Cmder.exe. Next, you need to connect to the server via SSH:

ssh root@server-ip-or-hostname

Instead of server-ip-or-hostname, specify the IP address or DNS name of your server. Then enter the password and, if the connection succeeds, you should see a message like this:

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

The main language for developing ML models is Python, and the most popular platform for using it on Linux is Anaconda.

Let's install it on our server.

We start by updating the local package manager:

sudo apt-get update

Install curl (a command line utility):

sudo apt-get install curl

Download the latest version of the Anaconda Distribution:

cd /tmp
curl -O https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh

Let's start the installation:

bash Anaconda3-2019.10-Linux-x86_64.sh

During installation, you will be asked to confirm the license agreement. After a successful installation, you should see this:

Thank you for installing Anaconda3!

By now, many frameworks have been created for developing ML models; we will work with the best-known ones: PyTorch and TensorFlow.

Using a framework speeds up development and gives you ready-made tools for standard tasks.
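One concrete thing such a framework gives you "for free" is automatic differentiation: you write only the forward computation, and the framework derives the gradients needed for training. A tiny self-contained illustration with PyTorch's autograd:

```python
import torch

# y = w * x with w = 3 and x = 2, so dy/dw should equal x = 2.
w = torch.tensor(3.0, requires_grad=True)
x = torch.tensor(2.0)

y = w * x
y.backward()       # autograd computes dy/dw and stores it in w.grad

print(w.grad)      # tensor(2.)
```

Without a framework, you would derive and hand-code the gradient of every layer yourself.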

In this example we will work with PyTorch. Let's install it:

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

Now we need to launch Jupyter Notebook, a development tool popular with ML specialists. It lets you write code and immediately see the results of running it. Jupyter Notebook ships with Anaconda, so it is already installed on our server. We need to connect to it from our desktop system.

To do this, we first launch Jupyter on the server, specifying port 8080:

jupyter notebook --no-browser --port=8080 --allow-root

Next, opening another tab in our Cmder console (top menu - New console dialog), we connect to the server over SSH with port 8080 forwarded:

ssh -L 8080:localhost:8080 root@server-ip-or-hostname

After entering the first command, we are given links for opening Jupyter in our browser:

To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-18788-open.html
    Or copy and paste one of these URLs:
        http://localhost:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311
     or http://127.0.0.1:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311

Let's use the localhost:8080 link. Copy the full path and paste it into the address bar of your local PC's browser. Jupyter Notebook will open.

Let's create a new notebook: New - Notebook - Python 3.

Let's verify that all the components we installed work correctly. Enter the following PyTorch example into Jupyter and run it (Run button):

from __future__ import print_function
import torch
x = torch.rand(5, 3)
print(x)

The result should be a 5x3 tensor filled with random values between 0 and 1.

If you got a similar result, everything is set up correctly and we can start building a neural network!

Creating a neural network

We will create a neural network for image recognition, taking this guide as a basis.

To train the network, we will use the publicly available CIFAR10 dataset. Its classes are: "plane", "car", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck". Images in CIFAR10 are 3x32x32, that is, 3-channel color images of 32x32 pixels.

For this task, we will use torchvision, the package PyTorch provides for working with images.

We will carry out the following steps in order:

  • Load and normalize the training and test datasets
  • Define a neural network
  • Train the network on the training data
  • Test the network on the test data
  • Repeat the training and testing using the GPU

We will run all of the code below in Jupyter Notebook.

Loading and normalizing CIFAR10

Copy and run the following code in Jupyter:


import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

The output should be:

Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified

Let's display a few training images to check:


import matplotlib.pyplot as plt
import numpy as np

# functions to show an image

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))


Neural Network Definition

First, let's consider how a neural network for image recognition works. This is a simple feed-forward network: it takes the input data, passes it through several layers one after another, and finally produces the output data.


Let's create a similar network in our environment:


import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

We also define a loss function and an optimizer:


import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
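A useful reference point before training: with 10 equally likely classes, the cross-entropy loss of an uninformative prediction is ln(10) ≈ 2.303, which is roughly where the printed loss should start before it begins to drop. A self-contained check:

```python
import math
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# All-zero logits mean "no preference among the 10 classes".
logits = torch.zeros(4, 10)
labels = torch.tensor([0, 1, 2, 3])
loss = criterion(logits, labels)

print(round(loss.item(), 3), round(math.log(10), 3))  # 2.303 2.303
```

If your very first loss printout is far above this, something is usually wrong with the data or the learning rate.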

Training the network on the training data

Let's start training our neural network. Note that after you run this code, you will need to wait a while for it to finish; it took me 5 minutes. Training the network takes time.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

As a result, we see the loss printed every 2000 mini-batches, gradually decreasing as training progresses.

We save our trained model:

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)

Testing the network on the test data

We trained the network on the training dataset. But we need to check whether the network has learned anything at all.

We will check this by taking the class label that the neural network predicts and comparing it with the ground truth. If the prediction is correct, we add the sample to the list of correct predictions.
Let's display an image from the test set:

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))


Now let's ask the neural network what is in these images:


net = Net()
net.load_state_dict(torch.load(PATH))

outputs = net(images)

_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))


The results look pretty good: the network correctly identified three of the four images.

Let's see how the network performs across the whole dataset.


correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))


It seems the network has learned something and is working. If it were picking classes at random, the accuracy would be 10%.

Now let's see which classes the network identifies best:

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))


It seems the network is best at identifying cars and ships: 71% accuracy.

So the network works. Now let's try to move its work to the graphics processor (GPU) and see what changes.

Training the neural network on the GPU

First, I will briefly explain what CUDA is. CUDA (Compute Unified Device Architecture) is a parallel computing platform developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can significantly accelerate computing applications by harnessing the power of GPUs. The platform is already installed on the server we purchased.
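Before moving work to the GPU, you can ask PyTorch what it sees through CUDA. These calls are part of the standard torch.cuda API; the device name and memory printed depend on your machine:

```python
import torch

print(torch.cuda.is_available())        # True if a CUDA GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.device_count())                 # number of GPUs
    print(torch.cuda.get_device_name(0))             # device model name
    props = torch.cuda.get_device_properties(0)
    print(props.total_memory // 1024 ** 2, "MiB")    # GPU memory
```

On the server described above, you would expect to see a single Tesla V100 with 16 GB of memory.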

First, let's define our GPU as the first visible CUDA device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)

cuda:0

Send the network to the GPU:

net.to(device)

We will also need to send the inputs and targets to the GPU at each step:

inputs, labels = data[0].to(device), data[1].to(device)

Let's retrain the network on the GPU:

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data[0].to(device), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

This time, training the network took about 3 minutes. Recall that the same stage on a conventional processor took 5 minutes. The difference is not dramatic, because our network is not very large. When training with larger arrays, the gap between the GPU and a conventional processor will grow.
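To reproduce this comparison exactly, you can wrap the training loop in a small timer instead of using a stopwatch. A minimal stdlib sketch; the callable passed to `timed` stands in for any function, e.g. your training loop wrapped in a `def`:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Usage with any callable; here a cheap stand-in for a training loop:
_, seconds = timed(sum, range(1_000_000))
print(f"took {seconds:.3f} s")
```

One caveat for GPU code: call torch.cuda.synchronize() inside the timed function before it returns, because CUDA kernels run asynchronously and the clock would otherwise stop before the GPU has finished.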

That seems to be everything. What we managed to do:

  • We looked at what a GPU is and chose a server with one installed;
  • We set up a software environment for building a neural network;
  • We created a neural network for image recognition and trained it;
  • We repeated the network training on the GPU and got a speedup.

I will be glad to answer questions in the comments.

Source: www.habr.com
