Your first neural network on a graphics processing unit (GPU). A beginner's guide

In this article, I will show you how to set up a machine learning environment in 30 minutes, create a neural network for image recognition, and then run that network on a graphics processing unit (GPU).

First, let's define what a neural network is.

In our case, it is a mathematical model, together with its software or hardware implementation, built on the principle of the organization and functioning of biological neural networks — the networks of nerve cells of a living organism. The concept arose from studying the processes that occur in the brain and trying to model them.

Neural networks are not programmed in the usual sense of the word; they are trained. The ability to learn is one of the main advantages of neural networks over traditional algorithms. Technically, learning consists of finding the coefficients of the connections between neurons. During training, the neural network is able to identify complex dependencies between input and output data and to generalize.
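To make "learning is finding the connection weights" concrete, here is a minimal sketch — an illustration of the idea only, not the training method used later in this article. A single weight is nudged by gradient descent until one "neuron" y = w * x reproduces the mapping x -> 2x:

```python
# A single "neuron" y = w * x learns the mapping x -> 2x by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output) pairs

w = 0.0    # the connection weight to be learned
lr = 0.05  # learning rate: how far each step moves the weight
for _ in range(100):
    for x, target in data:
        y = w * x                    # forward pass
        grad = 2 * (y - target) * x  # d/dw of the squared error (y - target)**2
        w -= lr * grad               # nudge the weight toward lower error

print(round(w, 3))  # → 2.0
```

Real networks do exactly this, only with millions of weights and automatic differentiation instead of a hand-written gradient.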

From the standpoint of machine learning, a neural network is a special case of pattern-recognition methods, discriminant analysis, clustering methods, and other techniques.

Equipment

First, let's look at the hardware. We need a server with the Linux operating system installed. The equipment required to run machine learning systems is quite powerful and, as a consequence, expensive. For those who don't have a good machine at hand, I recommend looking at the offerings of cloud providers. You can rent the required server quickly and pay only for the time you actually use it.

In projects that involve building neural networks, I use the servers of one of the Russian cloud providers. The company offers cloud servers for rent specifically for machine learning, equipped with Tesla V100 graphics processing units (GPU) from NVIDIA. In short: using a server with a GPU can be tens of times more efficient (faster) than a server of similar cost that uses a CPU (the familiar central processing unit) for its computations. This is due to the peculiarities of the GPU architecture, which gets through the calculations faster.

To implement the examples described below, we rented the following server for a few days:

  • 150 GB SSD disk
  • 32 GB RAM
  • Tesla V100 16 GB; processor with 4 cores

We installed Ubuntu 18.04 on our machine.

Setting up the environment

Now let's install everything needed for our work on the server. Since this article is aimed primarily at beginners, I will cover a few points that will be useful to them.

A lot of the work when setting up an environment is done through the command line. Most users run Windows as their working OS, and the standard console in that OS leaves much to be desired. Therefore, we will use the convenient free tool Cmder. Download the mini version and run Cmder.exe. Next, you need to connect to the server via SSH:

ssh root@server-ip-or-hostname

Instead of server-ip-or-hostname, specify the IP address or DNS name of your server. Then enter the password; if the connection is successful, you should receive a message similar to this one.

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

The main language for developing ML models is Python, and the most popular platform for using it on Linux is Anaconda.

Let's install it on our server.

We start by updating the local package manager:

sudo apt-get update

Install curl (a command-line utility):

sudo apt-get install curl

Download the current version of the Anaconda Distribution:

cd /tmp
curl -O https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh

Start the installation:

bash Anaconda3-2019.10-Linux-x86_64.sh

During installation, you will be asked to confirm the license agreement. Upon successful installation, you should see this:

Thank you for installing Anaconda3!

Many frameworks have been created for developing ML models; we work with the most popular ones: PyTorch and TensorFlow.

Using a framework lets you increase development speed and rely on ready-made tools for standard tasks.

In this example, we will work with PyTorch. Let's install it:

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

Now we need to launch Jupyter Notebook, a development tool popular among ML specialists. It lets you write code and immediately see the results of running it. Jupyter Notebook ships with Anaconda and is already installed on our server. We need to connect to it from our desktop system.

To do this, we first launch Jupyter on the server, specifying port 8080:

jupyter notebook --no-browser --port=8080 --allow-root

Next, open another tab in our Cmder console (top menu - new console dialog) and connect over SSH to the server with port 8080 forwarded:

ssh -L 8080:localhost:8080 root@server-ip-or-hostname

When we enter the first command, we are given links for opening Jupyter in our browser:

To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-18788-open.html
    Or copy and paste one of these URLs:
        http://localhost:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311
     or http://127.0.0.1:8080/?token=cca0bd0b30857821194b9018a5394a4ed2322236f116d311

Use the link for localhost:8080. Copy the full path and paste it into the address bar of your PC's local browser. Jupyter Notebook will open.

Let's create a new notebook: New - Notebook - Python 3.

Let's check that all the components we installed are working correctly. Enter the example PyTorch code into Jupyter and run it (Run button):

from __future__ import print_function
import torch
x = torch.rand(5, 3)
print(x)

The result should be something like this:

[Screenshot: a 5x3 tensor of random values]

If your result is similar, then everything is configured correctly and we can start developing a neural network!

Creating a neural network

We will create a neural network for image recognition, taking this guide as our basis.

To train the network, we will use the publicly available CIFAR10 dataset. It has the classes: "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck". The images in CIFAR10 are 3x32x32, that is, 3-channel color images of 32x32 pixels.

[Image: sample CIFAR10 pictures for each class]
For this work, we will use the package PyTorch provides for working with images — torchvision.

We will perform the following steps in order:

  • Load and normalize the training and test datasets
  • Define the neural network
  • Train the network on the training data
  • Test the network on the test data
  • Repeat the training and testing using the GPU

We will run all the code below in Jupyter Notebook.

Loading and normalizing CIFAR10

Copy and run the following code in Jupyter:


import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

The response should look like this:

Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified

Let's display a few of the training images as a check:


import matplotlib.pyplot as plt
import numpy as np

# functions to show an image

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

[Image: a grid of random training images with their labels]

Defining the neural network

First, let's consider how a neural network for image recognition works. This is a simple feed-forward network: it takes the input data, passes it through several layers one after another, and finally produces the output data.

[Image: diagram of a convolutional neural network]

Let's create a similar network in our environment:


import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
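The 16 * 5 * 5 in fc1 above is not arbitrary: it is the flattened size of the feature map after both convolution/pooling stages. A quick sanity check with the standard convolution output-size formula (plain arithmetic, independent of PyTorch):

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard convolution output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

s = 32              # CIFAR10 images are 32x32
s = conv_out(s, 5)  # conv1 with a 5x5 kernel -> 28
s = s // 2          # 2x2 max pooling -> 14
s = conv_out(s, 5)  # conv2 with a 5x5 kernel -> 10
s = s // 2          # 2x2 max pooling -> 5
print(16 * s * s)   # 16 channels * 5 * 5 = 400, the input size of fc1
```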

Let's also define a loss function and an optimizer:


import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
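The two objects have distinct roles: the loss measures how wrong the network's scores are, and the optimizer updates the weights to reduce that loss. As a standalone illustration (not part of the tutorial's pipeline, and with made-up score values), CrossEntropyLoss compares raw class scores (logits) against the true class index and shrinks as the true class dominates:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
target = torch.tensor([3])  # the true class is index 3

confident = torch.tensor([[0., 0., 0., 5., 0., 0., 0., 0., 0., 0.]])
uniform = torch.zeros(1, 10)  # all 10 classes equally likely

# A confident, correct prediction yields a much smaller loss than a
# uniform one (the uniform case gives exactly ln(10) ≈ 2.303).
print(criterion(confident, target).item())
print(criterion(uniform, target).item())
```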

Training the network on the training data

Let's start training our neural network. Note that after you run this code, you will have to wait for it to finish. It took me 5 minutes. Training the network takes time.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

We get the following result:

[Screenshot: the loss values decreasing over the mini-batches]

Let's save our trained model:

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)

Testing the network on the test data

We trained the network using the training dataset. But we need to check whether the network has actually learned anything.

We will test this by having the neural network predict a class label and checking whether it is true. If the prediction is correct, we add the sample to the list of correct predictions.
Let's display an image from the test set:

dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

[Image: test images with their ground-truth labels]

Now let's ask the neural network to tell us what is in these pictures:


net = Net()
net.load_state_dict(torch.load(PATH))

outputs = net(images)

_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))

[Screenshot: the predicted labels]

The results look good: the network correctly identified three of the four pictures.

Let's see how the network performs across the whole dataset.


correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

[Screenshot: the overall accuracy on the 10000 test images]

It seems the network knows something and is working. If it were picking classes at random, the accuracy would be 10%.

Now let's see which classes the network identifies best:

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))

[Screenshot: the per-class accuracy figures]

It seems the network is best at identifying cars and ships: 71% accuracy.

So the network is working. Now let's try to transfer its work to the graphics processing unit (GPU) and see what changes.

Training the neural network on the GPU

First, I will briefly explain what CUDA is. CUDA (Compute Unified Device Architecture) is a parallel computing platform created by NVIDIA for general-purpose computing on graphics processing units (GPU). With CUDA, developers can significantly accelerate computing applications by harnessing the power of GPUs. This platform is already installed on the server we rented.
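Before moving anything to the GPU, it is worth checking that the PyTorch build installed earlier can actually see the card. A quick check — the device name in the comment is what one would expect on this server, not a guaranteed output:

```python
import torch

# True only if PyTorch was installed with CUDA support AND a GPU is visible.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # e.g. "Tesla V100-PCIE-16GB" on a server like ours
    print(torch.cuda.get_device_name(0))
```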

Let's define our GPU as the first visible cuda device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)

[Screenshot: the output "cuda:0"]

Sending the network to the GPU:

net.to(device)

We will also send the inputs and targets at every step to the GPU:

inputs, labels = data[0].to(device), data[1].to(device)

Let's retrain the network, this time on the GPU:

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data[0].to(device), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

This time, training the network took about 3 minutes. Recall that the same stage on a conventional processor took 5 minutes. The difference is not significant, because our network is not that big. When large arrays are used for training, the gap between the speed of the GPU and that of a traditional processor grows.
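That last claim is easy to probe yourself. Below is a rough benchmark sketch — the matrix size and repetition count are arbitrary illustrative choices, not a rigorous measurement. Note that torch.cuda.synchronize() is required because GPU kernels are launched asynchronously, so without it the timer would stop before the work is done:

```python
import time
import torch

def time_matmul(device, n=2000, reps=10):
    # Multiply two random n-by-n matrices `reps` times; return elapsed seconds.
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup has finished before timing
    start = time.time()
    for _ in range(reps):
        c = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for all queued kernels to complete
    return time.time() - start

print("CPU: %.3f s" % time_matmul("cpu"))
if torch.cuda.is_available():
    print("GPU: %.3f s" % time_matmul("cuda"))
```

On a GPU like the V100, the gap widens dramatically as n grows, which is exactly the effect described above.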

That seems to be everything. What we managed to do:

  • We looked at what a GPU is and chose a server on which one is installed;
  • We set up a software environment for creating a neural network;
  • We created a neural network for image recognition and trained it;
  • We repeated the network training using the GPU and got a speed increase.

I will be glad to answer questions in the comments.

Source: will.com
