Fading Coder

One Final Commit for the Last Sprint


Essential PyTorch Techniques for Deep Learning Implementation

Core Development Tools

dir(): inspect an object's attributes
help(): access the official documentation
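Both utilities can be used directly on the torch package itself, which makes them handy for exploring an unfamiliar API:

```python
import torch

# dir() lists the attributes and submodules of any object
print("nn" in dir(torch))        # True — torch.nn is an attribute of torch

# help() prints the docstring of any object
help(torch.cuda.is_available)
```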

Data Loading Fundamentals

import os
from torch.utils.data import Dataset
from PIL import Image

class CustomDataset(Dataset):
    def __init__(self, base_dir, category_dir):
        self.base_path = base_dir
        self.category = category_dir
        self.full_path = os.path.join(base_dir, category_dir)
        # Sort for a deterministic order; os.listdir order is arbitrary
        self.image_list = sorted(os.listdir(self.full_path))

    def __getitem__(self, index):
        img_file = self.image_list[index]
        img_path = os.path.join(self.base_path, self.category, img_file)
        image = Image.open(img_path)
        # The directory name doubles as the label
        return image, self.category

    def __len__(self):
        return len(self.image_list)

Visualization with TensorBoard

from torch.utils.tensorboard import SummaryWriter
from PIL import Image
import numpy as np

writer = SummaryWriter('logs')

# Track scalar values
for iteration in range(100):
    writer.add_scalar("y=2x", 2*iteration, iteration)

# Display images
image_array = np.array(Image.open("sample.jpg"))
writer.add_image("sample", image_array, 1, dataformats='HWC')

writer.close()
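Beyond `add_scalar`, `add_scalars` can group several related curves under one tag so they appear on the same chart. A minimal sketch, written to a temporary directory so it runs anywhere:

```python
import tempfile
from torch.utils.tensorboard import SummaryWriter

# Use a throwaway log directory for this example
logdir = tempfile.mkdtemp()
writer = SummaryWriter(logdir)

# Log two related curves under the shared tag "y"
for step in range(100):
    writer.add_scalars("y", {"y=x": step, "y=2x": 2 * step}, step)

writer.close()
```

After writing logs, the dashboard is launched from a terminal with `tensorboard --logdir logs` and viewed in a browser.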

Image Transformations

from torchvision import transforms

transform_pipeline = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

Neural Network Architecture

import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32*16*16, 10)
        )

    def forward(self, x):
        return self.layers(x)
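A dummy forward pass verifies the layer dimensions. The `32*16*16` input to the Linear layer assumes 32x32 images: the padded convolution preserves height and width, and the pooling halves them. The same stack is rebuilt with `nn.Sequential` here so the check is self-contained:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 3x32x32 -> 32x32x32
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32x32 -> 32x16x16
    nn.Flatten(),                                # -> 8192 features
    nn.Linear(32 * 16 * 16, 10),
)

dummy = torch.randn(4, 3, 32, 32)  # batch of 4 fake 32x32 RGB images
out = model(dummy)
print(out.shape)  # torch.Size([4, 10])
```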

Training Pipeline

import torch.optim as optim

model = NeuralNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# 'dataloader' is assumed to be a torch.utils.data.DataLoader
# yielding (inputs, labels) batches
for epoch in range(10):
    for inputs, labels in dataloader:
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
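Since `dataloader` is not constructed in the snippet above, here is one way to wire the loop up end-to-end. This is a minimal sketch using synthetic tensors; the shapes assume the 32x32-input model from the previous section:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for a real dataset so the loop runs as-is
inputs = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
dataloader = DataLoader(TensorDataset(inputs, labels), batch_size=16, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

for epoch in range(2):
    for x, y in dataloader:
        loss = criterion(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print(loss.item())
```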

Pretrained Model Adaptation

import torchvision

# Note: newer torchvision versions replace pretrained=True with
# weights=torchvision.models.VGG16_Weights.DEFAULT
vgg16 = torchvision.models.vgg16(pretrained=True)

# Replace the final classifier layer; num_classes must match the target task
vgg16.classifier[6] = nn.Linear(4096, num_classes)

Model Persistence

import torch

# Save only the learned parameters (the recommended approach)
torch.save(model.state_dict(), 'model_weights.pth')

# Load: rebuild the architecture, then restore the weights
model = NeuralNet()
model.load_state_dict(torch.load('model_weights.pth'))
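The state_dict round trip works on any module; a tiny linear layer keeps this sketch self-contained. Calling `eval()` before inference is easy to forget, so it is included here:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # small stand-in model
torch.save(model.state_dict(), "model_weights.pth")

# Rebuild the same architecture, then restore the weights
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("model_weights.pth"))
restored.eval()  # switch to inference mode (affects dropout/batchnorm layers)

# The restored weights match the originals exactly
print(torch.equal(model.weight, restored.weight))  # True
```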

