Implementing Neural Architectures with PyTorch

The torch.nn namespace encapsulates the essential building blocks for constructing deep learning pipelines. These modules accept Tensor inputs, perform computations to generate outputs, and maintain internal parameters. Users typically construct models either by composing layers with the nn.Sequential container or by subclassing the nn.Module base class.

Defining Layer Units

Begin by instantiating a dense (fully connected) layer. This component applies a linear transformation that maps input vectors to the output space, for instance projecting a 512-dimensional input down to 128 units.

import torch
from torch import nn

batch_samples = torch.randn(32, 512)  # a batch of 32 samples, 512 features each
transformation_layer = nn.Linear(in_features=512, out_features=128)
result = transformation_layer(batch_samples)

print(f'Input Shape: {batch_samples.shape}')   # torch.Size([32, 512])
print(f'Output Shape: {result.shape}')         # torch.Size([32, 128])

Compositional Models with Sequential

Stacking operations into a pipeline is straightforward with nn.Sequential. Consider a multilayer perceptron that maps a 10-dimensional input through hidden layers of 20 and 5 units, with ReLU activations in between, before reaching a single output node.

from torch import nn

sequence_model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 5),
    nn.ReLU(),
    nn.Linear(5, 1)
)
print(sequence_model)
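
As a quick sanity check, the container can be called directly on a batch of inputs. The sketch below uses an illustrative batch of 4 random samples matching the expected 10 features:

import torch

sample_batch = torch.randn(4, 10)   # 4 samples, 10 features each
predictions = sequence_model(sample_batch)
print(predictions.shape)            # torch.Size([4, 1])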

Subclassing for Custom Logic

Creating a custom architecture requires inheriting from nn.Module. Declare the layers in the constructor and implement the forward pass explicitly in the forward method.

import torch
import torch.nn.functional as F
from torch import nn

class DeepClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_a = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=5)
        self.conv_b = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=5)
        # 128 * 5 * 5 assumes a 1 x 32 x 32 input: the two conv/pool stages
        # reduce the spatial dimensions 32 -> 28 -> 14 -> 10 -> 5
        self.dense_top = nn.Linear(128 * 5 * 5, 64)
        self.output_layer = nn.Linear(64, 10)

    def forward(self, x):
        x = F.relu(self.conv_a(x))
        x = F.max_pool2d(x, 2)
        x = F.relu(self.conv_b(x))
        x = F.max_pool2d(x, 2)
        x = torch.flatten(x, 1)
        x = F.relu(self.dense_top(x))
        x = self.output_layer(x)  # project to the 10 class scores
        return F.log_softmax(x, dim=1)

Once defined, instantiate the class; layers assigned in the constructor are registered as parameters automatically, and printing the instance reveals its structure.
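
For example (using an illustrative classifier variable), printing the instance lists the registered submodules, and named_parameters confirms each layer's weight tensors:

classifier = DeepClassifier()
print(classifier)

for name, params in classifier.named_parameters():
    print(f'{name}: {params.shape}')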

Hardware Configuration

Model parameters are allocated on the CPU by default. To enable GPU acceleration, transfer both the model's parameters and its input tensors to the CUDA device.

model_instance = DeepClassifier()

# Check current device
print(f'Current Device: {next(model_instance.parameters()).device}')

# Move to GPU if available, otherwise stay on CPU
target_device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model_instance.to(target_device)
print(f'New Device: {next(model_instance.parameters()).device}')
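
Inputs must live on the same device as the model, or the forward pass raises a device mismatch error. A minimal sketch, assuming an illustrative dummy_input shaped like a single 1 x 32 x 32 image:

dummy_input = torch.randn(1, 1, 32, 32).to(target_device)
output = model_instance(dummy_input)
print(f'Output Device: {output.device}')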

Inspecting Architecture

Third-party tools like torchsummary can visualize layer configurations and parameter counts. Install the library first.

pip install torchsummary

Then call the summary function to view each layer's output shape and parameter count.

from torchsummary import summary
# Match the device the model currently lives on ('cuda' or 'cpu')
summary(model_instance, input_size=(1, 32, 32), device=target_device.type)
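
If installing a third-party package is undesirable, a raw parameter count takes one line of plain PyTorch:

total_params = sum(p.numel() for p in model_instance.parameters())
print(f'Total Parameters: {total_params}')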