Fading Coder

One Final Commit for the Last Sprint


Physics-Regularized Neural Networks for Hydrological State Estimation

Tech · May 8

Neural Network Architecture

The estimator employs a multi-layer perceptron with exponential linear unit (ELU) activations and intermediate dropout for regularization, mapping atmospheric forcing variables to subsurface storage states.

import torch
import torch.nn.functional as F
from torch import nn, optim

class HydrologicalStateEstimator(nn.Module):
    def __init__(self, input_features=8):
        super().__init__()
        self.hidden_dim = 96
        self.encoder = nn.Linear(input_features, self.hidden_dim)
        self.transform = nn.Linear(self.hidden_dim, self.hidden_dim // 2)
        self.predictor = nn.Linear(self.hidden_dim // 2, 1)
        self.dropout = nn.Dropout(0.15)
    
    def forward(self, atmospheric_forcing):
        h = F.elu(self.encoder(atmospheric_forcing))
        h = self.dropout(F.elu(self.transform(h)))
        return self.predictor(h)
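A quick shape check confirms the wiring. This sketch repeats the module definition so the snippet runs standalone; the batch size and random forcing values are purely illustrative:

```python
import torch
import torch.nn.functional as F
from torch import nn

class HydrologicalStateEstimator(nn.Module):
    def __init__(self, input_features=8):
        super().__init__()
        self.hidden_dim = 96
        self.encoder = nn.Linear(input_features, self.hidden_dim)
        self.transform = nn.Linear(self.hidden_dim, self.hidden_dim // 2)
        self.predictor = nn.Linear(self.hidden_dim // 2, 1)
        self.dropout = nn.Dropout(0.15)

    def forward(self, atmospheric_forcing):
        h = F.elu(self.encoder(atmospheric_forcing))
        h = self.dropout(F.elu(self.transform(h)))
        return self.predictor(h)

net = HydrologicalStateEstimator()
net.eval()  # disable dropout for a deterministic forward pass
batch = torch.randn(4, 8)  # 4 synthetic samples, 8 forcing variables each
with torch.no_grad():
    prediction = net(batch)
print(prediction.shape)  # one scalar storage state per sample
```

Calling `net.eval()` before inference matters here: `nn.Dropout` zeroes activations only in training mode, so evaluation-mode forward passes are deterministic.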

Physics-Constrained Objective Function

The optimization target combines empirical error minimization with a residual penalty that enforces mass conservation. The constraint ties predicted storage anomalies to the net water flux: precipitation minus evapotranspiration minus subsurface drainage.

def compute_physics_loss(predicted_state, observed_state, forcing_variables, physics_weight=0.25):
    # Data fidelity component
    fidelity_loss = F.mse_loss(predicted_state, observed_state)
    
    # Extract hydrological flux terms
    # Tensor columns: [precipitation, evapotranspiration, subsurface_drainage]
    precipitation = forcing_variables[:, 0]
    evaporation = forcing_variables[:, 1]
    drainage = forcing_variables[:, 2]
    
    # Continuity constraint: Storage ≈ Inputs - Outputs
    expected_change = precipitation - evaporation - drainage
    storage_residual = predicted_state.view(-1) - expected_change
    conservation_penalty = torch.mean(storage_residual ** 2)
    
    return fidelity_loss + physics_weight * conservation_penalty
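A toy batch makes the two terms concrete: when the predicted storage change matches both the observation and the net flux, the loss vanishes; a biased prediction is penalized through both terms. The function is repeated so the snippet runs standalone, and the flux values are illustrative, not drawn from any dataset:

```python
import torch
import torch.nn.functional as F

def compute_physics_loss(predicted_state, observed_state, forcing_variables, physics_weight=0.25):
    fidelity_loss = F.mse_loss(predicted_state, observed_state)
    precipitation = forcing_variables[:, 0]
    evaporation = forcing_variables[:, 1]
    drainage = forcing_variables[:, 2]
    expected_change = precipitation - evaporation - drainage
    storage_residual = predicted_state.view(-1) - expected_change
    conservation_penalty = torch.mean(storage_residual ** 2)
    return fidelity_loss + physics_weight * conservation_penalty

# One sample: 2.0 units precipitation, 0.5 evapotranspiration, 0.5 drainage,
# so the continuity constraint expects a storage change of 1.0.
forcing = torch.tensor([[2.0, 0.5, 0.5]])
observed = torch.tensor([[1.0]])

perfect = compute_physics_loss(torch.tensor([[1.0]]), observed, forcing)
biased = compute_physics_loss(torch.tensor([[1.5]]), observed, forcing)
print(perfect.item(), biased.item())  # 0.0 for the consistent prediction
```

The biased case pays twice: 0.25 in squared data error plus 0.25 × 0.25 in conservation penalty, for a composite loss of 0.3125.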

Optimization Loop

Training employs adaptive moment estimation with weight decay and learning rate annealing based on plateau detection.

def train_model(network, data_loader, epochs=200):
    optimizer = optim.AdamW(network.parameters(), lr=0.001, weight_decay=1e-4)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=15)
    
    network.train()
    for epoch in range(epochs):
        cumulative_loss = 0.0
        
        for batch_inputs, batch_targets in data_loader:
            optimizer.zero_grad()
            
            state_predictions = network(batch_inputs)
            batch_loss = compute_physics_loss(state_predictions, batch_targets, batch_inputs)
            
            batch_loss.backward()
            optimizer.step()
            cumulative_loss += batch_loss.item()
        
        avg_loss = cumulative_loss / len(data_loader)
        scheduler.step(avg_loss)
        
        if (epoch + 1) % 50 == 0:
            print(f"Epoch {epoch+1}/{epochs}, Composite Loss: {avg_loss:.6f}")
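The plateau-based annealing can be observed in isolation. This sketch drives `ReduceLROnPlateau` with a deliberately stagnant loss on a dummy parameter (no hydrology model involved); once the patience budget of 15 non-improving epochs is exhausted, the learning rate is halved:

```python
import torch
from torch import nn, optim

dummy = nn.Parameter(torch.zeros(1))
optimizer = optim.AdamW([dummy], lr=0.001, weight_decay=1e-4)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=15)

for epoch in range(20):
    scheduler.step(1.0)  # a constant loss never improves on the best value seen

lr = optimizer.param_groups[0]["lr"]
print(lr)  # 0.0005: halved once after the patience threshold was crossed
```

With the training loop above, this means the schedule only intervenes when the composite loss has genuinely stalled for 15 consecutive epochs, rather than decaying on a fixed timetable.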

Validation Metrics

Performance evaluation quantifies both predictive accuracy and physical consistency on held-out observations.

def validate_model(network, test_loader):
    network.eval()
    mse_accumulator = 0.0
    physics_violation = 0.0
    batches = 0
    
    with torch.no_grad():
        for inputs, targets in test_loader:
            outputs = network(inputs)
            
            # Standard accuracy metric
            mse_accumulator += F.mse_loss(outputs, targets).item()
            
            # Mass balance residual (same continuity terms as the training loss)
            rain, evap, drain = inputs[:, 0], inputs[:, 1], inputs[:, 2]
            balance_error = torch.mean((outputs.squeeze() - (rain - evap - drain)) ** 2).item()
            physics_violation += balance_error
            batches += 1
    
    print(f"Test MSE: {mse_accumulator/batches:.4f}")
    print(f"Mean Physics Residual: {physics_violation/batches:.4f}")

The physics_weight coefficient modulates the trade-off between empirical data fitting and adherence to conservation laws. Elevated values enforce physical plausibility in poorly constrained regions, while reduced values prioritize observational fidelity when sensor data is reliable.
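A small sweep illustrates the trade-off directly. Holding one prediction with a fixed data error and a fixed conservation violation, the composite loss grows linearly in the weight; the loss function is repeated so the snippet runs standalone, and the tensors are illustrative:

```python
import torch
import torch.nn.functional as F

def compute_physics_loss(predicted_state, observed_state, forcing_variables, physics_weight=0.25):
    fidelity_loss = F.mse_loss(predicted_state, observed_state)
    precipitation = forcing_variables[:, 0]
    evaporation = forcing_variables[:, 1]
    drainage = forcing_variables[:, 2]
    expected_change = precipitation - evaporation - drainage
    storage_residual = predicted_state.view(-1) - expected_change
    conservation_penalty = torch.mean(storage_residual ** 2)
    return fidelity_loss + physics_weight * conservation_penalty

forcing = torch.tensor([[1.0, 0.2, 0.3]])  # continuity expects a change of 0.5
predicted = torch.tensor([[1.0]])          # violates conservation by 0.5
observed = torch.tensor([[0.8]])           # data error of 0.2

losses = [compute_physics_loss(predicted, observed, forcing, physics_weight=w).item()
          for w in (0.0, 0.25, 1.0)]
print(losses)  # monotonically increasing: larger weights punish the violation more
```

At weight 0.0 the loss reduces to pure MSE; raising the weight adds the same conservation penalty scaled ever more heavily, which is exactly the lever described above.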
