Help me evaluate my PyTorch model's performance metrics during training.
description
By evaluating your PyTorch model's performance metrics, you gain insight into training progress, can identify problems early, and can optimize your model for better results.
prompt
I want to understand my PyTorch model's performance throughout the training process. Please provide guidance on how I can effectively track key performance metrics such as loss, accuracy, precision and re ... more
tags
ai_answers
provider: openai
model: gpt-4.1-nano
Certainly! To track and visualize your PyTorch model's performance during training, the most straightforward approach is to log the metrics to lists and then visualize them with, for example, Matplotlib. Here is a step-by-step approach, including code examples:
1. Initialize lists to store the metrics.
2. During each epoch, log the loss, accuracy, precision and recall.
3. After each epoch, or after training, visualize the data.
Here is an example implementation:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_score, recall_score

# Lists for logging the metrics
train_loss_history = []
train_acc_history = []
train_precision_history = []
train_recall_history = []
val_loss_history = []
val_acc_history = []
val_precision_history = []
val_recall_history = []

# Assumes model, optimizer, loss_fn, train_loader, val_loader and num_epochs are already defined
for epoch in range(num_epochs):
    model.train()
    epoch_loss = 0
    all_preds = []
    all_labels = []
    for batch in train_loader:
        inputs, labels = batch
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        # For accuracy, precision and recall
        preds = torch.argmax(outputs, dim=1)
        all_preds.extend(preds.cpu().numpy())
        all_labels.extend(labels.cpu().numpy())

    # Average loss per epoch
    avg_loss = epoch_loss / len(train_loader)
    train_loss_history.append(avg_loss)

    # Compute accuracy, precision and recall
    train_acc = (np.array(all_preds) == np.array(all_labels)).mean()
    train_precision = precision_score(all_labels, all_preds, average='weighted')
    train_recall = recall_score(all_labels, all_preds, average='weighted')
    train_acc_history.append(train_acc)
    train_precision_history.append(train_precision)
    train_recall_history.append(train_recall)

    # Validation
    model.eval()
    val_loss = 0
    val_preds = []
    val_labels = []
    with torch.no_grad():
        for batch in val_loader:
            inputs, labels = batch
            outputs = model(inputs)
            loss = loss_fn(outputs, labels)
            val_loss += loss.item()
            preds = torch.argmax(outputs, dim=1)
            val_preds.extend(preds.cpu().numpy())
            val_labels.extend(labels.cpu().numpy())

    avg_val_loss = val_loss / len(val_loader)
    val_loss_history.append(avg_val_loss)
    val_acc = (np.array(val_preds) == np.array(val_labels)).mean()
    val_precision = precision_score(val_labels, val_preds, average='weighted')
    val_recall = recall_score(val_labels, val_preds, average='weighted')
    val_acc_history.append(val_acc)
    val_precision_history.append(val_precision)
    val_recall_history.append(val_recall)

    print(f"Epoch {epoch+1}/{num_epochs}")
    print(f"Train Loss: {avg_loss:.4f} | Val Loss: {avg_val_loss:.4f}")
    print(f"Train Acc: {train_acc:.4f} | Val Acc: {val_acc:.4f}")
    print(f"Train Prec: {train_precision:.4f} | Val Prec: {val_precision:.4f}")
    print(f"Train Rec: {train_recall:.4f} | Val Rec: {val_recall:.4f}")

# Visualize the metrics
epochs = range(1, num_epochs + 1)
plt.figure(figsize=(12, 8))

plt.subplot(2, 2, 1)
plt.plot(epochs, train_loss_history, label='Train Loss')
plt.plot(epochs, val_loss_history, label='Val Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.subplot(2, 2, 2)
plt.plot(epochs, train_acc_history, label='Train Accuracy')
plt.plot(epochs, val_acc_history, label='Val Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.subplot(2, 2, 3)
plt.plot(epochs, train_precision_history, label='Train Precision')
plt.plot(epochs, val_precision_history, label='Val Precision')
plt.xlabel('Epochs')
plt.ylabel('Precision')
plt.legend()

plt.subplot(2, 2, 4)
plt.plot(epochs, train_recall_history, label='Train Recall')
plt.plot(epochs, val_recall_history, label='Val Recall')
plt.xlabel('Epochs')
plt.ylabel('Recall')
plt.legend()

plt.tight_layout()
plt.show()
```
**Important notes:**
- Make sure NumPy is imported (`import numpy as np`); the accuracy calculation above relies on it.
- Adapt the code to your own data and model.
- For more advanced logging and visualization, you can also use tools such as TensorBoard or Weights & Biases; a minimal sketch follows below.
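As a minimal sketch of that last suggestion (not part of the original loop), the snippet below re-logs the history lists collected above to TensorBoard via `SummaryWriter`; the log directory name is just an illustrative choice:
```python
from torch.utils.tensorboard import SummaryWriter

# Illustrative log directory; any path under runs/ works
writer = SummaryWriter(log_dir='runs/metric_logging_demo')

# Re-log the per-epoch values collected in the lists above
for epoch_idx, (tr_loss, va_loss, tr_acc, va_acc) in enumerate(
        zip(train_loss_history, val_loss_history, train_acc_history, val_acc_history)):
    writer.add_scalar('Loss/train', tr_loss, epoch_idx)
    writer.add_scalar('Loss/val', va_loss, epoch_idx)
    writer.add_scalar('Accuracy/train', tr_acc, epoch_idx)
    writer.add_scalar('Accuracy/val', va_acc, epoch_idx)

writer.close()
# Start the dashboard from a terminal with: tensorboard --logdir=runs
```
Calling `add_scalar` directly inside the training loop works just as well and avoids keeping the lists around.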
This gives you a clear overview of your model's performance throughout the training process!
provider: deepseek
model: deepseek-chat
Here is a comprehensive guide to logging and visualizing your PyTorch model's performance:
## 1. Import the required libraries
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import precision_score, recall_score, accuracy_score
import json
import time
```
## 2. Basic training loop with logging
```python
class ModelTrainer:
    def __init__(self, model, optimizer, criterion, device):
        self.model = model
        self.optimizer = optimizer
        self.criterion = criterion
        self.device = device
        self.history = {
            'train_loss': [],
            'val_loss': [],
            'train_accuracy': [],
            'val_accuracy': [],
            'train_precision': [],
            'val_precision': [],
            'train_recall': [],
            'val_recall': [],
            'epoch_times': []
        }

    def calculate_metrics(self, y_true, y_pred):
        """Compute accuracy, precision and recall"""
        y_true = y_true.cpu().numpy()
        y_pred = y_pred.cpu().numpy()
        accuracy = accuracy_score(y_true, y_pred)
        precision = precision_score(y_true, y_pred, average='weighted', zero_division=0)
        recall = recall_score(y_true, y_pred, average='weighted', zero_division=0)
        return accuracy, precision, recall

    def train_epoch(self, dataloader):
        """Train for one epoch"""
        self.model.train()
        running_loss = 0.0
        all_preds = []
        all_labels = []
        for batch_idx, (data, targets) in enumerate(dataloader):
            data, targets = data.to(self.device), targets.to(self.device)
            self.optimizer.zero_grad()
            outputs = self.model(data)
            loss = self.criterion(outputs, targets)
            loss.backward()
            self.optimizer.step()
            running_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            all_preds.extend(predicted.cpu())
            all_labels.extend(targets.cpu())
            # Log every 100 batches
            if batch_idx % 100 == 0:
                print(f'Batch {batch_idx}, Loss: {loss.item():.4f}')
        epoch_loss = running_loss / len(dataloader)
        accuracy, precision, recall = self.calculate_metrics(
            torch.tensor(all_labels), torch.tensor(all_preds)
        )
        return epoch_loss, accuracy, precision, recall

    def validate_epoch(self, dataloader):
        """Validate for one epoch"""
        self.model.eval()
        running_loss = 0.0
        all_preds = []
        all_labels = []
        with torch.no_grad():
            for data, targets in dataloader:
                data, targets = data.to(self.device), targets.to(self.device)
                outputs = self.model(data)
                loss = self.criterion(outputs, targets)
                running_loss += loss.item()
                _, predicted = torch.max(outputs.data, 1)
                all_preds.extend(predicted.cpu())
                all_labels.extend(targets.cpu())
        epoch_loss = running_loss / len(dataloader)
        accuracy, precision, recall = self.calculate_metrics(
            torch.tensor(all_labels), torch.tensor(all_preds)
        )
        return epoch_loss, accuracy, precision, recall

    def train(self, train_loader, val_loader, epochs):
        """Main training function"""
        print("Training started...")
        for epoch in range(epochs):
            start_time = time.time()
            # Train for one epoch
            train_loss, train_acc, train_prec, train_rec = self.train_epoch(train_loader)
            # Validate for one epoch
            val_loss, val_acc, val_prec, val_rec = self.validate_epoch(val_loader)
            epoch_time = time.time() - start_time
            # Update the history
            self.history['train_loss'].append(train_loss)
            self.history['val_loss'].append(val_loss)
            self.history['train_accuracy'].append(train_acc)
            self.history['val_accuracy'].append(val_acc)
            self.history['train_precision'].append(train_prec)
            self.history['val_precision'].append(val_prec)
            self.history['train_recall'].append(train_rec)
            self.history['val_recall'].append(val_rec)
            self.history['epoch_times'].append(epoch_time)
            # Print epoch results
            print(f'Epoch {epoch+1}/{epochs}:')
            print(f'  Train Loss: {train_loss:.4f}, Acc: {train_acc:.4f}, Prec: {train_prec:.4f}, Rec: {train_rec:.4f}')
            print(f'  Val Loss: {val_loss:.4f}, Acc: {val_acc:.4f}, Prec: {val_prec:.4f}, Rec: {val_rec:.4f}')
            print(f'  Time: {epoch_time:.2f}s')
            print('-' * 60)
```
## 3. Use TensorBoard for advanced logging
```python
from torch.utils.tensorboard import SummaryWriter

class TensorBoardTrainer(ModelTrainer):
    def __init__(self, model, optimizer, criterion, device, log_dir='runs/experiment'):
        super().__init__(model, optimizer, criterion, device)
        self.writer = SummaryWriter(log_dir)

    def train(self, train_loader, val_loader, epochs):
        for epoch in range(epochs):
            start_time = time.time()
            # Train and validate
            train_loss, train_acc, train_prec, train_rec = self.train_epoch(train_loader)
            val_loss, val_acc, val_prec, val_rec = self.validate_epoch(val_loader)
            epoch_time = time.time() - start_time
            # Update the history
            self.history['train_loss'].append(train_loss)
            self.history['val_loss'].append(val_loss)
            self.history['train_accuracy'].append(train_acc)
            self.history['val_accuracy'].append(val_acc)
            self.history['train_precision'].append(train_prec)
            self.history['val_precision'].append(val_prec)
            self.history['train_recall'].append(train_rec)
            self.history['val_recall'].append(val_rec)
            self.history['epoch_times'].append(epoch_time)
            # Write to TensorBoard
            self.writer.add_scalar('Loss/train', train_loss, epoch)
            self.writer.add_scalar('Loss/val', val_loss, epoch)
            self.writer.add_scalar('Accuracy/train', train_acc, epoch)
            self.writer.add_scalar('Accuracy/val', val_acc, epoch)
            self.writer.add_scalar('Precision/train', train_prec, epoch)
            self.writer.add_scalar('Precision/val', val_prec, epoch)
            self.writer.add_scalar('Recall/train', train_rec, epoch)
            self.writer.add_scalar('Recall/val', val_rec, epoch)
            self.writer.add_scalar('Time/epoch', epoch_time, epoch)
            print(f'Epoch {epoch+1}/{epochs} - Loss: {train_loss:.4f}/{val_loss:.4f} '
                  f'Acc: {train_acc:.4f}/{val_acc:.4f}')
        self.writer.close()
```
## 4. Visualization functions
```python
def plot_training_history(history):
    """Plot all training metrics"""
    fig, axes = plt.subplots(2, 3, figsize=(15, 10))
    epochs = range(1, len(history['train_loss']) + 1)

    # Loss
    axes[0, 0].plot(epochs, history['train_loss'], 'b-', label='Train Loss')
    axes[0, 0].plot(epochs, history['val_loss'], 'r-', label='Val Loss')
    axes[0, 0].set_title('Loss per Epoch')
    axes[0, 0].set_xlabel('Epochs')
    axes[0, 0].set_ylabel('Loss')
    axes[0, 0].legend()
    axes[0, 0].grid(True)

    # Accuracy
    axes[0, 1].plot(epochs, history['train_accuracy'], 'b-', label='Train Accuracy')
    axes[0, 1].plot(epochs, history['val_accuracy'], 'r-', label='Val Accuracy')
    axes[0, 1].set_title('Accuracy per Epoch')
    axes[0, 1].set_xlabel('Epochs')
    axes[0, 1].set_ylabel('Accuracy')
    axes[0, 1].legend()
    axes[0, 1].grid(True)

    # Precision
    axes[0, 2].plot(epochs, history['train_precision'], 'b-', label='Train Precision')
    axes[0, 2].plot(epochs, history['val_precision'], 'r-', label='Val Precision')
    axes[0, 2].set_title('Precision per Epoch')
    axes[0, 2].set_xlabel('Epochs')
    axes[0, 2].set_ylabel('Precision')
    axes[0, 2].legend()
    axes[0, 2].grid(True)

    # Recall
    axes[1, 0].plot(epochs, history['train_recall'], 'b-', label='Train Recall')
    axes[1, 0].plot(epochs, history['val_recall'], 'r-', label='Val Recall')
    axes[1, 0].set_title('Recall per Epoch')
    axes[1, 0].set_xlabel('Epochs')
    axes[1, 0].set_ylabel('Recall')
    axes[1, 0].legend()
    axes[1, 0].grid(True)

    # Epoch times
    axes[1, 1].plot(epochs, history['epoch_times'], 'g-', label='Epoch Time')
    axes[1, 1].set_title('Epoch Times')
    axes[1, 1].set_xlabel('Epochs')
    axes[1, 1].set_ylabel('Time (s)')
    axes[1, 1].legend()
    axes[1, 1].grid(True)

    # Empty subplot reserved for future extensions
    axes[1, 2].axis('off')

    plt.tight_layout()
    plt.show()

def save_history(history, filename='training_history.json'):
    """Save the training history as JSON"""
    with open(filename, 'w') as f:
        # default=float converts NumPy scalars (e.g. from sklearn metrics) to plain floats
        json.dump(history, f, indent=4, default=float)

def load_history(filename='training_history.json'):
    """Load the training history from JSON"""
    with open(filename, 'r') as f:
        return json.load(f)
```
## 5. Usage example
```python
# Example usage
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Your existing components
# model = your_model
# optimizer = your_optimizer
# criterion = your_loss_function

# Initialize the trainer
trainer = ModelTrainer(model, optimizer, criterion, device)

# Or use the TensorBoard trainer
# trainer = TensorBoardTrainer(model, optimizer, criterion, device)

# Start training
trainer.train(train_loader, val_loader, epochs=50)

# Visualize the results
plot_training_history(trainer.history)

# Save the history
save_history(trainer.history)

# For TensorBoard, run in a terminal:
# tensorboard --logdir=runs/
```
## 6. Additional tips
- **Early stopping**: Implement early stopping to prevent overfitting (see the sketch after this list)
- **Learning rate scheduling**: Add learning rate annealing
- **Model checkpoints**: Save the best model based on validation results
- **Detailed logging**: Also log per-class metrics for multi-class problems
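As a minimal sketch of the early-stopping and checkpointing tips (not part of the implementation above), assuming the `ModelTrainer` instance and data loaders from the usage example; `patience` and `best_model.pt` are illustrative choices:

```python
import torch

# Illustrative settings; tune the patience and checkpoint path for your project
patience = 5
best_val_loss = float('inf')
epochs_without_improvement = 0
checkpoint_path = 'best_model.pt'

for epoch in range(50):
    trainer.train_epoch(train_loader)
    val_loss, val_acc, val_prec, val_rec = trainer.validate_epoch(val_loader)

    if val_loss < best_val_loss:
        # Validation improved: save a checkpoint and reset the counter
        best_val_loss = val_loss
        epochs_without_improvement = 0
        torch.save(trainer.model.state_dict(), checkpoint_path)
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f'Early stopping at epoch {epoch + 1}')
            break

# Restore the best weights before final evaluation
trainer.model.load_state_dict(torch.load(checkpoint_path))
```

The same check can be folded into `ModelTrainer.train` so that history tracking and early stopping live in one place.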
This implementation gives you a complete overview of your model's performance during training and helps you identify problems such as overfitting or underfitting.

