Here is how you can do it:
In your module's `forward`, return the final output together with the outputs of the layers you want to apply L1 regularization to.
The `loss` variable will then be the sum of the cross-entropy loss of the output w.r.t. the targets and the L1 penalty.
Here is some example code:
```python
import torch
# Note: Variable is from older PyTorch; since PyTorch 0.4 plain
# tensors work directly and the Variable wrapper is a no-op.
from torch.autograd import Variable
from torch.nn import functional as F


class MLP(torch.nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.linear1 = torch.nn.Linear(128, 32)
        self.linear2 = torch.nn.Linear(32, 16)
        self.linear3 = torch.nn.Linear(16, 2)

    def forward(self, x):
        layer1_out = F.relu(self.linear1(x))
        layer2_out = F.relu(self.linear2(layer1_out))
        out = self.linear3(layer2_out)
        # Return the intermediate activations as well, so the
        # penalties can be computed on them outside the module.
        return out, layer1_out, layer2_out


def l1_penalty(var):
    # Sum of absolute values (L1 norm).
    return torch.abs(var).sum()


def l2_penalty(var):
    # Square root of the sum of squares (L2 norm).
    return torch.sqrt(torch.pow(var, 2).sum())


batchsize = 4
lambda1, lambda2 = 0.5, 0.01

model = MLP()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# Usually the following code is looped over all batches,
# but let's just do a dummy batch for brevity.
inputs = Variable(torch.rand(batchsize, 128))
targets = Variable(torch.ones(batchsize).long())

optimizer.zero_grad()
outputs, layer1_out, layer2_out = model(inputs)
cross_entropy_loss = F.cross_entropy(outputs, targets)
l1_regularization = lambda1 * l1_penalty(layer1_out)
l2_regularization = lambda2 * l2_penalty(layer2_out)
loss = cross_entropy_loss + l1_regularization + l2_regularization
loss.backward()
optimizer.step()
```
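A note on the design choice: the code above applies the penalties to the layer *activations* (`layer1_out`, `layer2_out`), which encourages sparse activations. If you instead want the more common form, L1 regularization of the *weights*, the same penalty function can be applied over `model.parameters()`; a minimal sketch of that variant (an assumption on my part, not part of the original answer):

```python
# Variant (assumed): penalize the weights rather than the activations.
# sum() over tensors accumulates into a single scalar tensor.
l1_weight_penalty = lambda1 * sum(
    l1_penalty(param) for param in model.parameters()
)
loss = cross_entropy_loss + l1_weight_penalty
```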
For a relatively high-level solution, see [this link](https://github.com/ncullen93/torchsample). It gives you a Keras-like interface for doing many things easily in PyTorch, and in particular adding various regularizers.