import torch
from d2l import torch as d2l

def train(net, train_iter, loss, epochs, lr):
    # Adam optimizer over all model parameters
    trainer = torch.optim.Adam(net.parameters(), lr)
    for epoch in range(epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.backward()
            trainer.step()
        # Report the average loss over the training set after each epoch
        print(f'epoch {epoch + 1}, '
              f'loss: {d2l.evaluate_loss(net, train_iter, loss):f}')

net = get_net()
train(net, train_iter, loss, 5, 0.01)
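The snippet above depends on `get_net`, `train_iter`, `loss`, and `d2l.evaluate_loss`, which are defined elsewhere in the original notebook. As a minimal self-contained sketch, the same loop can be exercised with toy stand-ins for those names (the dataset, the small MLP in `get_net`, and the `evaluate_loss` helper here are all assumptions, not the notebook's actual definitions; `train` is repeated verbatim so the example runs on its own):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy regression data, y = 2x + noise (assumption, standing in for the
# notebook's real dataset)
X = torch.randn(200, 1)
y = 2 * X + 0.01 * torch.randn(200, 1)
train_iter = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

loss = nn.MSELoss()

def get_net():
    # Hypothetical stand-in for the notebook's get_net: a small MLP
    return nn.Sequential(nn.Linear(1, 10), nn.ReLU(), nn.Linear(10, 1))

def evaluate_loss(net, data_iter, loss):
    # Average loss over a dataset; plays the role of d2l.evaluate_loss
    total, n = 0.0, 0
    with torch.no_grad():
        for X, y in data_iter:
            total += loss(net(X), y).item() * y.numel()
            n += y.numel()
    return total / n

def train(net, train_iter, loss, epochs, lr):
    trainer = torch.optim.Adam(net.parameters(), lr)
    for epoch in range(epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.backward()
            trainer.step()
        print(f'epoch {epoch + 1}, '
              f'loss: {evaluate_loss(net, train_iter, loss):f}')

net = get_net()
train(net, train_iter, loss, 5, 0.01)
```

Calling `trainer.zero_grad()` before each backward pass is what keeps gradients from one minibatch from accumulating into the next; the epoch-level `print` then reports the full-dataset average loss rather than the last minibatch's loss.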
Reposted from: http://zgzai.baihongyu.com/