
How to backpropagate gradients only through specified variables in PyTorch

Published: 2020-08-20 23:18:53  Source: 腳本之家  Views: 435  Author: 美利堅節度使  Category: Development Technology

How do you let only specified variables propagate gradients backward in PyTorch?

(In other words, how do you keep a specified variable out of backpropagation?)

Consider the following computation, where we want the derivative of L with respect to xvar:

out1 = xvar * y1var
out2 = xvar * y2var
L = (out1 - out2)^2

In case (1), the derivative of L with respect to xvar is computed through both the out1 and out2 branches;

In case (2), the derivative of L with respect to xvar is computed only through the out2 branch, because out1 has requires_grad=False;

In case (3), the derivative of L with respect to xvar is computed only through the out1 branch, because out2 has requires_grad=False.

Verification:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Wed May 23 10:02:04 2018
@author: hy
"""

import torch
from torch.autograd import Variable

print("Pytorch version: {}".format(torch.__version__))
x = torch.Tensor([1])
xvar = Variable(x, requires_grad=True)   # leaf variable we differentiate with respect to
y1 = torch.Tensor([2])
y2 = torch.Tensor([7])
y1var = Variable(y1)
y2var = Variable(y2)

# (1) both branches participate in backpropagation
print("For (1)")
print("xvar requires_grad: {}".format(xvar.requires_grad))
print("y1var requires_grad: {}".format(y1var.requires_grad))
print("y2var requires_grad: {}".format(y2var.requires_grad))
out1 = xvar * y1var
print("out1 requires_grad: {}".format(out1.requires_grad))
out2 = xvar * y2var
print("out2 requires_grad: {}".format(out2.requires_grad))
L = torch.pow(out1 - out2, 2)
L.backward()
print("xvar.grad: {}".format(xvar.grad))
xvar.grad.data.zero_()

# (2) detach out1 so the gradient flows back only through out2
print("For (2)")
print("xvar requires_grad: {}".format(xvar.requires_grad))
print("y1var requires_grad: {}".format(y1var.requires_grad))
print("y2var requires_grad: {}".format(y2var.requires_grad))
out1 = xvar * y1var
print("out1 requires_grad: {}".format(out1.requires_grad))
out2 = xvar * y2var
print("out2 requires_grad: {}".format(out2.requires_grad))
out1 = out1.detach()   # returns a new variable cut off from the graph
print("after out1.detach(), out1 requires_grad: {}".format(out1.requires_grad))
L = torch.pow(out1 - out2, 2)
L.backward()
print("xvar.grad: {}".format(xvar.grad))
xvar.grad.data.zero_()

# (3) detach out2 so the gradient flows back only through out1
print("For (3)")
print("xvar requires_grad: {}".format(xvar.requires_grad))
print("y1var requires_grad: {}".format(y1var.requires_grad))
print("y2var requires_grad: {}".format(y2var.requires_grad))
out1 = xvar * y1var
print("out1 requires_grad: {}".format(out1.requires_grad))
out2 = xvar * y2var
print("out2 requires_grad: {}".format(out2.requires_grad))
out2 = out2.detach()
print("after out2.detach(), out2 requires_grad: {}".format(out2.requires_grad))
L = torch.pow(out1 - out2, 2)
L.backward()
print("xvar.grad: {}".format(xvar.grad))
xvar.grad.data.zero_()
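
As a hand check (these expected values are worked out from the formula above, not taken from the original post's output): with x=1, y1=2 and y2=7, case (1) gives L = (2x - 7x)^2 = 25x^2, so dL/dx = 50x = 50; case (2) treats out1 as the constant 2, so L = (2 - 7x)^2 and dL/dx = -14(2 - 7x) = 70 at x=1; case (3) treats out2 as the constant 7, so L = (2x - 7)^2 and dL/dx = 4(2x - 7) = -20 at x=1. These are the values the three xvar.grad printouts should show.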

In PyTorch, setting a variable's requires_grad to False keeps it out of gradient backpropagation;

However, you cannot simply assign out1.requires_grad = False, because out1 is a non-leaf variable (it is produced by an operation on a variable that requires grad), and PyTorch only allows changing the requires_grad flag of leaf variables;

Instead, the Variable type provides a detach() method: the variable it returns is cut off from the computation graph and has requires_grad=False.

Note: if out1 and out2 both end up with requires_grad=False, calling L.backward() fails and xvar.grad is never populated, because no gradient can propagate back to xvar.
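
On newer PyTorch releases (0.4 and later), Variable has been merged into Tensor, but detach() works the same way on plain tensors. A minimal sketch of the same trick on a recent PyTorch, with tensor values mirroring the example above:

import torch

x = torch.tensor([1.0], requires_grad=True)
y1 = torch.tensor([2.0])
y2 = torch.tensor([7.0])

out1 = x * y1              # branch we want to exclude from backprop
out2 = x * y2              # branch we want to keep

# detach() returns a tensor cut off from the graph (requires_grad=False),
# so the gradient flows back to x only through out2
L = torch.pow(out1.detach() - out2, 2)
L.backward()
print(x.grad)              # tensor([70.]) -- only the out2 branch contributes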

Supplement:

volatile=True marks a variable as not computing gradients at all. From the documentation: "Volatile is recommended for purely inference mode, when you're sure you won't be even calling .backward(). It's more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False."
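
Note that volatile only exists on old PyTorch releases; it was removed in PyTorch 0.4, and pure-inference mode is now expressed with torch.no_grad(). A minimal sketch of the modern equivalent, assuming a 0.4+ install:

import torch

x = torch.tensor([1.0], requires_grad=True)

# Inside no_grad(), autograd records nothing: results have requires_grad=False
# and no graph is kept in memory, which is what volatile=True used to achieve
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False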

That is the whole of this article on backpropagating gradients only through specified variables in PyTorch. We hope it gives you a useful reference, and thank you for supporting 億速云.
