How to print the “actual” learning rate in Adadelta in PyTorch
In short:
I can't draw an lr/epoch curve when using the Adadelta optimizer in PyTorch, because optimizer.param_groups[0]['lr'] always returns the same value.
In detail:
Adadelta dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent [1]. In PyTorch, the source code of Adadelta is here: https://pytorch.org/docs/stable/_modules/torch/optim/adadelta.html#Adadelta
Since it requires no manual tuning of the learning rate, to my knowledge we don't have to set any scheduler after declaring the optimizer:
self.optimizer = torch.optim.Adadelta(self.model.parameters(), lr=1)
The way I check the learning rate is:
current_lr = self.optimizer.param_groups[0]['lr']
The problem is that it always returns 1 (the initial lr).
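For reference, here is a minimal sketch of what I mean (the model and data are placeholders, not my real code):

import torch

model = torch.nn.Linear(10, 2)                      # placeholder model
optimizer = torch.optim.Adadelta(model.parameters(), lr=1)
loss_fn = torch.nn.MSELoss()

for epoch in range(5):
    x, y = torch.randn(8, 10), torch.randn(8, 2)    # placeholder batch
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    # always prints 1.0 -- param_groups stores the configured lr,
    # not the adaptive step size Adadelta actually applies
    print(epoch, optimizer.param_groups[0]['lr'])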
Could anyone tell me how I can get the true learning rate, so that I can draw an lr/epoch curve?
[1] https://arxiv.org/pdf/1212.5701.pdf
python optimization neural-network deep-learning pytorch
asked Nov 21 '18 at 5:47 – 王智寬
1 Answer
Check self.optimizer.state. This dictionary holds the accumulated statistics that Adadelta combines with lr during the optimization process.
From the documentation, lr is just:
lr (float, optional): coefficient that scale delta before it is applied to the parameters (default: 1.0)
https://pytorch.org/docs/stable/_modules/torch/optim/adadelta.html
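Since lr is only a constant scale factor, the "actual" per-parameter step Adadelta applies is lr * sqrt(acc_delta + eps) / sqrt(square_avg + eps), built from the running averages it keeps in optimizer.state. A minimal sketch of a helper that recovers this scale (effective_lr is a name I made up, not a PyTorch API; it approximates the scale from the state as it exists after step(), where acc_delta already includes the latest update):

import torch

def effective_lr(optimizer):
    """Approximate per-tensor effective learning rates for Adadelta.

    Reads the state kept after at least one step(): square_avg
    (running average of squared gradients) and acc_delta (running
    average of squared updates). Returns one mean scale per tensor.
    """
    scales = []
    for group in optimizer.param_groups:
        lr, eps = group["lr"], group["eps"]
        for p in group["params"]:
            state = optimizer.state.get(p, {})
            if "acc_delta" not in state:  # no update performed yet
                continue
            scale = lr * (state["acc_delta"].add(eps).sqrt()
                          / state["square_avg"].add(eps).sqrt())
            scales.append(scale.mean().item())  # one summary per tensor
    return scales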
Edited: you can find the acc_delta values in self.optimizer.state, but you need to go through the per-parameter dictionaries contained in that dictionary:

# optimizer.state maps each parameter tensor to its own state dict;
# collect the "acc_delta" tensor from every entry that has one
dicts_with_acc_delta = [self.optimizer.state[p] for p in self.optimizer.state
                        if "acc_delta" in self.optimizer.state[p]]
acc_deltas = [d["acc_delta"] for d in dicts_with_acc_delta]
I have eight layers, and the shapes of the elements in the acc_deltas list are the following:
[torch.Size([25088]),
torch.Size([25088]),
torch.Size([4096, 25088]),
torch.Size([4096]),
torch.Size([1024, 4096]),
torch.Size([1024]),
torch.Size([102, 1024]),
torch.Size([102])]
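If you want a single value per epoch for the curve, you could collapse these tensors into one scalar, for instance their overall mean (continuing from the acc_deltas list above; the mean is just one possible summary):

# one scalar per epoch, e.g. for an lr/epoch-style curve
epoch_value = torch.cat([d.flatten() for d in acc_deltas]).mean().item()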
answered Nov 21 '18 at 7:40, edited Nov 21 '18 at 10:17 – artona

But... self.optimizer.state['acc_delta'] always returns an empty dictionary {} in every epoch. – 王智寬 Nov 21 '18 at 8:08
I have edited my post. – artona Nov 21 '18 at 10:18
Thank you! I got this. – 王智寬 Nov 22 '18 at 8:21