Is this a proper implementation of an RNN in PyTorch? [on hold]
import torch
import torch.nn as nn


class RNN(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
        super().__init__()
        # `weights` is a pretrained embedding matrix assumed to be defined
        # in the enclosing scope; note that input_dim is currently unused
        self.embedding = nn.Embedding.from_pretrained(weights)
        self.rnn = nn.RNN(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        embedded = self.embedding(x)         # [seq_len, batch, embedding_dim]
        output, hidden = self.rnn(embedded)  # hidden: [1, batch, hidden_dim]
        # for a single-layer, unidirectional RNN the last time step of
        # `output` equals the final hidden state
        assert torch.equal(output[-1, :, :], hidden.squeeze(0))
        return self.fc(hidden.squeeze(0))
def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.train()  # enable training-mode behaviour (dropout etc.)
    for batch in iterator:
        optimizer.zero_grad()
        predictions = model(batch["text"]).squeeze(1)
        loss = criterion(predictions, batch["label"])
        acc = binary_accuracy(predictions, batch["label"])
        loss.backward()
        optimizer.step()
        # .item() extracts plain Python scalars, so no computation
        # graph is retained across iterations
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.eval()  # switch to inference-mode behaviour
    with torch.no_grad():  # no gradients needed during evaluation
        for batch in iterator:
            predictions = model(batch["text"]).squeeze(1)
            loss = criterion(predictions, batch["label"])
            acc = binary_accuracy(predictions, batch["label"])
            epoch_loss += loss.item()
            epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)
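For context, a minimal sketch of how the model and these functions might be wired together. The vocabulary size, hyperparameters, loss choice (BCEWithLogitsLoss, implied by the binary accuracy metric), and the train_iterator/valid_iterator objects are assumptions, not part of the original post:

import torch.optim as optim

# hypothetical setup: in practice `weights` would hold pretrained vectors
weights = torch.randn(10000, 100)  # [vocab_size, embedding_dim]
model = RNN(input_dim=10000, embedding_dim=100, hidden_dim=256, output_dim=1)

optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()  # assumes binary labels

for epoch in range(5):
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    print(f"epoch {epoch}: train {train_loss:.3f}/{train_acc:.3f}, "
          f"valid {valid_loss:.3f}/{valid_acc:.3f}")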
I have implemented this RNN in PyTorch. The code seems to work fine, but is it written properly, and does it avoid memory leaks? Also, if I need to add pack_padded_sequence to this RNN, how do I do it?
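As for pack_padded_sequence: a minimal sketch of how the forward pass could be rewritten, assuming each batch also carries the true (unpadded) length of every sequence (the `lengths` argument below is an assumption, not part of the original code):

from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

def forward(self, x, lengths):
    embedded = self.embedding(x)  # [seq_len, batch, embedding_dim]
    # pack so the RNN skips padded positions; on PyTorch versions before
    # 1.1 the batch must be sorted by length in descending order
    packed = pack_padded_sequence(embedded, lengths)
    packed_output, hidden = self.rnn(packed)
    # unpack only if the per-step outputs are needed downstream
    output, _ = pad_packed_sequence(packed_output)
    # the assert from the original forward no longer holds here, because
    # output[-1] is zero-padded for sequences shorter than seq_len
    return self.fc(hidden.squeeze(0))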
python memory-management neural-network cuda pytorch
asked Nov 16 at 6:25, edited 18 hours ago by Jibin Mathew (new contributor)
put on hold as off-topic by Toby Speight, Graipher, t3chb0t, Jamal♦ 6 hours ago
This question appears to be off-topic. The users who voted to close gave these specific reasons:
- "Code not implemented or not working as intended: Code Review is a community where programmers peer-review your working code to address issues such as security, maintainability, performance, and scalability. We require that the code be working correctly, to the best of the author's knowledge, before proceeding with a review." – Jamal
- "Lacks concrete context: Code Review requires concrete code from a project, with sufficient context for reviewers to understand how that code is used. Pseudocode, stub code, hypothetical code, obfuscated code, and generic best practices are outside the scope of this site." – Toby Speight, Graipher
If this question can be reworded to fit the rules in the help center, please edit the question.
1
For those of us not intimately familiar with... CUDA? I think that's the underlying technology... What's an RNN? And what should it do?
– Vogel612♦
yesterday
An RNN is a kind of neural network architecture used in deep learning.
– Jibin Mathew
18 hours ago