Is this a proper implementation of an RNN in PyTorch? [on hold]

import torch
import torch.nn as nn


class RNN(nn.Module):

    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
        super().__init__()
        # `weights` is the pretrained embedding matrix, defined elsewhere;
        # embedding_dim must match its second dimension. input_dim is
        # currently unused.
        self.embedding = nn.Embedding.from_pretrained(weights)
        self.rnn = nn.RNN(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # x: [seq_len, batch] -> embedded: [seq_len, batch, embedding_dim]
        embedded = self.embedding(x)
        output, hidden = self.rnn(embedded)
        # For a single-layer, unidirectional RNN, the last time step of
        # `output` is the same tensor as the final hidden state.
        assert torch.equal(output[-1, :, :], hidden.squeeze(0))
        return self.fc(hidden.squeeze(0))


def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:
        optimizer.zero_grad()

        predictions = model(batch["text"]).squeeze(1)
        loss = criterion(predictions, batch["label"])
        acc = binary_accuracy(predictions, batch["label"])

        loss.backward()
        optimizer.step()

        # .item() detaches the values from the graph, so accumulating them
        # does not keep the computation graph alive across batches.
        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)


def evaluate(model, iterator, criterion):
    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():
        for batch in iterator:
            predictions = model(batch["text"]).squeeze(1)
            loss = criterion(predictions, batch["label"])
            acc = binary_accuracy(predictions, batch["label"])

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)
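
For context, here is roughly how the pieces are driven end to end. This is only a hypothetical sketch: binary_accuracy, the iterators, and the concrete dimensions below are not part of the code above, and BCEWithLogitsLoss with a single output unit is just one reasonable pairing for binary labels.

import torch
import torch.nn as nn
import torch.optim as optim

def binary_accuracy(predictions, labels):
    # Hypothetical helper: squash logits with a sigmoid, round to 0/1,
    # and report the fraction of predictions that match the labels.
    rounded = torch.round(torch.sigmoid(predictions))
    return (rounded == labels).float().mean()

# Hypothetical setup: `weights` (the pretrained embedding matrix) and the
# train/valid iterators come from the data pipeline; embedding_dim must
# match weights.size(1).
model = RNN(input_dim=25000, embedding_dim=100, hidden_dim=256, output_dim=1)
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()

for epoch in range(5):
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    print(f"epoch {epoch}: train loss {train_loss:.3f}, acc {train_acc:.3f}; "
          f"valid loss {valid_loss:.3f}, acc {valid_acc:.3f}")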


I have implemented this RNN in PyTorch. The code seems to work, but is it implemented properly, in particular without memory leaks? Also, if I need to add pack_padded_sequence to this RNN, how do I do it?
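
For the pack_padded_sequence part, this is the kind of change to forward I have in mind (a minimal sketch, assuming each batch also carries the true, unpadded length of every sequence, sorted in descending order as older PyTorch versions require):

from torch.nn.utils.rnn import pack_padded_sequence

def forward(self, x, lengths):
    # x: [seq_len, batch] of padded token indices; lengths: true lengths
    embedded = self.embedding(x)
    # Pack so the RNN skips the padding instead of running over it.
    packed = pack_padded_sequence(embedded, lengths)
    packed_output, hidden = self.rnn(packed)
    # `hidden` is now the state after the last real token of each sequence,
    # so the assert comparing it with output[-1] no longer applies.
    return self.fc(hidden.squeeze(0))

The caller then becomes model(text, lengths); if the per-step outputs were needed, torch.nn.utils.rnn.pad_packed_sequence(packed_output) would unpack them.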










python memory-management neural-network cuda pytorch

asked Nov 16 at 6:25 by Jibin Mathew

put on hold as off-topic by Toby Speight, Graipher, t3chb0t, Jamal 6 hours ago


This question appears to be off-topic. The users who voted to close gave these specific reasons:



  • "Code not implemented or not working as intended: Code Review is a community where programmers peer-review your working code to address issues such as security, maintainability, performance, and scalability. We require that the code be working correctly, to the best of the author's knowledge, before proceeding with a review." – Jamal

  • "Lacks concrete context: Code Review requires concrete code from a project, with sufficient context for reviewers to understand how that code is used. Pseudocode, stub code, hypothetical code, obfuscated code, and generic best practices are outside the scope of this site." – Toby Speight, Graipher


If this question can be reworded to fit the rules in the help center, please edit the question.









  • 1




    For those of us not intimately familiar with ... cuda? I think that's the underlying technology... What's RNN? And what should it do?
    – Vogel612
    yesterday










  • RNN (recurrent neural network) is a kind of neural network architecture within deep learning
    – Jibin Mathew
    18 hours ago














