Trouble fetching results from next pages using post requests
I've written a script in Python to get the tabular data populated upon filling in two input boxes (From and Through) located at the top right corner of a webpage. The dates I filled in to generate the results are 08/28/2017 and 11/25/2018.
When I run the following script, I can get the tabular results from the first page. However, the data are spread across multiple pages via pagination, and the URL remains unchanged. How can I get the content of the next pages?
Url to the site
This is my attempt:
import requests
from bs4 import BeautifulSoup

url = "https://www.myfloridalicense.com/FLABTBeerPricePosting/"

res = requests.get(url)
soup = BeautifulSoup(res.text, "lxml")

try:
    evtrgt = soup.select_one("#__EVENTTARGET").get('value')
except AttributeError:
    evtrgt = ""

viewstate = soup.select_one("#__VIEWSTATE").get('value')
viewgen = soup.select_one("#__VIEWSTATEGENERATOR").get('value')
eventval = soup.select_one("#__EVENTVALIDATION").get('value')

payload = {
    '__EVENTTARGET': evtrgt,
    '__EVENTARGUMENT': '',
    '__VIEWSTATE': viewstate,
    '__VIEWSTATEGENERATOR': viewgen,
    '__VIEWSTATEENCRYPTED': '',
    '__EVENTVALIDATION': eventval,
    'ctl00$MainContent$txtPermitNo': '',
    'ctl00$MainContent$txtPermitName': '',
    'ctl00$MainContent$txtBrandName': '',
    'ctl00$MainContent$txtPeriodBeginDt': '08/28/2017',
    'ctl00$MainContent$txtPeriodEndingDt': '11/25/2018',
    'ctl00$MainContent$btnSearch': 'Search'
}

with requests.Session() as s:
    s.headers["User-Agent"] = "Mozilla/5.0"
    req = s.post(url, data=payload, cookies=res.cookies.get_dict())
    sauce = BeautifulSoup(req.text, "lxml")
    for items in sauce.select("#MainContent_gvBRCSummary tr"):
        data = [item.get_text(strip=True) for item in items.select("th,td")]
        print(data)
Any help solving this issue will be highly appreciated. Once again: the data I wish to grab are the tabular content from the site's subsequent pages, as my script can already parse the data from the first page.
P.S.: A browser simulator is not an option I would like to resort to.
python python-3.x post web-scraping beautifulsoup
This is an ASP.NET form with doPostBack links. Clicking one of those links sets __EVENTTARGET and __EVENTARGUMENT and submits the form.
– pguardiario
Nov 25 '18 at 22:55
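To make that concrete: in a WebForms page, each pager link's href invokes `__doPostBack(target, argument)`, and those two arguments are copied into the hidden `__EVENTTARGET` and `__EVENTARGUMENT` inputs before the form posts. A minimal sketch of pulling them out of such an href (the control name `ctl00$MainContent$gvBRCSummary` here is an assumption based on the grid's ID, not confirmed against the live page):

```python
import re

# Hypothetical pager-link href, as typically rendered by ASP.NET WebForms.
href = "javascript:__doPostBack('ctl00$MainContent$gvBRCSummary','Page$2')"

# __doPostBack(target, argument): the two arguments become the values of
# the hidden __EVENTTARGET / __EVENTARGUMENT fields of the form POST.
match = re.search(r"__doPostBack\('([^']*)','([^']*)'\)", href)
event_target, event_argument = match.groups()

print(event_target)    # ctl00$MainContent$gvBRCSummary
print(event_argument)  # Page$2
```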
edited Nov 25 '18 at 19:36
asked Nov 25 '18 at 16:54 by robots.txt
1 Answer
You need to add a loop over the pages and assign the requested page number to the __EVENTARGUMENT parameter, as follows:
import requests
from bs4 import BeautifulSoup

url = "https://www.myfloridalicense.com/FLABTBeerPricePosting/"

res = requests.get(url)
soup = BeautifulSoup(res.text, "lxml")

try:
    evtrgt = soup.select_one("#__EVENTTARGET").get('value')
except AttributeError:
    evtrgt = ""

viewstate = soup.select_one("#__VIEWSTATE").get('value')
viewgen = soup.select_one("#__VIEWSTATEGENERATOR").get('value')
eventval = soup.select_one("#__EVENTVALIDATION").get('value')

payload = {
    '__EVENTTARGET': evtrgt,
    '__EVENTARGUMENT': '',
    '__VIEWSTATE': viewstate,
    '__VIEWSTATEGENERATOR': viewgen,
    '__VIEWSTATEENCRYPTED': '',
    '__EVENTVALIDATION': eventval,
    'ctl00$MainContent$txtPermitNo': '',
    'ctl00$MainContent$txtPermitName': '',
    'ctl00$MainContent$txtBrandName': '',
    'ctl00$MainContent$txtPeriodBeginDt': '08/28/2017',
    'ctl00$MainContent$txtPeriodEndingDt': '11/25/2018',
    'ctl00$MainContent$btnSearch': 'Search'
}

for page in range(1, 12):
    with requests.Session() as s:
        s.headers["User-Agent"] = "Mozilla/5.0"
        payload['__EVENTARGUMENT'] = f'Page${page}'
        req = s.post(url, data=payload, cookies=res.cookies.get_dict())
        sauce = BeautifulSoup(req.text, "lxml")
        for items in sauce.select("#MainContent_gvBRCSummary tr"):
            data = [item.get_text(strip=True) for item in items.select("th,td")]
            print(data)
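One caveat worth noting: `range(1, 12)` hardcodes the page count for this particular date range, and the hidden `__VIEWSTATE` family of values can change with every response. For longer scrapes it is safer to re-read those hidden fields from each returned page before the next POST rather than reusing the ones from the initial GET. A small helper for collecting them, sketched and tested here against a minimal stand-in snippet rather than the live site:

```python
from bs4 import BeautifulSoup

def hidden_fields(html):
    """Collect every <input type="hidden"> name/value pair from a page."""
    soup = BeautifulSoup(html, "html.parser")
    return {
        tag["name"]: tag.get("value", "")
        for tag in soup.select("input[type=hidden]")
        if tag.get("name")
    }

# Minimal stand-in for an ASP.NET response page.
sample = """
<form>
  <input type="hidden" name="__VIEWSTATE" value="abc123"/>
  <input type="hidden" name="__EVENTVALIDATION" value="xyz789"/>
</form>
"""
print(hidden_fields(sample))  # {'__VIEWSTATE': 'abc123', '__EVENTVALIDATION': 'xyz789'}
```

Merging the result of `hidden_fields(req.text)` into `payload` between iterations keeps each POST in sync with whatever state the server returned last.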
Hi @Martin Evans, you might be interested in solving the issue found in this post I'm currently struggling with. Thanks in advance.
– robots.txt
Jan 25 at 10:25
answered Nov 29 '18 at 10:59 by Martin Evans