How to resume crawling from the last depth I reached when I restart my crawler?

Hello everyone,
I am building a web application that crawls a large number of pages from a specific website.
I started my crawler4j crawl with unlimited depth and an unlimited page count, but it stopped unexpectedly when my internet connection dropped.
Now I want to continue crawling that website without re-fetching the URLs I visited before, given that I know the depth of the last pages I reached.

Note: I want a way that does not have to check my stored URLs against the URLs I will fetch, because I don't want to send too many requests to this site.

Thanks ☺

java web-crawler crawler4j

asked Nov 20 '18 at 19:34
Ahmed Sakr
264

1 Answer

2

You can use "resumable" crawling with crawler4j by enabling this feature

crawlConfig.setResumableCrawling(true);

in the given configuration. See the crawler4j documentation for details.
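To make this concrete, here is a minimal sketch of a full crawler4j setup with resumable crawling enabled. The storage path, the seed URL, and the MyCrawler handler are placeholders of mine, not part of the original answer:

    import edu.uci.ics.crawler4j.crawler.CrawlConfig;
    import edu.uci.ics.crawler4j.crawler.CrawlController;
    import edu.uci.ics.crawler4j.crawler.Page;
    import edu.uci.ics.crawler4j.crawler.WebCrawler;
    import edu.uci.ics.crawler4j.fetcher.PageFetcher;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

    public class ResumableCrawl {

        // Minimal page handler: crawler4j calls visit() for every fetched page.
        // (Placeholder; substitute your own WebCrawler subclass.)
        public static class MyCrawler extends WebCrawler {
            @Override
            public void visit(Page page) {
                System.out.println("Visited: " + page.getWebURL().getURL());
            }
        }

        public static void main(String[] args) throws Exception {
            CrawlConfig crawlConfig = new CrawlConfig();
            // Must point to the SAME folder on every run; this is where
            // crawler4j persists its intermediate crawl data.
            crawlConfig.setCrawlStorageFolder("/data/crawl/root"); // placeholder path
            // Persist the frontier so a restart continues where the last
            // run stopped instead of re-fetching already-visited URLs.
            crawlConfig.setResumableCrawling(true);
            crawlConfig.setMaxDepthOfCrawling(-1); // -1 = unlimited depth
            crawlConfig.setPolitenessDelay(1000);  // ms between requests, to go easy on the site

            PageFetcher pageFetcher = new PageFetcher(crawlConfig);
            RobotstxtServer robotstxtServer =
                    new RobotstxtServer(new RobotstxtConfig(), pageFetcher);
            CrawlController controller =
                    new CrawlController(crawlConfig, pageFetcher, robotstxtServer);

            controller.addSeed("https://example.com/"); // placeholder seed
            controller.start(MyCrawler.class, 4); // 4 = number of crawler threads
        }
    }

Restarting the same program with the same storage folder should then pick up the pending URLs persisted on disk rather than starting over from the seeds.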
answered Dec 7 '18 at 13:29
rzo
3,129

• Great, but how does this method work? What logic does it use?
  – Ahmed Sakr
  Dec 12 '18 at 0:09

• If it is enabled, crawler4j uses its internal Berkeley database to store intermediate crawl data (the frontier and the DocID server) in the location you specified by setting the crawl storage folder.
  – rzo
  Dec 12 '18 at 8:00