Trying to run inference on multiple CPUs











Lately I have been working with TensorFlow's Object Detection API. I have trained ssd_mobilenet_v2 on my own data, and the model works as expected.



I wanted to improve performance by using multiple CPU cores for certain ops. Currently, TensorFlow spreads the workload across all cores, using only a small fraction of each core available on my system (I do not know how this scheduling happens in the backend).



I tried setting the parameters device_count={"CPU": n_cpus}, inter_op_parallelism_threads=n_inter, and intra_op_parallelism_threads=n_intra in the tf.ConfigProto passed to tf.Session(). My aim was to specify the number of CPU devices to register and the number of threads to create in order to get a speedup, but this had no impact on performance.
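For reference, a minimal sketch of that configuration (n_cpus, n_inter, and n_intra here are placeholder values I experimented with):

    import tensorflow as tf

    n_cpus, n_inter, n_intra = 4, 2, 2  # placeholder values

    config = tf.ConfigProto(
        device_count={"CPU": n_cpus},          # register logical devices /cpu:0 ... /cpu:3
        inter_op_parallelism_threads=n_inter,  # thread pool for independent ops
        intra_op_parallelism_threads=n_intra,  # threads used inside a single op
    )

    a = tf.random_normal([1000, 1000])
    b = tf.random_normal([1000, 1000])
    c = tf.matmul(a, b)

    with tf.Session(config=config) as sess:
        sess.run(c)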



After that, I also tried to pin certain ops to certain logical CPU devices using with tf.device('/cpu:0'):.



Using with tf.device('/cpu:0'): on trivial operations such as tf.matmul did indeed speed things up. I could also see it in the CPU utilization, e.g. CPU 0 running at around 98-99% (which was what I was looking for).
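A minimal version of that toy test (assuming at least two logical CPU devices are registered via device_count, so that /cpu:1 exists):

    import tensorflow as tf

    # Two logical CPU devices so that both /cpu:0 and /cpu:1 exist.
    config = tf.ConfigProto(device_count={"CPU": 2},
                            inter_op_parallelism_threads=2)

    a = tf.random_normal([2000, 2000])
    b = tf.random_normal([2000, 2000])

    # Place one matmul on each logical device; the inter-op thread pool
    # can then run the two independent ops concurrently.
    with tf.device('/cpu:0'):
        c0 = tf.matmul(a, b)
    with tf.device('/cpu:1'):
        c1 = tf.matmul(a, b)

    with tf.Session(config=config) as sess:
        sess.run([c0, c1])

Note that /cpu:N is a logical TensorFlow device, not a hard affinity to a physical core; the OS still decides which core each thread actually runs on.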



However, when I tried the same tf.device() approach with model inference, CPU utilization went back to the default pattern, where the workload is shared between all cores (a single op being split across cores).
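Roughly, this is how I applied the device placement when loading the model (a sketch; PATH_TO_FROZEN_GRAPH is a placeholder for my exported frozen_inference_graph.pb):

    import tensorflow as tf

    PATH_TO_FROZEN_GRAPH = 'frozen_inference_graph.pb'  # placeholder path

    detection_graph = tf.Graph()
    with detection_graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
            graph_def.ParseFromString(f.read())
        # Wrap the import so every imported op is assigned to /cpu:0.
        with tf.device('/cpu:0'):
            tf.import_graph_def(graph_def, name='')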



I want to know whether the inference part can run on multiple CPU cores in parallel; my interest is in executing different ops of the inference graph in parallel on different cores.
So far I have had no success in making this work. Maybe my logic is flawed; I would appreciate some guidance :)



Additional info: I am using TensorFlow 1.11.0 (CPU-only build).










python tensorflow parallel-processing neural-network deep-learning






edited Nov 20 at 8:39
asked Nov 19 at 13:33
beerjamin 34










