Trying to run inference on multiple CPUs
Lately, I have been working with TensorFlow's Object Detection API. I have trained ssd_mobilenet_v2 on my own data and the model works as expected.
I wanted to improve performance by using multiple CPU cores for certain ops. Currently, TensorFlow spreads the workload across all cores, using a small fraction of each core available on my system (I do not know how this scheduling happens in the backend).
I tried adding the parameters device_count={ "CPU": n_cpus }, inter_op_parallelism_threads=n_inter, intra_op_parallelism_threads=n_intra to the configuration passed to tf.Session(). My aim was to specify the number of cores to be used and the number of threads to be created in order to get a speedup, but this had no impact on performance.
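For reference, this is roughly what that configuration looks like (a minimal sketch of the TF 1.x API; the n_cpus/n_inter/n_intra values here are placeholders, not tuned numbers, and the compat.v1 import is only there to keep the snippet runnable on newer TensorFlow — on 1.11 it is just `import tensorflow as tf`):

```python
# Sketch of the session configuration described above (TF 1.x graph-mode API).
import tensorflow.compat.v1 as tf  # on TF 1.11: `import tensorflow as tf`
tf.disable_eager_execution()       # only needed on TF 2.x; graph mode is the default on 1.x

n_cpus, n_inter, n_intra = 4, 2, 4  # placeholder values

config = tf.ConfigProto(
    device_count={"CPU": n_cpus},          # number of CPU devices TF exposes
    inter_op_parallelism_threads=n_inter,  # threads running independent ops concurrently
    intra_op_parallelism_threads=n_intra,  # threads used inside a single op (e.g. a matmul)
)

with tf.Session(config=config) as sess:
    a = tf.random_uniform([512, 512])
    b = tf.random_uniform([512, 512])
    result = sess.run(tf.matmul(a, b))
    print(result.shape)  # (512, 512)
```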
After that, I also tried to pin certain ops to certain cores using with tf.device('/cpu:0'):.
Using with tf.device('/cpu:0'): with trivial operations such as tf.matmul did indeed speed things up, and I could see it in the CPU utilization, e.g. CPU:0 being used at around 98-99% (which was what I was looking for).
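The pinning experiment looked roughly like this (a sketch, not the exact code; note that TensorFlow only exposes '/cpu:1' if device_count requests more than one CPU device, and the compat.v1 import is again just a shim for newer TF):

```python
# Sketch of pinning independent ops to separate CPU devices (TF 1.x API).
import tensorflow.compat.v1 as tf  # on TF 1.11: `import tensorflow as tf`
tf.disable_eager_execution()       # only needed on TF 2.x

config = tf.ConfigProto(device_count={"CPU": 2})  # expose two CPU devices

with tf.device("/cpu:0"):
    a = tf.random_uniform([256, 256])
    x = tf.matmul(a, a)

with tf.device("/cpu:1"):
    b = tf.random_uniform([256, 256])
    y = tf.matmul(b, b)

with tf.Session(config=config) as sess:
    # x and y have no data dependency, so the inter-op thread pool
    # can run them concurrently on their assigned devices.
    rx, ry = sess.run([x, y])
    print(rx.shape, ry.shape)  # (256, 256) (256, 256)
```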
However, when I tried the same tf.device() approach with model inference, CPU utilization went back to the default behavior, where the workload is shared between all cores (a single op being split across cores).
I want to know whether the inference part can run on multiple CPU cores in parallel; my interest is in executing different ops of the inference graph in parallel on different cores.
So far I have had no success in making this work. Maybe my logic is flawed; I would appreciate some guidance :)
Additional info: I am using TensorFlow 1.11.0 (CPU version).
python tensorflow parallel-processing neural-network deep-learning
edited Nov 20 at 8:39
asked Nov 19 at 13:33
beerjamin
34