UDP: multithreading with multiple clients?
I am working on a UDP client-server architecture where the server (naturally) can handle multiple clients. However, I've run into some problems. I currently have a "server" that accepts data on one socket bound to IPAddress.Any. If a datagram arrives from an unknown IP, a new client instance is created and all the relevant data is routed to it.
Here are the problems I've run into. When I call BeginReceiveFrom on both the "server" and the "clients", all of the data is received twice, which obviously doesn't help.
When I restart BeginReceiveFrom immediately after receiving data (but before parsing, checking, etc.), the data arrives in a random order and some of it goes missing. (I am connecting locally, so I doubt UDP is losing packets that frequently.)
With a similar TCP architecture, you can run BeginReceiveFrom on every new client without problems. What's different here? How does the TCP stack distribute incoming data to all of the connected clients?
How can I achieve something similar, or at the very least use more than one thread? Running a single thread for, say, 1,000 connections, and having to parse and handle every single packet before the next one can be received, is a massive performance hit.
Would it be an idea to parse and "deliver" the data to each of the clients, and have that data picked up and processed by one thread per client?
I know people usually want to see some code in order to help, but I don't think it would add much in this case (ask if you think it would and I can provide some).
My current flow is: BeginReceiveFrom > EndReceiveFrom > parse data > check data > BeginReceiveFrom > and so on.
UDP is by choice.
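To make that flow concrete, here is a minimal sketch of roughly what the setup described above looks like, assuming a raw Socket bound to IPAddress.Any; ClientState, Parse, the port and the buffer size are simplified placeholders, not the real code. The key point is the end of the callback: the next BeginReceiveFrom is only issued after parsing and checking have finished, so everything is serialized on the receive path.

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class UdpServer
{
    readonly Socket _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    readonly byte[] _buffer = new byte[0x10000];   // holds one datagram at a time
    readonly Dictionary<EndPoint, ClientState> _clients = new Dictionary<EndPoint, ClientState>();
    EndPoint _sender = new IPEndPoint(IPAddress.Any, 0);

    public void Start(int port)
    {
        _socket.Bind(new IPEndPoint(IPAddress.Any, port));
        BeginReceive();
    }

    void BeginReceive()
    {
        _socket.BeginReceiveFrom(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 ref _sender, OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        int length = _socket.EndReceiveFrom(ar, ref sender);

        // Unknown endpoint -> create a new client instance for it.
        if (!_clients.TryGetValue(sender, out ClientState client))
        {
            client = new ClientState(sender);
            _clients.Add(sender, client);
        }

        client.Parse(_buffer, length);   // parse + check happen here, on the receive path
        BeginReceive();                  // only now can the next datagram be picked up
    }
}

class ClientState
{
    public ClientState(EndPoint endPoint) { /* ... */ }
    public void Parse(byte[] buffer, int length) { /* ... */ }
}
```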
multithreading sockets udp
asked Nov 20 at 7:21 by Dennis19901
You could have one receiver thread, and once it receives a packet, hand it off to another thread for processing, so the receiver can keep receiving.
– immibis
Nov 21 at 0:02
After a lot of trial and error, this is what came to mind for me as well, but even after doing quite a bit of research I'm not much further. I was thinking about queueing the received data directly as byte[] and processing it on one or more other threads. I understand that lock is fairly expensive in this scenario; what options are there for making something like this happen? Can you also advise on using a thread pool for reading/processing the data from the buffer, and on how to notify the other threads that the queue has data available? Performance is a big issue for me. Thanks!
– Dennis19901
Nov 21 at 3:24
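To illustrate the hand-off suggested in the comments, here is a minimal sketch assuming .NET's BlockingCollection<T> is acceptable. It answers the locking and "notify other threads" questions implicitly: GetConsumingEnumerable blocks until an item is available, so no explicit lock or signalling is needed. The worker count and ProcessDatagram are placeholders, not a definitive implementation.

```csharp
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class UdpReceiver
{
    readonly Socket _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    readonly byte[] _buffer = new byte[0x10000];
    EndPoint _sender = new IPEndPoint(IPAddress.Any, 0);

    // Thread-safe queue of (sender, datagram); consumers block while it is empty.
    readonly BlockingCollection<(EndPoint Sender, byte[] Data)> _queue =
        new BlockingCollection<(EndPoint Sender, byte[] Data)>();

    public void Start(int port, int workerCount)
    {
        _socket.Bind(new IPEndPoint(IPAddress.Any, port));

        // Worker tasks: GetConsumingEnumerable blocks until data is enqueued,
        // so there is no explicit lock or wake-up signalling here. The loop
        // runs until CompleteAdding() is called on the queue.
        for (int i = 0; i < workerCount; i++)
        {
            Task.Run(() =>
            {
                foreach (var item in _queue.GetConsumingEnumerable())
                    ProcessDatagram(item.Sender, item.Data);
            });
        }

        BeginReceive();
    }

    void BeginReceive()
    {
        _socket.BeginReceiveFrom(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 ref _sender, OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        int length = _socket.EndReceiveFrom(ar, ref sender);

        // Copy the datagram out of the receive buffer, hand it off,
        // and go straight back to receiving.
        byte[] datagram = new byte[length];
        Buffer.BlockCopy(_buffer, 0, datagram, 0, length);
        _queue.Add((sender, datagram));

        BeginReceive();
    }

    void ProcessDatagram(EndPoint sender, byte[] data)
    {
        // Placeholder: parse, check, and dispatch to the right client here.
    }
}
```

BlockingCollection wraps a ConcurrentQueue by default and handles the blocking and wake-up internally, which is why no ManualResetEvent or lock appears above; if per-client ordering matters, the single queue would need to be split into one queue per client.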