C: thread safety and order of operations



























Consider the following C code:

    static sig_atomic_t x;
    static sig_atomic_t y;

    int foo()
    {
        x = 1;
        y = 2;
    }

First question: can the C compiler decide to "optimize" the code for foo to y = 2; x = 1; (in the sense that the memory location for y is changed before the memory location for x)? This would be equivalent, except when multiple threads or signals are involved.

If the answer to the first question is "yes": what should I do if I really want the guarantee that x is stored before y?





































      c thread-safety







asked Nov 21 '18 at 15:10 by Jeroen Demeyer (914 rep)
edited Nov 23 '18 at 10:25
























2 Answers
































Yes, the compiler may change the order of the two assignments, because the reordering is not "observable" as defined by the C standard: there are no side effects to the assignments (again, as defined by the C standard, which does not consider the existence of an outside observer).

In practice you need some kind of barrier/fence to guarantee the order, e.g., use the services provided by your multithreading environment, or possibly C11 <stdatomic.h> if available.
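A minimal sketch of the <stdatomic.h> approach, assuming a C11 compiler with atomics support (the read_pair function and the relaxed/release split are illustrative choices, not part of the question):

```c
#include <stdatomic.h>

static atomic_int x;
static atomic_int y;

void foo(void)
{
    /* The relaxed store to x may not be reordered past the release
       store to y: any thread that observes y == 2 with an acquire
       load is guaranteed to also observe x == 1. */
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    atomic_store_explicit(&y, 2, memory_order_release);
}

int read_pair(int *px)
{
    /* Acquire load pairs with the release store above. */
    int v = atomic_load_explicit(&y, memory_order_acquire);
    *px = atomic_load_explicit(&x, memory_order_relaxed);
    return v;
}
```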






answered Nov 21 '18 at 16:32 by Arkku (30.1k rep)













































            The C standard specifies a term called observable behavior. This means that at a minimum, the compiler/system has a few restrictions: it is not allowed to re-order expressions containing volatile-qualified operands, nor is it allowed to re-order input/output.



            Apart from those special cases, anything is fair game. It may execute y before x, it may execute them in parallel. It might optimize the whole code away as there are no observable side-effects in the code. And so on.



Please note that thread-safety and order of execution are different things. Threads are created explicitly by the programmer/libraries. A context switch may interrupt any variable access which is not atomic. That's another issue, and the solution is to use a mutex, the _Atomic qualifier or similar protection mechanisms.
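As a sketch of the _Atomic route mentioned above (assuming C11 atomics; the variable names follow the question):

```c
#include <stdatomic.h>

static _Atomic int x;
static _Atomic int y;

void foo(void)
{
    /* Plain assignments to _Atomic objects are sequentially
       consistent by default, so all threads agree that the store
       to x happens before the store to y. */
    x = 1;
    y = 2;
}
```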





            If the order matters, you should volatile-qualify the variables. In that case, the following guarantees are made by the language:



            C17 5.1.2.3 § 6 (the definition of observable behavior):




            Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.




            C17 5.1.2.3 § 4:




            In the abstract machine, all expressions are evaluated as specified by the semantics.




            Where "semantics" is pretty much the whole standard, for example the part that specifies that a ; introduces a sequence point. (In this case, C17 6.7.6: "The end of a full declarator is a sequence point." The term "sequenced before" is specified in C17 5.1.2.3 §3.)



            So given this:



            volatile int x = 1;
            volatile int y = 1;


            then the order of initialization is guaranteed to be x before y, as the ; of the first line guarantees the sequencing order, and volatile guarantees that the program strictly follows the evaluation order specified in the standard.
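Applied to the question's foo, the volatile version this answer argues for would look roughly like this (a sketch of this answer's position; note that the comment discussion below disputes whether this alone is sufficient on multi-core hardware):

```c
#include <signal.h>

static volatile sig_atomic_t x;
static volatile sig_atomic_t y;

void foo(void)
{
    /* Accesses to volatile objects are observable behavior, so the
       compiler may not reorder these two stores relative to each
       other. Whether that also constrains the hardware on a
       multi-core system is the point disputed in the comments. */
    x = 1;
    y = 2;
}
```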





            Now as it happens in the real world, volatile does not guarantee memory barriers on many compiler implementations for multi-core systems. Those implementations are not conforming.



            Opportunistic compilers might claim that the programmer must use system-specific memory barriers to guarantee order of execution. But in the case of volatile, that is not true, as proven above. They just want to dodge their responsibility and hand it over to the programmers. The C standard doesn't care if the CPU has 57 cores, branch prediction and instruction pipelining.






answered Nov 21 '18 at 16:30 by Lundin (107k rep)
























            • An operation on a volatile does not (have to) set a memory barrier because concurrent read/write access is undefined.

              – LWimsey
              Nov 21 '18 at 19:33











            • @LWimsey I just cited all over this answer why concurrent execution is well-defined. Sequencing is defined in C17 5.1.2.3 §3. Concurrent access of data memory is another story. The purpose of memory barriers is to guarantee order of execution, not to guarantee thread-safety of data. If you use the volatile keyword, then implementing memory barriers correctly in the underlying machine code is the C compiler's job.

              – Lundin
              Nov 22 '18 at 7:26













            • Your quotes are about evaluation order on volatile objects. The compiler must ensure they are evaluated according to the rules of the abstract machine, but that does not mean the effects of those operations are observed by other threads in the same order. In fact, reasoning about ordering with respect to other threads is meaningless because volatile does not make an object data-race-free. I am not sure though how you see the difference between "concurrent execution is well-defined" and then "Concurrent access of data memory is another story", but concurrent read/write access ....

              – LWimsey
              Nov 23 '18 at 4:01











            • .... on a non-atomic object is undefined behavior per C17 5.1.2.4 §35: The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. Since operations on volatile objects are only well-defined within a single thread (without additional synchronization), memory barriers are not involved. Microsoft compilers have used memory barriers on volatile operations, but that is based on a stronger guarantee than necessary.

              – LWimsey
              Nov 23 '18 at 4:02











            • @LWimsey What I mean is that order of execution per thread is well-defined. If the compiler were to parallelize execution so that it gets executed by multiple cores - which the C standard does not inhibit - it must still follow the rules of the abstract machine. This includes things like instruction caching, branch prediction and pipeline execution.

              – Lundin
              Nov 23 '18 at 7:35












            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            1














            Yes, the compiler may change the order of the two assignments, because the reordering is not "observable" as defined by the C standard, e.g., there are no side-effects to the assignments (again, as defined by the C standard, which does not consider the existence of an outside observer).



            In practice you need some kind of barrier/fence to guarantee the order, e.g., use the services provided by your multithreading environment, or possibly C11 stdatomic.h if available.






            share|improve this answer




























              1














              Yes, the compiler may change the order of the two assignments, because the reordering is not "observable" as defined by the C standard, e.g., there are no side-effects to the assignments (again, as defined by the C standard, which does not consider the existence of an outside observer).



              In practice you need some kind of barrier/fence to guarantee the order, e.g., use the services provided by your multithreading environment, or possibly C11 stdatomic.h if available.






              share|improve this answer


























                1












                1








                1







                Yes, the compiler may change the order of the two assignments, because the reordering is not "observable" as defined by the C standard, e.g., there are no side-effects to the assignments (again, as defined by the C standard, which does not consider the existence of an outside observer).



                In practice you need some kind of barrier/fence to guarantee the order, e.g., use the services provided by your multithreading environment, or possibly C11 stdatomic.h if available.






                share|improve this answer













                Yes, the compiler may change the order of the two assignments, because the reordering is not "observable" as defined by the C standard, e.g., there are no side-effects to the assignments (again, as defined by the C standard, which does not consider the existence of an outside observer).



                In practice you need some kind of barrier/fence to guarantee the order, e.g., use the services provided by your multithreading environment, or possibly C11 stdatomic.h if available.







                share|improve this answer












                share|improve this answer



                share|improve this answer










                answered Nov 21 '18 at 16:32









                ArkkuArkku

                30.1k44866




                30.1k44866

























                    0














                    The C standard specifies a term called observable behavior. This means that at a minimum, the compiler/system has a few restrictions: it is not allowed to re-order expressions containing volatile-qualified operands, nor is it allowed to re-order input/output.



                    Apart from those special cases, anything is fair game. It may execute y before x, it may execute them in parallel. It might optimize the whole code away as there are no observable side-effects in the code. And so on.



                    Please note that thread-safety and order of execution are different things. Threads are created explicitly by the programmer/libraries. A context switch may interrupt any variable acccess which is not atomic. That's another issue and the solution is to use mutex, _Atomic qualifier or similar protection mechanisms.





                    If the order matters, you should volatile-qualify the variables. In that case, the following guarantees are made by the language:



                    C17 5.1.2.3 § 6 (the definition of observable behavior):




                    Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.




                    C17 5.1.2.3 § 4:




                    In the abstract machine, all expressions are evaluated as specified by the semantics.




                    Where "semantics" is pretty much the whole standard, for example the part that specifies that a ; consists of a sequence point. (In this case, C17 6.7.6 "The end of a full
                    declarator is a sequence point." The term "sequenced before" is specified in C17 5.1.2.3 §3).



                    So given this:



                    volatile int x = 1;
                    volatile int y = 1;


                    then the order of initialization is guaranteed to be x before y, as the ; of the first line guarantees the sequencing order, and volatile guarantees that the program strictly follows the evaluation order specified in the standard.





                    Now as it happens in the real world, volatile does not guarantee memory barriers on many compiler implementations for multi-core systems. Those implementations are not conforming.



                    Opportunist compilers might claim that the programmer must use system-specific memory barriers to guarantee order of execution. But in case of volatile, that is not true, as proven above. They just want to dodge their responsibility and hand it over to the programmers. The C standard doesn't care if the CPU has 57 cores, branch prediction and instruction pipelining.






                    share|improve this answer
























                    • An operation on a volatile does not (have to) set a memory barrier because concurrent read/write access is undefined.

                      – LWimsey
                      Nov 21 '18 at 19:33











                    • @LWimsey I just cited all over this answer why it concurrent execution is well-defined. Sequencing is defined in C17 5.1.2.3 §3. Concurrent access of data memory is another story. The purpose of memory barriers is to guarantee order of execution, not to guarantee thread-safety of data. If you use the volatile keyword, then implementing memory barriers correctly in the underlying machine code is the C compiler's job.

                      – Lundin
                      Nov 22 '18 at 7:26













                    • Your quotes are about evaluation order on volatile objects. The compiler must ensure they are evaluated according to the rules of the abstact machine, but that does not mean the effects of those operations are observed by other threads in the same order. In fact, reasoning about ordering with respect to other threads is meaningless because volatile does not make an object data-race-free. I am not sure though how you see the difference between "concurrent execution is well-defined" and then "Concurrent access of data memory is another story", but concurrent read/write access ....

                      – LWimsey
                      Nov 23 '18 at 4:01











                    • .... on a non-atomic object is undefined behavior per C17 5.1.2.4 §35: The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. Since operations on volatile objects are only well-defined within a single thread (without additional synchronization), memory barriers are not involved. Microsoft compilers have used memory barriers on volatile operations, but that is based on a stronger guarantee than necessary.

                      – LWimsey
                      Nov 23 '18 at 4:02











                    • @LWimsey What I mean is that order of execution per thread is well-defined. If the compiler were to parallelize execution so that it get executed by multiple cores - which the C standard does not inhibit - it must still follow the rules of the abstract machine. This includes things like instruction caching, branch prediction and pipeline execution.

                      – Lundin
                      Nov 23 '18 at 7:35
















                    0














                    The C standard specifies a term called observable behavior. This means that at a minimum, the compiler/system has a few restrictions: it is not allowed to re-order expressions containing volatile-qualified operands, nor is it allowed to re-order input/output.



                    Apart from those special cases, anything is fair game. It may execute y before x, it may execute them in parallel. It might optimize the whole code away as there are no observable side-effects in the code. And so on.



                    Please note that thread-safety and order of execution are different things. Threads are created explicitly by the programmer/libraries. A context switch may interrupt any variable acccess which is not atomic. That's another issue and the solution is to use mutex, _Atomic qualifier or similar protection mechanisms.





                    If the order matters, you should volatile-qualify the variables. In that case, the following guarantees are made by the language:



                    C17 5.1.2.3 § 6 (the definition of observable behavior):




                    Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.




                    C17 5.1.2.3 § 4:




                    In the abstract machine, all expressions are evaluated as specified by the semantics.




                    Where "semantics" is pretty much the whole standard, for example the part that specifies that a ; consists of a sequence point. (In this case, C17 6.7.6 "The end of a full
                    declarator is a sequence point." The term "sequenced before" is specified in C17 5.1.2.3 §3).



                    So given this:



                    volatile int x = 1;
                    volatile int y = 1;


                    then the order of initialization is guaranteed to be x before y, as the ; of the first line guarantees the sequencing order, and volatile guarantees that the program strictly follows the evaluation order specified in the standard.





                    Now as it happens in the real world, volatile does not guarantee memory barriers on many compiler implementations for multi-core systems. Those implementations are not conforming.



                    Opportunist compilers might claim that the programmer must use system-specific memory barriers to guarantee order of execution. But in case of volatile, that is not true, as proven above. They just want to dodge their responsibility and hand it over to the programmers. The C standard doesn't care if the CPU has 57 cores, branch prediction and instruction pipelining.






                    share|improve this answer
























                    • An operation on a volatile does not (have to) set a memory barrier because concurrent read/write access is undefined.

                      – LWimsey
                      Nov 21 '18 at 19:33











                    • @LWimsey I just cited all over this answer why it concurrent execution is well-defined. Sequencing is defined in C17 5.1.2.3 §3. Concurrent access of data memory is another story. The purpose of memory barriers is to guarantee order of execution, not to guarantee thread-safety of data. If you use the volatile keyword, then implementing memory barriers correctly in the underlying machine code is the C compiler's job.

                      – Lundin
                      Nov 22 '18 at 7:26













                    • Your quotes are about evaluation order on volatile objects. The compiler must ensure they are evaluated according to the rules of the abstact machine, but that does not mean the effects of those operations are observed by other threads in the same order. In fact, reasoning about ordering with respect to other threads is meaningless because volatile does not make an object data-race-free. I am not sure though how you see the difference between "concurrent execution is well-defined" and then "Concurrent access of data memory is another story", but concurrent read/write access ....

                      – LWimsey
                      Nov 23 '18 at 4:01











                    • .... on a non-atomic object is undefined behavior per C17 5.1.2.4 §35: The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. Since operations on volatile objects are only well-defined within a single thread (without additional synchronization), memory barriers are not involved. Microsoft compilers have used memory barriers on volatile operations, but that is based on a stronger guarantee than necessary.

                      – LWimsey
                      Nov 23 '18 at 4:02











                    • @LWimsey What I mean is that order of execution per thread is well-defined. If the compiler were to parallelize execution so that it get executed by multiple cores - which the C standard does not inhibit - it must still follow the rules of the abstract machine. This includes things like instruction caching, branch prediction and pipeline execution.

                      – Lundin
                      Nov 23 '18 at 7:35














                    0












                    0








                    0







                    The C standard specifies a term called observable behavior. This means that at a minimum, the compiler/system has a few restrictions: it is not allowed to re-order expressions containing volatile-qualified operands, nor is it allowed to re-order input/output.



                    Apart from those special cases, anything is fair game. It may execute y before x, it may execute them in parallel. It might optimize the whole code away as there are no observable side-effects in the code. And so on.



                    Please note that thread-safety and order of execution are different things. Threads are created explicitly by the programmer/libraries. A context switch may interrupt any variable acccess which is not atomic. That's another issue and the solution is to use mutex, _Atomic qualifier or similar protection mechanisms.





                    If the order matters, you should volatile-qualify the variables. In that case, the following guarantees are made by the language:



                    C17 5.1.2.3 § 6 (the definition of observable behavior):




                    Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.




                    C17 5.1.2.3 § 4:




                    In the abstract machine, all expressions are evaluated as specified by the semantics.




                    Where "semantics" is pretty much the whole standard, for example the part that specifies that a ; consists of a sequence point. (In this case, C17 6.7.6 "The end of a full
                    declarator is a sequence point." The term "sequenced before" is specified in C17 5.1.2.3 §3).



                    So given this:



                    volatile int x = 1;
                    volatile int y = 1;


                    then the order of initialization is guaranteed to be x before y, as the ; of the first line guarantees the sequencing order, and volatile guarantees that the program strictly follows the evaluation order specified in the standard.





                    Now as it happens in the real world, volatile does not guarantee memory barriers on many compiler implementations for multi-core systems. Those implementations are not conforming.



                    Opportunist compilers might claim that the programmer must use system-specific memory barriers to guarantee order of execution. But in case of volatile, that is not true, as proven above. They just want to dodge their responsibility and hand it over to the programmers. The C standard doesn't care if the CPU has 57 cores, branch prediction and instruction pipelining.






                    share|improve this answer













                    The C standard specifies a term called observable behavior. This means that at a minimum, the compiler/system has a few restrictions: it is not allowed to re-order expressions containing volatile-qualified operands, nor is it allowed to re-order input/output.



                    Apart from those special cases, anything is fair game. It may execute y before x, it may execute them in parallel. It might optimize the whole code away as there are no observable side-effects in the code. And so on.



                    Please note that thread-safety and order of execution are different things. Threads are created explicitly by the programmer/libraries. A context switch may interrupt any variable acccess which is not atomic. That's another issue and the solution is to use mutex, _Atomic qualifier or similar protection mechanisms.





                    If the order matters, you should volatile-qualify the variables. In that case, the following guarantees are made by the language:



                    C17 5.1.2.3 § 6 (the definition of observable behavior):




                    Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.




                    C17 5.1.2.3 § 4:




                    In the abstract machine, all expressions are evaluated as specified by the semantics.




                    Where "semantics" is pretty much the whole standard, for example the part that specifies that a ; consists of a sequence point. (In this case, C17 6.7.6 "The end of a full
                    declarator is a sequence point." The term "sequenced before" is specified in C17 5.1.2.3 §3).



                    So given this:



                    volatile int x = 1;
                    volatile int y = 1;


                    then the order of initialization is guaranteed to be x before y, as the ; of the first line guarantees the sequencing order, and volatile guarantees that the program strictly follows the evaluation order specified in the standard.





                    Now as it happens in the real world, volatile does not guarantee memory barriers on many compiler implementations for multi-core systems. Those implementations are not conforming.



                    Opportunist compilers might claim that the programmer must use system-specific memory barriers to guarantee order of execution. But in case of volatile, that is not true, as proven above. They just want to dodge their responsibility and hand it over to the programmers. The C standard doesn't care if the CPU has 57 cores, branch prediction and instruction pipelining.







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered Nov 21 '18 at 16:30









                    LundinLundin

                    107k17158262




                    107k17158262













                    • An operation on a volatile does not (have to) set a memory barrier because concurrent read/write access is undefined.

                      – LWimsey
                      Nov 21 '18 at 19:33











                    • @LWimsey I just cited all over this answer why it concurrent execution is well-defined. Sequencing is defined in C17 5.1.2.3 §3. Concurrent access of data memory is another story. The purpose of memory barriers is to guarantee order of execution, not to guarantee thread-safety of data. If you use the volatile keyword, then implementing memory barriers correctly in the underlying machine code is the C compiler's job.

                      – Lundin
                      Nov 22 '18 at 7:26













                    • Your quotes are about evaluation order on volatile objects. The compiler must ensure they are evaluated according to the rules of the abstact machine, but that does not mean the effects of those operations are observed by other threads in the same order. In fact, reasoning about ordering with respect to other threads is meaningless because volatile does not make an object data-race-free. I am not sure though how you see the difference between "concurrent execution is well-defined" and then "Concurrent access of data memory is another story", but concurrent read/write access ....

                      – LWimsey
                      Nov 23 '18 at 4:01











                    • .... on a non-atomic object is undefined behavior per C17 5.1.2.4 §35: The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. Since operations on volatile objects are only well-defined within a single thread (without additional synchronization), memory barriers are not involved. Microsoft compilers have used memory barriers on volatile operations, but that is based on a stronger guarantee than necessary.

                      – LWimsey
                      Nov 23 '18 at 4:02











                    • @LWimsey What I mean is that order of execution per thread is well-defined. If the compiler were to parallelize execution so that it gets executed by multiple cores - which the C standard does not inhibit - it must still follow the rules of the abstract machine. This includes things like instruction caching, branch prediction and pipeline execution.

                      – Lundin
                      Nov 23 '18 at 7:35