Why aren't primality tests easily linear in time complexity?












Why don't we consider them linear? I don't understand: you only have to check for factors up to sqrt(n), so it even looks faster than linear.
I assume it only fails to be linear if we count operations relative to the size of the input in its binary representation. But why would we do that?
That seems wrong to me. Shouldn't the growth in running time be measured against the number itself? Why do we measure it against the binary representation?
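For concreteness, here is a minimal sketch (Python, added for illustration; it is not part of the original question) of the trial-division test the question has in mind. If each divisibility check is counted as a single step, it performs about sqrt(n) steps, which is where the "faster than linear" impression comes from.

```python
# Illustrative sketch of trial division up to sqrt(n).
# If each '%' were a constant-time step, this would take about sqrt(n) steps;
# for arbitrarily large n, however, each '%' itself costs more than O(1).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:        # about sqrt(n) iterations
        if n % d == 0:       # one divisibility check per iteration
            return False
        d += 1
    return True

print(is_prime(97), is_prime(100))   # True False
```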










algorithm-analysis time-complexity runtime-analysis






edited 1 hour ago by David Richerby
asked 5 hours ago by bilanush












  • You're assuming that a single divisibility check is a constant-time operation, which is definitely not true if you let the numbers get arbitrarily large. In fact it's more than O(n).
    – hobbs
    1 hour ago

  • So this is the answer, then? Not that we care about the binary representation, but that you are basically giving a different reason. Is that correct?
    – bilanush
    1 hour ago

  • No, just pointing out an additional problem.
    – hobbs
    1 hour ago

  • OK, but that doesn't even say that testing divisibility takes more than linear time, only division itself.
    – bilanush
    1 hour ago

  • Also, I would be happy if anyone explained why we care about the binary representation.
    – bilanush
    1 hour ago
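As an aside on the first comment above, a quick experiment (an editor's illustrative sketch, not part of the thread) shows that a single divisibility check on arbitrarily large integers is not a constant-time operation; its cost grows with the number of digits of the operand.

```python
# Illustrative only: time 100 modulo operations on integers of growing size.
# The cost of a single 'n % d' grows with the number of digits of n,
# so a divisibility check is not a constant-time step.
import timeit

for digits in (10_000, 100_000, 1_000_000):
    n = 10 ** digits                      # an integer with `digits` decimal digits
    t = timeit.timeit(lambda: n % 3, number=100)
    print(f"{digits:>9} digits: {t:.4f} s for 100 modulo operations")
```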




























2 Answers
































Simple. When you give the number one trillion as input to your algorithm, do you give it as 1'000'000'000'000, or as a terabyte-long string of ones?

And by all means, you are free to choose whichever representation you feel comfortable with.

We analyze the runtime as a function of the size of the input, not of the magnitude of the number that the input represents.
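To make the size-versus-magnitude distinction concrete, here is a small sketch (Python, illustrative and not part of the original answer): for a k-bit input, the magnitude can be close to 2^k, so the roughly sqrt(n) trial divisions grow like 2^(k/2), i.e. exponentially in the input size.

```python
# Illustrative only: relate the input *size* in bits to the number of
# trial divisions needed for a number of that magnitude (~ sqrt(n)).
import math

for bits in (16, 32, 64, 128):
    n = 2 ** bits - 1                  # largest value representable in `bits` bits
    divisions = math.isqrt(n)          # roughly sqrt(n) candidate divisors
    print(f"{bits:>3}-bit input -> up to about {divisions} trial divisions")
```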






answered 4 hours ago by Pål GD





















  • Sorry, but this is exactly what I don't understand. Why do we care about the binary representation? It looks like a trick: taking the binary input and asking about its complexity. Why not look at the number itself, which is what we truly care about? The whole point of complexity is to assess how fast the running time grows as we move to greater numbers. So the most logical way of looking at it is to assess how fast it grows when we move, for example, from 10 to 100, i.e. roughly 2^4 compared to 2^7; the growth is exactly proportional to the input number.
    – bilanush
    2 hours ago

  • We as humans deal with the number itself, not with its binary form. Why on Earth would the complexity relative to the binary input matter? We are only trying to figure out how complicated the algorithm gets as we go to greater numbers; it doesn't matter at all how the computer chooses to write them. Bottom line: we should care about the number of operations the computer does relative to the input, and about the magnitude of the number, because that is what we have in our heads: the rate at which the complexity grows as we go to greater numbers.
    – bilanush
    2 hours ago

  • @bilanush No, we as humans are incapable of doing math on "the number itself". Usually we use the base-10 representation, which is the same size as the base-2 representation, within a constant factor.
    – hobbs
    1 hour ago

  • I agree. But think about it: why should we care about the size of the input? That seems like a purely cosmetic factor here. What we really care about is how fast the running time grows as we feed in greater numbers. What's the point of the input size? It tells me nothing. When I am talking about primes, what I really care about is the rate of change in going from 1000 to 100000. I really do care about the magnitude of the number. Why should I care about the binary input, or even the base-10 representation for that matter?
    – bilanush
    1 hour ago

  • In complexity we are always talking about the magnitude of the number, which is what we are dealing with. We are always interested in knowing how complex it becomes as we move to greater numbers.
    – bilanush
    1 hour ago

































We use binary simply because it is the base in which computers operate; in any case, it is irrelevant which base is used to analyze an algorithm's complexity.

The number of digits of a number $n$ in bases $b_1$ and $b_2$ is about $\log_{b_1}n$ and $\log_{b_2}n$, respectively. These two quantities are related by
$$\log_{b_1}n = \frac{1}{\log_{b_2}b_1} \cdot \log_{b_2}n,$$
so, as far as asymptotic complexity analysis is concerned, the algorithm ends up in the same class no matter which base we use to represent the input, because the representation lengths differ only by a constant factor.
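A tiny check of this constant-factor relationship (Python, added for illustration; it uses only the standard library):

```python
# Illustrative only: digit counts of the same number in base 2 and base 10
# differ by roughly the constant factor log(10)/log(2) ~= 3.32.
import math

n = 10 ** 12
binary_digits = n.bit_length()     # number of base-2 digits of n
decimal_digits = len(str(n))       # number of base-10 digits of n

print(binary_digits, decimal_digits)       # 40 13
print(binary_digits / decimal_digits)      # ~3.08 for this n
print(math.log2(10))                       # ~3.32, the asymptotic ratio
```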






answered 22 mins ago by Sandro Lovnički




















