Why is this Scala code apparently not running on the Spark workers, but only on the Spark driver node?























I was using the code mentioned here to create a HashMap in Scala; I am copy-pasting it below for convenience:



def genList(xx: String) = {
  Seq("one", "two", "three", "four")
}

val oriwords = Set("hello", "how", "are", "you")

val newMap = (Map[String, (String, Int)]() /: oriwords) (
  (cmap, currentWord) => {
    val xv = 2

    genList(currentWord).foldLeft(cmap) {
      (acc, ps) => {
        val src = acc get ps

        if (src == None) {
          acc + (ps -> ((currentWord, xv)))
        }
        else {
          if (src.get._2 < xv) {
            acc + (ps -> ((currentWord, xv)))
          }
          else acc
        }
      }
    }
  }
)

println(newMap)


Note: The above code works when oriwords is small; however, it fails when oriwords is large, apparently because the whole computation happens on the Spark driver node.



When I run it, I get an out-of-memory exception, as follows:



 WARN  HeartbeatReceiver:66 - Removing executor driver with no recent heartbeats: 159099 ms exceeds timeout 120000 ms
Exception in thread "dispatcher-event-loop-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "dispatcher-event-loop-1"
Exception in thread "refresh progress" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space


How can I force the calculation to happen on the Spark cluster, and have the generated HashMap stored on the cluster itself, rather than being computed and held entirely on the Spark driver node?










scala apache-spark

asked Nov 19 at 12:35 by user3243499
edited Nov 19 at 22:41 by halfer
2 Answers

















Accepted answer (score 5)










Things need to be in an RDD, Dataset, DataFrame, etc. for Spark to distribute your computations. Basically everything happens on the driver, except for the code inside higher-order functions such as map and foreach called on one of those structures.
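To make that concrete, here is a minimal sketch (not part of the original answer) of how the question's fold could be expressed as RDD transformations so the per-word work runs on the executors. genList, oriwords, and the score xv mirror the question's code; the SparkSession setup, object name, and output path are assumptions.

    import org.apache.spark.sql.SparkSession

    object DistributedNewMap {
      // Same stub as in the question: expands one word into candidate keys.
      def genList(xx: String): Seq[String] = Seq("one", "two", "three", "four")

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("distributed-newmap").getOrCreate()
        val sc = spark.sparkContext

        val oriwords = Seq("hello", "how", "are", "you")
        val xv = 2

        // Distribute the words, expand each into (key, (word, score)) pairs, and keep
        // the highest-scoring pair per key; reduceByKey runs on the executors, so the
        // intermediate data never has to fit into the driver's heap.
        val newMap = sc.parallelize(oriwords)
          .flatMap(word => genList(word).map(key => (key, (word, xv))))
          .reduceByKey((a, b) => if (a._2 >= b._2) a else b)

        // Save the result on the cluster instead of collecting it back to the driver.
        newMap.saveAsTextFile("hdfs:///tmp/newMap") // illustrative path

        spark.stop()
      }
    }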






answered Nov 19 at 12:38 by Dominic Egger





















• Where does a Map get stored? On the Spark driver or across the Spark cluster? It is neither an RDD, a Dataset, nor a DataFrame.
  – user3243499
  Nov 19 at 12:40








• Just on the driver. Distribution only happens for the types listed above, not for a Scala standard-library Map.
  – Dominic Egger
  Nov 19 at 12:42










• So you are saying that we cannot have a distributed HashMap on Spark written in Scala?
  – user3243499
  Nov 19 at 12:44






• Well, there are technologies that give you key-value storage with fast random-access lookup (e.g. Redis or HBase), and you can certainly work with key-value RDDs. What do you want to do; which property of Map do you actually need?
  – Dominic Egger
  Nov 19 at 12:49










• I am building this HashMap for spell-check lookup. I expect it to be very big, around 12×10^9 words.
  – user3243499
  Nov 19 at 12:51
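As a hedged illustration of what the comments above discuss (the SparkSession setup, file paths, and dictionary layout are assumptions), a dictionary of that size could be kept as a key-value pair RDD on the cluster and queried with a join, rather than held in a driver-side HashMap:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("spellcheck-lookup").getOrCreate()
    val sc = spark.sparkContext

    // Large dictionary kept distributed: one tab-separated (word, suggestion) pair per line.
    val dictionary = sc.textFile("hdfs:///data/dictionary.tsv") // illustrative path
      .map(_.split("\t", 2))
      .collect { case Array(word, suggestion) => (word, suggestion) }

    // Words to check, keyed by themselves so they can be joined against the dictionary.
    val queries = sc.parallelize(Seq("helo", "how", "aer")).map(w => (w, ()))

    // The join runs across the cluster; only the (typically small) result is written out
    // or collected, so the driver never needs to hold the full dictionary.
    val matches = queries.join(dictionary).map { case (word, (_, suggestion)) => (word, suggestion) }
    matches.saveAsTextFile("hdfs:///tmp/spellcheck-matches") // illustrative output path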


















Answer (score 0)













Spark uses the DataFrame and RDD abstractions to represent data; it doesn't use Scala Maps. So you need to wrap your data in an RDD or a DataFrame (the preferred option). Depending on the type of data you have, there are different methods to load it.
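For instance, here is a minimal sketch (the file path, app name, and SparkSession setup are assumptions) of two common ways to get data into Spark's distributed abstractions instead of a plain Scala Map:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("load-example").getOrCreate()
    import spark.implicits._

    // From a small in-memory collection (mostly useful for tests and examples):
    val wordsDs = Seq("hello", "how", "are", "you").toDS()

    // From a file on distributed storage, for data too large for the driver:
    val fileDs = spark.read.textFile("hdfs:///data/words.txt") // illustrative path

    // Transformations on either Dataset run on the executors, not on the driver.
    val pairs = wordsDs.flatMap(word => Seq("one", "two", "three", "four").map(key => (key, word)))
    pairs.show()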






answered Nov 19 at 12:39 by Blokje5




















