PySpark HBase bulk load: org.apache.hadoop.hbase.client.Put cannot be cast to org.apache.hadoop.hbase.Cell

I'm trying to do a bulk load into HBase from PySpark using HFiles, as in this post: https://stackoverflow.com/a/35077987/10585126



My code:



conf = {"hbase.zookeeper.qourum": host,
"zookeeper.znode.parent": "/hbase",
"hbase.mapred.outputtable": table,
"mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2",
"mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
"mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"

def csv_to_key_value(row):
    puids = row.split(",")
    result = []
    for (num, puid) in list(enumerate(puids))[1:]:
        if puid:
            val_tup = (puids[0], [puids[0], "sg", 'seg'+str(num)+'value', str(puid)])
            result.append(val_tup)
            ids_tup = (puids[0], [puids[0], "sg", 'seg'+str(num)+'id', str(num)])
            result.append(ids_tup)
    return result


data = sc.textFile(path_to_hdfs)
load_rdd = data.flatMap(lambda line: line.split("\n")).flatMap(csv_to_key_value).sortByKey(True)
load_rdd.saveAsNewAPIHadoopFile(path + str(sc.startTime),
                                "org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2",
                                conf=conf,
                                keyConverter=keyConv,
                                valueConverter=valueConv)
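
To make the record shape concrete, here is what csv_to_key_value returns for a hypothetical input line (the line itself is made up for illustration; each tuple is the (rowkey, [rowkey, family, qualifier, value]) list that StringListToPutConverter consumes):

csv_to_key_value("row1,a,,c")
# -> [('row1', ['row1', 'sg', 'seg1value', 'a']),
#     ('row1', ['row1', 'sg', 'seg1id', '1']),
#     ('row1', ['row1', 'sg', 'seg3value', 'c']),
#     ('row1', ['row1', 'sg', 'seg3id', '3'])]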


But I can't get past the exception: java.lang.ClassCastException: org.apache.hadoop.hbase.client.Put cannot be cast to org.apache.hadoop.hbase.Cell



Has anyone faced this problem?
I'm using PySpark 1.6.0 (CDH 5.9.0) with hbase-examples-1.2.0-cdh5.9.0.jar and spark-examples-1.6.0-cdh5.9.0-hadoop2.6.0-cdh5.9.0.jar.



P.S. Loading with Puts works well!
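
For reference, the Put path that does work is essentially the standard PySpark hbase_outputformat pattern; this is a rough sketch rather than my exact job (the conf keys and variable names here are assumed):

put_conf = {"hbase.zookeeper.quorum": host,
            "hbase.mapred.outputtable": table,
            "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
            "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
            "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

# Same (rowkey, [rowkey, family, qualifier, value]) RDD, written as Puts:
load_rdd.saveAsNewAPIHadoopDataset(conf=put_conf,
                                   keyConverter=keyConv,
                                   valueConverter=valueConv)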










      apache-spark pyspark hbase





