Pyspark RDD “list index out of range” error

I have an RDD in this form:



[[['a'],['b,c,d','e,f,g']],[['h'],['i,j,k','l,m,n']]]


What I wanted to achieve:



[['a ,b,c,d', 'a ,e,f,g'], ['h ,i,j,k', 'h ,l,m,n']]


What I did:



def pass_row(line):
    new_line = []
    key = ''.join(line[0])
    for el in line[1]:
        el = key + ' ,' + el
        new_line.append(el)
    return new_line

rdd.map(pass_row)


It works for smaller samples of data. However, I get a list index out of range error on the line for el in line[1]: when I try to run it on my whole dataset.



Basically, I have one key (let's say ['a']) for ~100 different sets of values like ['b,c,d','e,f,g']. My ultimate goal is to have it as a Spark DataFrame in the form of rows:



col1 col2 col3 col4
a    b    c    d
a    e    f    g
h    i    j    k
h    l    m    n


Thank you for any advice!

python python-3.x pyspark rdd

asked Nov 21 '18 at 17:08 by Grevioos

  • Apparently you have a record which has one element instead of two, so you got an error for line[1].

    – OmG
    Nov 21 '18 at 17:33

1 Answer

Your error seems related more to your data than to your function (which looks correct, although a bit overcomplicated); it looks like the function was applied to a line that didn't have a line[1].



Could you make sure that the number of elements in each line is constant across your actual dataset, for example with:



def pass_row(line):
    assert len(line) == 2
    return ["%s, %s" % (''.join(line[0]), el) for el in line[1]]


That being said, for your actual goal you should probably stop dealing with strings at that point and instead build your data directly as a 2D array, for example with:



def pass_row(line):
    return [line[0] + el.split(',') for el in line[1]]

>>> a = [[['a'],['b,c,d','e,f,g']],[['h'],['i,j,k','l,m,n']]]
>>> b = [pass_row(i) for i in a]
>>> b
[[['a', 'b', 'c', 'd'], ['a', 'e', 'f', 'g']], [['h', 'i', 'j', 'k'], ['h', 'l', 'm', 'n']]]


One warning here: you can't feed a DataFrame directly with that result, as each group of prefix-generated rows is still nested in its own list (it's only a "list of 2D arrays").



You can flatten it with the built-in sum function, which you could easily transpose into a reduce step:



>>> sum(b, [])
[['a', 'b', 'c', 'd'], ['a', 'e', 'f', 'g'], ['h', 'i', 'j', 'k'], ['h', 'l', 'm', 'n']]
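
For reference, the same flattening written as an explicit reduce; a plain-Python sketch using only the standard library:

from functools import reduce
from operator import add

# [] is the initial accumulator; each step concatenates one group of rows
flat = reduce(add, b, [])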


Your solution would thus need 3 steps (see the PySpark sketch after this list):




  • map your dataset with pass_row as you do

  • reduce the result with the sum built-in function, applied with [] as the initial accumulator

  • feed the result to a Spark DataFrame
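
In PySpark itself, flatMap can collapse the map and reduce steps into a single pass. A minimal sketch of the whole pipeline, assuming an active SparkSession named spark and rdd as in the question (the column names col1..col4 are taken from the desired output):

def pass_row(line):
    return [line[0] + el.split(',') for el in line[1]]

# flatMap applies pass_row to each record and flattens the resulting groups
rows = rdd.flatMap(pass_row)

# Build the DataFrame directly from the flattened rows
df = spark.createDataFrame(rows, ['col1', 'col2', 'col3', 'col4'])
df.show()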


In plain Python, the following one-liner does the job:



>>> fn = lambda ls: sum([[i[0] + el.split(',') for el in i[1]] for i in ls], [])
>>> fn([[['a'],['b,c,d','e,f,g']],[['h'],['i,j,k','l,m,n']]])
[['a', 'b', 'c', 'd'], ['a', 'e', 'f', 'g'], ['h', 'i', 'j', 'k'], ['h', 'l', 'm', 'n']]

answered Nov 22 '18 at 10:30 by theplatypus, edited Dec 2 '18 at 15:39