A question about sorting tables.

Hello All!
Just a quick question: performance-wise, which is better, declaring an itab as a 'SORTED TABLE OF' up front, or using 'SORT itab' later?
Thanks in advance!
Moderator message: please try yourself and search for available information and previous, similar discussions.
Edited by: Thomas Zloch on Feb 23, 2012
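For anyone finding this thread later, here is a minimal sketch of the two approaches in question (report, type and field names are purely illustrative, not from the original post):

REPORT z_sorted_vs_sort_demo.    " hypothetical report name

TYPES: BEGIN OF ty_line,
         key1 TYPE i,
         text TYPE c LENGTH 10,
       END OF ty_line.

DATA: lt_sorted   TYPE SORTED TABLE OF ty_line WITH NON-UNIQUE KEY key1,
      lt_standard TYPE STANDARD TABLE OF ty_line,
      ls_line     TYPE ty_line.

DO 1000 TIMES.
  ls_line-key1 = sy-index MOD 100.
  ls_line-text = 'demo'.
  " Variant A: the sorted table keeps its order on every INSERT
  INSERT ls_line INTO TABLE lt_sorted.
  " Variant B: append to a standard table now, sort once afterwards
  APPEND ls_line TO lt_standard.
ENDDO.

" The one-time sort for variant B
SORT lt_standard BY key1.

Broadly speaking, the sorted table pays a small price on each insert but is always ready for key reads, while APPEND plus a single SORT is usually cheapest when the table is filled once and only read afterwards; the benchmark in the "sorted, hashed tables" thread further down this page measures exactly this trade-off.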

Hi Kevin,
follow, for example, more or less the code given in this thread: Functionality to dynamically sort tableview columns, and implement a <i>compare</i> method corresponding to your needs (for example using the <i>compareToIgnoreCase</i> method of <i>String</i>).
Hope it helps
Detlev

Similar Messages

  • Question about sorting files in media library

    Hey,
    I'm Sam Hoste from Belgium and I have a question about sorting music files in my media library in iTunes.
    My music collection mainly consists of complete albums rather than individual songs. The problem is that I cannot play the songs of an album in the same order as on the album's tracklist in the iTunes media library.
    I also take very good care of the ID3 tags of my mp3 files, but that doesn't help.
    For example, if I had an album with the following tracklist,
    01. Artist - Song 1
    02. Artist - Song 2
    03. Artist - Song 3
    04. Artist ft Otherartist - Song 4
    05. Artist ft Otherartist - Song 5
    06. Artist - Song 6
    07. Artist - Song 7,
    every music file would have an ID3 tag for tracknumber (1, 2, 3, ...), an ID3 tag for artist ("Artist" or "Artist ft Otherartist"), an ID3 tag for title (Song 1, ...) and an ID3 tag for album ("Albumname").
    When I load this album in my media library in iTunes, I cannot get them sorted by album and by tracknumber on that album. When I sort by artist I get this list:
    01. Artist - Song 1
    02. Artist - Song 2
    03. Artist - Song 3
    06. Artist - Song 6
    07. Artist - Song 7
    04. Artist ft Otherartist - Song 4
    05. Artist ft Otherartist - Song 5.
    That is to be expected, but even if I sort by album I get the same list:
    01. Artist - Song 1
    02. Artist - Song 2
    03. Artist - Song 3
    06. Artist - Song 6
    07. Artist - Song 7
    04. Artist ft Otherartist - Song 4
    05. Artist ft Otherartist - Song 5.
    So the songs aren't in the same order as they are supposed to be on the CD, and I would really like my albums to play in the same order as the tracklist on the CD.
    I think the problem is that when you sort by "album" in the iTunes media library, iTunes sorts first by album name, second by artist name, and third by track number, instead of sorting first by album name and then by track number rather than by artist.
    My question: is there a way to make iTunes sort by album name and then by track number instead of artist, so I can play my albums in the same order as on the CD? Or is there another solution for this issue?
    Thanks and kind regards,
    Sam Hoste

    See my previous post on Grouping Tracks Into Albums, in particular the topics Use an album friendly view and
    Tracks out of sequence.
    tt2

  • Question about Update Tables

    Hello Gurus,
    I have a question about "update table" entries. I read somewhere that an entry in the update table is made at the time of the OLTP transaction. Is this correct? If so, does this happen in a V1 update or a V2 update? Please clarify. Similarly, an entry in the "extraction queue" (in case you are using "queued delta") is made in the V1 update. I just want to get clarification on both these methods. Any help in this matter is highly appreciated.
    Thanks,
    Sreekanth

    Hi
    Update tables are temporary tables that are processed and emptied by the V3 job. The update table works like a buffer: it gets filled after the application tables and statistics tables have been updated through the V1/V2 update. We can bypass this step by using the other delta methods.
    M Kalpana

  • Question about sorted, hashed tables, mindset when using OO concepts...

    Hello experts,
    I just want to make sure my understanding of sorted and hashed tables is correct. Please give tips and suggestions.
    In one of my reports, I declared a structure and an itab.
    TYPES: BEGIN OF t_mkpf,
            mblnr           LIKE mkpf-mblnr,
            mjahr           LIKE mkpf-mjahr,
            budat           LIKE mkpf-budat,
            xblnr(10)       TYPE c,
            tcode2          LIKE mkpf-tcode2,
            cputm           LIKE mkpf-cputm,
            blart           LIKE mkpf-blart,
          END OF t_mkpf.
    DATA: it_mkpf TYPE SORTED TABLE OF t_mkpf
                  WITH NON-UNIQUE KEY mblnr mjahr
                  WITH HEADER LINE.
    Now, I declared it as a sorted table with the non-unique key MBLNR and MJAHR. Suppose I have 1000 records in my itab:
    1. How will it search for a particular record?
    2. Is it faster than sorting a standard table and then reading it using binary search?
    3. How do I use a hashed table effectively? Let's say I want to use a hashed table instead of the sorted table in my example above.
    4. I am currently practicing ABAP Objects, and my problem is that my mindset when programming a report is still the procedural one. How does one use ABAP Objects concepts effectively?
    Again, thank you guys and have a nice day!

    Hi Viray,
    <b>The different ways to fill an Internal Table:</b>
    <b>append&sort</b>
    This is the simplest one. I do appends on a standard table and then a sort.
    data: lt_tab type standard table of ...
    do n times.
    ls_line = ...
    append ls_line to lt_tab.
    enddo.
    sort lt_tab.
    The point here is the fast appends and the slow sort, so it will be interesting to see how this compares with the following variants.
    <b>read binary search & insert index sy-tabix</b>
    Here I also use a standard table, but I do a READ ... BINARY SEARCH first to find the correct insert index, so the table is kept sorted while it is filled.
    data: lt_tab type standard table of ...
    do n times.
    ls_line = ...
    read table lt_tab transporting no fields with key ... binary search.
    if sy-subrc <> 0.
      insert ls_line into lt_tab index sy-tabix.
    endif.
    enddo.
    <b>sorted table with non-unique key</b>
    Here I used a sorted table with a non-unique key and did inserts...
    data: lt_tab type sorted table of ... with non-unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    <b>sorted table with unique key</b>
    The coding is the same, except that the sorted table has a unique key.
    data: lt_tab type sorted table of ... with unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
    <b>hashed table</b>
    The last one is the hashed table (always with unique key).
    data: lt_tab type hashed table of ... with unique key ...
    do n times.
    ls_line = ...
    insert ls_line into table lt_tab.
    enddo.
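    Before the full test program, a small self-contained sketch (type and field names are made up for illustration) of the read statements these variants rely on:
    types: begin of ty_rec,
             key1 type i,
             key2 type i,
             text type string,
           end of ty_rec.
    data: lt_std  type standard table of ty_rec,
          lt_sort type sorted table of ty_rec with non-unique key key1 key2,
          lt_hash type hashed table of ty_rec with unique key key1 key2,
          ls_rec  type ty_rec.
    ls_rec-key1 = 1. ls_rec-key2 = 2. ls_rec-text = 'demo'.
    append ls_rec to lt_std.
    insert ls_rec into table lt_sort.
    insert ls_rec into table lt_hash.
    " standard table: binary search needs a prior SORT by the same fields
    sort lt_std by key1 key2.
    read table lt_std into ls_rec
      with key key1 = 1 key2 = 2 binary search.
    " sorted table: key access via the table key, no SORT statement needed
    read table lt_sort into ls_rec
      with table key key1 = 1 key2 = 2.
    " hashed table: constant-time access, but only with the full unique key
    read table lt_hash into ls_rec
      with table key key1 = 1 key2 = 2.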
    <b>You Can use this Program to Test:</b>
    types:
      begin of local_long,
        key1 type char10,
        key2 type char10,
        data1 type char10,
        data2 type char10,
        data3 type i,
        data4 type sydatum,
        data5 type numc10,
        data6 type char32,
        data7 type i,
        data8 type sydatum,
        data9 type numc10,
        dataa type char32,
        datab type i,
        datac type sydatum,
        datad type numc10,
        datae type char32,
        dataf type i,
        datag type sydatum,
        datah type numc10,
        datai type char32,
        dataj type i,
        datak type sydatum,
        datal type numc10,
        datam type char32,
        datan type i,
        datao type sydatum,
        datap type numc10,
        dataq type char32,
        datar type i,
        datas type sydatum,
        datat type numc10,
        datau type char32,
        datav type i,
        dataw type sydatum,
        datax type numc10,
        datay type char32,
        dataz type i,
        data11 type numc10,
        data21 type char32,
        data31 type i,
        data41 type sydatum,
        data51 type numc10,
        data61 type char32,
        data71 type i,
        data81 type sydatum,
        data91 type numc10,
        dataa1 type char32,
        datab1 type i,
        datac1 type sydatum,
        datad1 type numc10,
        datae1 type char32,
        dataf1 type i,
        datag1 type sydatum,
        datah1 type numc10,
        datai1 type char32,
        dataj1 type i,
        datak1 type sydatum,
        datal1 type numc10,
        datam1 type char32,
        datan1 type i,
        datao1 type sydatum,
        datap1 type numc10,
        dataq1 type char32,
        datar1 type i,
        datas1 type sydatum,
        datat1 type numc10,
        datau1 type char32,
        datav1 type i,
        dataw1 type sydatum,
        datax1 type numc10,
        datay1 type char32,
        dataz1 type i,
      end of local_long.
    data:
      ls_long type local_long,
      lt_binary type standard table of local_long,
      lt_sort_u type sorted table of local_long with unique key key1 key2,
      lt_sort_n type sorted table of local_long with non-unique key key1 key2,
      lt_hash_u type hashed table of local_long with unique key key1 key2,
      lt_apsort type standard table of local_long.
    field-symbols:
      <ls_long> type local_long.
    parameters:
      min1 type i default 1,
      max1 type i default 1000,
      min2 type i default 1,
      max2 type i default 1000,
      i1 type i default 100,
      i2 type i default 200,
      i3 type i default 300,
      i4 type i default 400,
      i5 type i default 500,
      i6 type i default 600,
      i7 type i default 700,
      i8 type i default 800,
      i9 type i default 900,
      fax type i default 1000.
    types:
      begin of measure,
        what(10) type c,
        size(6) type c,
        time type i,
        lines type i,
        reads type i,
        readb type i,
        fax_s type i,
        fax_b type i,
        fax(6) type c,
        iter type i,
      end of measure.
    data:
      lt_time type standard table of measure,
      lt_meantimes type standard table of measure,
      ls_time type measure,
      lv_method(7) type c,
      lv_i1 type char10,
      lv_i2 type char10,
      lv_f type f,
      lv_start type i,
      lv_end type i,
      lv_normal type i,
      lv_size type i,
      lv_order type i,
      lo_rnd1 type ref to cl_abap_random_int,
      lo_rnd2 type ref to cl_abap_random_int.
    get run time field lv_start.
    lo_rnd1 = cl_abap_random_int=>create( seed = lv_start min = min1 max = max1 ).
    add 1 to lv_start.
    lo_rnd2 = cl_abap_random_int=>create( seed = lv_start min = min2 max = max2 ).
    ls_time-fax = fax.
    do 5 times.
      do 9 times.
        case sy-index.
          when 1. lv_size = i1.
          when 2. lv_size = i2.
          when 3. lv_size = i3.
          when 4. lv_size = i4.
          when 5. lv_size = i5.
          when 6. lv_size = i6.
          when 7. lv_size = i7.
          when 8. lv_size = i8.
          when 9. lv_size = i9.
        endcase.
        if lv_size > 0.
          ls_time-iter = 1.
          clear lt_apsort.
          ls_time-what = 'APSORT'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            append ls_long to lt_apsort.
          enddo.
          sort lt_apsort by key1 key2.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_apsort ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_apsort
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_binary.
          ls_time-what = 'BINARY'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            read table lt_binary
              transporting no fields
              with key key1 = ls_long-key1
                       key2 = ls_long-key2
              binary search.
            if sy-subrc <> 0.
              insert ls_long into lt_binary index sy-tabix.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_binary ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key1 = lv_i1
                       key2 = lv_i2
              binary search.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_binary
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_sort_n.
          ls_time-what = 'SORT_N'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_sort_n.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_sort_n ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_n
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_sort_u.
          ls_time-what = 'SORT_U'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_sort_u.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_sort_u ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_sort_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
          clear lt_hash_u.
          ls_time-what = 'HASH_U'.
          ls_time-size = lv_size.
          get run time field lv_start.
          do lv_size times.
            perform fill.
            insert ls_long into table lt_hash_u.
          enddo.
          get run time field lv_end.
          ls_time-time = lv_end - lv_start.
          ls_time-reads = 0.
          ls_time-readb = 0.
          ls_time-lines = lines( lt_hash_u ).
          get run time field lv_start.
          do.
            add 1 to ls_time-readb.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do.
            add 1 to ls_time-reads.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data11 = sy-index.
            endif.
            get run time field lv_end.
            subtract lv_start from lv_end.
            if lv_end >= ls_time-time.
              exit.
            endif.
          enddo.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with table key key1 = lv_i1
                             key2 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_b = lv_end - lv_start.
          get run time field lv_start.
          do fax times.
            lv_i1 = lo_rnd1->get_next( ).
            lv_i2 = lo_rnd2->get_next( ).
            read table lt_hash_u
              assigning <ls_long>
              with key key2 = lv_i1
                       key1 = lv_i2.
            if sy-subrc = 0.
              <ls_long>-data21 = sy-index.
            endif.
          enddo.
          get run time field lv_end.
          ls_time-fax_s = lv_end - lv_start.
          collect ls_time into lt_time.
        endif.
      enddo.
    enddo.
    sort lt_time by what size.
    write: / ' type      | size   | time        | tab-size    | directread  | std read    | time direct | time std read'.
    write: / sy-uline.
    loop at lt_time into ls_time.
      write: / ls_time-what, '|', ls_time-size, '|', ls_time-time, '|', ls_time-lines, '|', ls_time-readb, '|', ls_time-reads, '|', ls_time-fax_b, '|', ls_time-fax_s.
    endloop.
    form fill.
      lv_i1 = lo_rnd1->get_next( ).
      lv_i2 = lo_rnd2->get_next( ).
      ls_long-key1 = lv_i1.
      ls_long-key2 = lv_i2.
      ls_long-data1 = lv_i1.
      ls_long-data2 = lv_i2.
      ls_long-data3 = lv_i1.
      ls_long-data4 = sy-datum + lv_i1.
      ls_long-data5 = lv_i1.
      ls_long-data6 = lv_i1.
      ls_long-data7 = lv_i1.
      ls_long-data8 = sy-datum + lv_i1.
      ls_long-data9 = lv_i1.
      ls_long-dataa = lv_i1.
      ls_long-datab = lv_i1.
      ls_long-datac = sy-datum + lv_i1.
      ls_long-datad = lv_i1.
      ls_long-datae = lv_i1.
      ls_long-dataf = lv_i1.
      ls_long-datag = sy-datum + lv_i1.
      ls_long-datah = lv_i1.
      ls_long-datai = lv_i1.
      ls_long-dataj = lv_i1.
      ls_long-datak = sy-datum + lv_i1.
      ls_long-datal = lv_i1.
      ls_long-datam = lv_i1.
      ls_long-datan = sy-datum + lv_i1.
      ls_long-datao = lv_i1.
      ls_long-datap = lv_i1.
      ls_long-dataq = lv_i1.
      ls_long-datar = sy-datum + lv_i1.
      ls_long-datas = lv_i1.
      ls_long-datat = lv_i1.
      ls_long-datau = lv_i1.
      ls_long-datav = sy-datum + lv_i1.
      ls_long-dataw = lv_i1.
      ls_long-datax = lv_i1.
      ls_long-datay = lv_i1.
      ls_long-dataz = sy-datum + lv_i1.
      ls_long-data11 = lv_i1.
      ls_long-data21 = lv_i1.
      ls_long-data31 = lv_i1.
      ls_long-data41 = sy-datum + lv_i1.
      ls_long-data51 = lv_i1.
      ls_long-data61 = lv_i1.
      ls_long-data71 = lv_i1.
      ls_long-data81 = sy-datum + lv_i1.
      ls_long-data91 = lv_i1.
      ls_long-dataa1 = lv_i1.
      ls_long-datab1 = lv_i1.
      ls_long-datac1 = sy-datum + lv_i1.
      ls_long-datad1 = lv_i1.
      ls_long-datae1 = lv_i1.
      ls_long-dataf1 = lv_i1.
      ls_long-datag1 = sy-datum + lv_i1.
      ls_long-datah1 = lv_i1.
      ls_long-datai1 = lv_i1.
      ls_long-dataj1 = lv_i1.
      ls_long-datak1 = sy-datum + lv_i1.
      ls_long-datal1 = lv_i1.
      ls_long-datam1 = lv_i1.
      ls_long-datan1 = sy-datum + lv_i1.
      ls_long-datao1 = lv_i1.
      ls_long-datap1 = lv_i1.
      ls_long-dataq1 = lv_i1.
      ls_long-datar1 = sy-datum + lv_i1.
      ls_long-datas1 = lv_i1.
      ls_long-datat1 = lv_i1.
      ls_long-datau1 = lv_i1.
      ls_long-datav1 = sy-datum + lv_i1.
      ls_long-dataw1 = lv_i1.
      ls_long-datax1 = lv_i1.
      ls_long-datay1 = lv_i1.
      ls_long-dataz1 = sy-datum + lv_i1.
    endform.
    Thanks & Regards,
    YJR.

  • SSMA (Access - SQL 2012) newbie question about new tables

    I have successfully run SSMA and moved the back end tables of an Access project to SQL Express 2012.
    Some questions:
    - if I add a new table to the SQL backend, how do I get it to appear in the Access project?
    - if I want to move the whole thing to another computer, how do I go about it?
    - do I always have to have the SSMA app installed?
    - I can't see an ODBC datasource on the machine
    Thanks for some help
    CarolChi

    Thanks, I was not clear in my questions:
    1. Given that from now on the tables should all be in SQL, I think it would be better to create new tables there in SQL rather than create them in Access and keep moving them to SQL with SSMA. So if I create a new table in SQL, how can I see it in Access? Or can I easily create a linked table in Access?
    The tables created by SSMA have this property:
    If I create a new table in SQL and link it, I get a more classic ODBC connection:
    2. I am happy with SQL backup/restore and attach/detach. But how does Access know where the SQL database has gone?
    3. I thought the migration was a one-time task, but from what you say it is an ongoing "linking" app. So would I be better off exporting my tables and re-linking them to get something that can be moved to another network or computer?
    4. The SSMA looks very much like an ODBC connection.
    This is a new project so I will have to make lots of new tables and also move the whole thing around to different computers, on different networks.
    CarolChi

  • Question about multiple tables and formulas in numbers

    I was wondering about having a formula that affects one table based on data in another table. I have a spreadsheet that I created for work. Column A is a list of addresses; columns B-ZZ are a list of "billing codes" that get applied to the service work done for that address. It's kind of difficult to look at this large spreadsheet on my iPhone and edit it on the go. Is there a way to have another column, possibly on sheet 2, where I type in these "billing codes" and they would fill an x or some other character in the cell that corresponds with that billing code on that address line? To be as plain as possible: I want to enter data in one cell (on another sheet if possible) and have it "mark" another cell in response to the specific data in that "entry cell". I'm thinking this way because, instead of scrolling back and forth on the sheet on my iPhone to "mark" the boxes, I could just navigate to sheet 2, enter the data or "billing codes", and they would "X" the cell that matches up with the code column and address row. Each address row would have a separate "entry field" in sheet 2, to make the formula easier. Thanks for any help.

    Tom,
    That's a lot of columns for just a table of marks that reflect an input. Sure, you could have each cell in that giant table check whether some corresponding cell contains certain text, but that document is going to be slower than you might be able to tolerate.
    Jerry

  • Question about selection table analysis

    Dear All,
    My last post was closed by the moderator, so I would like to ask a different question. Apologies if my question is trivial.
    1. There is a selection table seltab with one field of type I, with the following contents (SIGN, OPTION, LOW, HIGH):
    I     GT     3     0
    I     NE     8     0
    The literature, e.g. ABAP Objects, says that control statements like IF or CASE can be used with the logical expression IN.
    Now I am checking if the variable test fulfills the conditions defined by the selection table.
    DO 10 TIMES.
      IF test IN seltab.
        write test.
      ENDIF.
      test = test + 1.
    ENDDO.
    As a result I get a full list of values.
    1 2 3 4 5 6 7 8 9 10
    2. Now, if the table contains
    I     GT     3     0
    I get
    4 5 6 7 8 10 as expected
    3. If the table seltab contains
    I     NE     3     0
    the result also seems to be ok
    1 2 4 5 6 7 8 9 10
    The final question is: is it true that the logical expression IN cannot be used with IF for more complex data comparisons?
    Thanks,
    Grzegorz

    Well, the contents
    I GT 3 0
    I NE 8 0
    I read as: the variable fulfills the condition if it is greater than 3 and not equal to 8, i.e.
    test > 3 and
    test <> 8
    which should give, for the example above, the list
    4 5 6 7 9 10
    What is wrong with my analysis?
    Regards,
    Grzegorz
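    For anyone who wants to reproduce this, a minimal sketch of filling a ranges table like the one described and testing it with IN (the values are taken from the post above):
    DATA test   TYPE i VALUE 1.
    DATA seltab TYPE RANGE OF i.
    DATA ls_sel LIKE LINE OF seltab.
    " First row of the selection table: I GT 3
    ls_sel-sign   = 'I'.
    ls_sel-option = 'GT'.
    ls_sel-low    = 3.
    APPEND ls_sel TO seltab.
    " Second row: I NE 8
    ls_sel-option = 'NE'.
    ls_sel-low    = 8.
    APPEND ls_sel TO seltab.
    DO 10 TIMES.
      IF test IN seltab.
        WRITE test.
      ENDIF.
      test = test + 1.
    ENDDO.
    As far as I know, rows with SIGN = 'I' are combined with OR (the union of the individual conditions), which is why the two rows above select every value rather than the intersection described in the analysis.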

  • Question about sort after commit

    Hello!
    I have a view object. I specified some sorting in its "order by clause" (on the Query tab of the view object). I created a JSF page and dropped this view object onto it as a table. I also dropped a Commit operation as a button. When I change some value in a row and press the Commit button, the "order by clause" of the view object is not applied and I get unsorted data. When I press Commit a second time, the "order by clause" is applied. Can you suggest how to work around this "Commit" behavior?
    JDeveloper version 11.1.2.2.0
    Thanks for your answers.
    Regards, Stanislav

    Hi,
    Perform an Execute after the commit. You can do this by executing both actions in the actionListener of the commit button. Search in this forum for how to execute an action binding from a backing bean.
    -Arun

  • Question about GLPCA table

    hi gurus,
    I need to find a relation between profit center and G/L account to do some reporting on charges by profit center.
    Table GLPCA has the fields I need, but I have a question: are all charges recorded in this table (GLPCA)? In other words, are all vendor invoices present in this table?
    Thank you.

    Only those entries to which a profit center is assigned.
    E.g. for expenses or revenue where a profit center is assigned directly, only the expense line item (i.e. Dr) or the revenue line item (i.e. Cr) will be in GLPCA.
    In BSEG, however, you can find both entries (Dr & Cr) of the same document, which you cannot find in GLPCA.
    Hope you understood; else revert.
    Reg
    Sujai
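    As a rough illustration of the kind of report described above, here is a hedged sketch of an aggregation over GLPCA. It assumes the usual EC-PCA line item fields RPRCTR, RACCT, HSL and RYEAR; check the field list in your own system before using it.
    TYPES: BEGIN OF ty_sum,
             rprctr TYPE glpca-rprctr,   " profit center
             racct  TYPE glpca-racct,    " G/L account
             hsl    TYPE glpca-hsl,      " amount in company code currency
           END OF ty_sum.
    DATA lt_sum TYPE STANDARD TABLE OF ty_sum.
    " Charges per profit center and account for one fiscal year
    SELECT rprctr racct SUM( hsl )
      FROM glpca
      INTO TABLE lt_sum
      WHERE ryear = '2012'
      GROUP BY rprctr racct.
    Whether all vendor-invoice expense lines actually show up here still depends on the profit center assignment, as explained in the reply above.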

  • Question about temporary tables and imports.

    Hello everybody.
    Interesting one, this. We have an application that uses global temporary tables. We refresh live to test using an import of the application schema. I noticed when doing this yesterday that the temporary tables were being recreated in test as permanent rather than temporary.
    Is there a reason for this? Has anyone come across this before? Is there a way around it (apart from manual checking)?
    Many thanks
    Rup

    Could you specify how you found out that it is coming in as permanent?
    I believe exp/imp will export and import it as a temporary table. I have just done a simple test to check, and it works fine.
    Here is my test log:
    SQL> connect scott
    Enter password:
    Connected.
    SQL> CREATE GLOBAL TEMPORARY TABLE test_global
      2          (startdate DATE,
      3           enddate DATE,
      4           class CHAR(20))
      5        ON COMMIT DELETE ROWS;
    Table created.
    SQL>  select table_name,temporary from user_tables where table_name='TEST_GLOBAL';
    TABLE_NAME                     T
    TEST_GLOBAL                    Y
    SQL> select table_name,temporary from user_tables;
    TABLE_NAME                     T
    DEPT                           N
    EMP                            N
    BONUS                          N
    SALGRADE                       N
    TEST_GLOBAL                    Y
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    C:\Documents\dbmsdirect\Testing\global>exp
    Export: Release 10.2.0.2.0 - Production on Wed Feb 20 12:18:45 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Username: scott
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    Enter array fetch buffer size: 4096 >
    Export file: EXPDAT.DMP > test_global
    (2)U(sers), or (3)T(ables): (2)U > t
    Export table data (yes/no): yes >
    Compress extents (yes/no): yes >
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    Table(T) or Partition(T:P) to be exported: (RETURN to quit) > test_global
    . . exporting table                    TEST_GLOBAL
    Table(T) or Partition(T:P) to be exported: (RETURN to quit) >
    Export terminated successfully without warnings.
    C:\Documents\dbmsdirect\Testing\global>sqlplus /nolog
    SQL*Plus: Release 10.2.0.2.0 - Production on Wed Feb 20 12:19:50 2008
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    SQL> connect scott
    Enter password:
    Connected.
    SQL> drop table test_global purge;
    Table dropped.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    C:\Documents\dbmsdirect\Testing\global>imp
    Import: Release 10.2.0.2.0 - Production on Wed Feb 20 12:20:19 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Username: scott
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    Import file: EXPDAT.DMP > test_global.DMP
    Enter insert buffer size (minimum is 8192) 30720>
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    List contents of import file only (yes/no): no >
    Ignore create error due to object existence (yes/no): no >
    Import grants (yes/no): yes >
    Import table data (yes/no): yes >
    Import entire export file (yes/no): no > yes
    . importing SCOTT's objects into SCOTT
    . importing SCOTT's objects into SCOTT
    Import terminated successfully without warnings.
    C:\Documents\dbmsdirect\Testing\global>sqlplus /nolog
    SQL*Plus: Release 10.2.0.2.0 - Production on Wed Feb 20 12:22:44 2008
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    SQL> connect scott
    Enter password:
    Connected.
    SQL>
    SQL> select table_name,temporary from user_tables;
    TABLE_NAME                     T
    DEPT                           N
    EMP                            N
    BONUS                          N
    SALGRADE                       N
    TEST_GLOBAL                    Y
    SQL> insert into TEST_GLOBAL values (sysdate,sysdate,'test1');
    1 row created.
    SQL> select * from TEST_GLOBAL;
    STARTDATE ENDDATE   CLASS
    20-FEB-08 20-FEB-08 test1
    SQL> commit;
    Commit complete.
    SQL> select * from TEST_GLOBAL;
    no rows selected
    SQL>  select table_name,temporary from user_tables where table_name='TEST_GLOBAL';
    TABLE_NAME                     T
    TEST_GLOBAL                    Y
    SQL>

  • Question about multiple table JOIN syntax

    Good morning,
    I'm trying to join three tables and getting an error. Here is the query:
    with
      p (messageid, parentmessageid, subject) as (
        select 2001, null, 'xyz'    from dual union all
        select 2002, 2001, 'ngh'    from dual union all
        select 2003, 2001, 'nghj'   from dual union all
        select 2004, 2001, 'pppp'   from dual union all
        select 2005, null, 'hhhhh'  from dual union all
        select 2006, 2005, 'hhhhj'  from dual
      ),
      m (thread, messageid) as (
        select 1004, 2001           from dual union all
        select 1005, 2006           from dual
      )
    select p2.messageid, p2.parentmessageid, p2.subject
      from (p p2 left outer join m m2
                              on (p2.parentmessageid = m2.messageid))
           join m m3
             on m3.messageid = p2.messageid
    ;
    which gives an "ORA-00918: column ambiguously defined" error.
    I can't figure out why it gives that error. I fail to see the ambiguity.
    Thank you for helping,
    John.
    P.S: In case someone wants to get a good laugh, here is a "solution" that does not have any ambiguity:
    with
      parentchild (messageid, parentmessageid, subject) as (
        select 2001, null, 'xyz'    from dual union all
        select 2002, 2001, 'ngh'    from dual union all
        select 2003, 2001, 'nghj'   from dual union all
        select 2004, 2001, 'pppp'   from dual union all
        select 2005, null, 'hhhhh'  from dual union all
        select 2006, 2005, 'hhhhj'  from dual
      ),
      messagethread (thread, messageid) as (
        select 1004, 2001           from dual union all
        select 1005, 2006           from dual
      ),
      j1 as (
       select p.messageid, p.parentmessageid, p.subject, m.thread
         from parentchild                     p
              left outer join messagethread   m
                           on p.parentmessageid = m.messageid
      ),
      j2 as (
       select j1.messageid, j1.parentmessageid, j1.subject, j1.thread, m.thread t2
         from j1 left join messagethread m
           on j1.messageid = m.messageid
      )
    select j2.messageid, j2.parentmessageid, j2.subject, nvl(j2.thread, t2)
      from j2
    order by messageid
    ;

    440bx - 11gR2 wrote:
    Hi Sven,
    >
    I guess the explain plan would show that there is some strange query rewrite going on, where the outer join is added later than needed and therefore the columns no longer match as they did in the original query.
    >
    Is there a "hint" that could be used to test that hypothesis?
    John.
    I tried NO_QUERY_TRANSFORMATION and MATERIALIZE but the error was still there. So it might not be a problem of rewriting the query.
    However, I checked the explain plan for a similar query which does work.
    sql> with
      2    p as ( select 2002 messageid, 2001 parentmessageid, 'ngh' subject   from dual)
      3   ,m as ( select 1004 threadx, 2001 messageid2 from dual )
      4   ,m3 as ( select * from m)
      5  select *
      6  from p p2
      7  left join m m2 on p2.parentmessageid = m2.messageid2
      8  join m3 on m3.messageid2 = p2.messageid
      9  ;   
    no rows selected
    Execution Plan
    Plan hash value: 4005120224
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |     1 |     2 |     0   (0)|          |
    |*  1 |  FILTER             |      |       |       |            |          |
    |   2 |   NESTED LOOPS OUTER|      |     1 |     2 |     6   (0)| 00:00:01 |
    |   3 |    NESTED LOOPS     |      |     1 |       |     4   (0)| 00:00:01 |
    |   4 |     FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
    |   5 |     FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
    |*  6 |    TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(NULL IS NOT NULL)
       6 - filter(NVL2(ROWID(+),2001,NULL)=2001)
    Statistics
              0  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            357  bytes sent via SQL*Net to client
            231  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    sql>
    I remember some other thread here where a very similar filter criterion was also mentioned in the context of a "bug".

  • Question about sort technique

    We have a report that shows a list of contacts, and we have a sub-report that displays one field, max(history completed date), so we can see the last time the contact was talked to. The issue is: we want a sort parameter that sorts on the max(history completed date) field. How would we do this?
    1) Pass the field value from the sub-report to the main report?
    2) Don't create a sub-report; create the field value in the main report using SQL?
    Please comment on the approaches above or any other methods that I might not have thought of.

    Better yet, here's a sample of what your query should look like.
    Insert it into 'Add Command'
    select contact_names,
    (select max(lasttime) from time.table b where a.contactID=b.contactID)
    from table.contacts a
    Enjoy,
    Zack H.

  • Question about dual table

    Hi,
    Can anyone clarify this for me?
    Can I insert rows into the dual table (as a programmer)? I am using Oracle 9i. If not, can a DBA do it? Say I have inserted 100 records into the dual table: will my applications be hampered in any way after inserting those records?
    Regards

    Or perhaps this:
    SQL> SELECT * FROM
      2  ( select rownum from dual connect by rownum <= 10 );
        ROWNUM
             1
             2
             3
             4
             5
             6
             7
             8
             9
            10
    10 rows selected.
    In 9i at least you most certainly can insert into sys.dual, which is a regular table in the SYS schema, although you deserve all you get if you do so:
    /Users/williamr: su - oracle
    Password:
    /Users/oracle: sqlplus '/ as sysdba'
    SQL*Plus: Release 9.2.0.1.0 - Developer's Release on Fri Mar 10 19:21:45 2006
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Developer's Release
    With the Partitioning and Oracle Data Mining options
    SQL> SELECT COUNT(*) FROM sys.dual;
      COUNT(*)
             1
    1 row selected.
    SQL> INSERT INTO dual VALUES ('Z');
    1 row created.
    SQL> SELECT COUNT(*) FROM sys.dual;
      COUNT(*)
             2
    1 row selected.
    SQL> set serverout on
    SQL> EXEC DBMS_OUTPUT.PUT_LINE(user);
    BEGIN DBMS_OUTPUT.PUT_LINE(user); END;
    ERROR at line 1:
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at "SYS.STANDARD", line 264
    ORA-06512: at line 1
    SQL> roll
    Rollback complete.
    SQL> EXEC DBMS_OUTPUT.PUT_LINE(user);
    SYS
    PL/SQL procedure successfully completed.
    SQL>
    So it is technically possible, but monumentally inadvisable, to mess about with sys.dual.
    For a rather surreal suggestion related to this scenario, see:
    oracle-wtf.blogspot.com/2006/03/create-your-own-dual-table.html

  • Question about af:table

    Hi all, can you show me the way to turn off the "show all" option in af:table (when we use paging)? Thanks a lot!
    Harry.

    hi Harry
    I'm not sure what your question is, but the af:table component in the SRList.jspx page of the SRDemoSample application uses "paging".
      <af:table value="#{bindings.findServiceRequests1.collectionModel}"
             binding="#{backing_SRList.srTable}" var="row"
             rows="#{bindings.findServiceRequests1.rangeSize}"
             first="#{bindings.findServiceRequests1.rangeStart}"
             emptyText="#{bindings.findServiceRequests1.viewable ? 'No rows yet.' : 'Access Denied.'}"
             selectionState="#{bindings.findServiceRequests1.collectionModel.selectedRow}"
             selectionListener="#{bindings.findServiceRequests1.collectionModel.makeCurrent}">
      <!-- ... -->
      </af:table>
    (Tip: you can use "Your Control Panel" to make your name visible in forum posts.)
    success
    Jan Vervecken

  • Simple beginner question about default tables (LOGMNR, etc.)

    Hi all, I just installed SQL Developer and the Express server. I have worked with other SQL databases and with remote Oracle databases. But never been a DBA for an Oracle DB. I tried searching on LOGMNR and AQ$_, but couldn't find anything, hence this question.
    If there is some online documentation that covers these particular tables and my question, then a pointer to that would be appreciated.
    Anyway, I created a connection per the instructions and it seemed to work. But when I look at the tables list view, I see a whole slew of tables. I assume these are all housekeeping/metadata tables (e.g. information_schema in MySQL). Is this correct?
    Also, short of manually creating a filter for each of the name patterns ("%$%"help", "LOGMNR%", "SQLPLUS%", etc.), is there an easy way to ignore these tables and just see the tables I want to create for testing purposes?
    And I am curious why these all show up by default. Could it be that I am expecting something because of my experience with other databases and am actually doing things the "wrong" way? Or is this just because I am using the Express server? Or is there another (very good) reason for these tables to exist?
    Thanks,
    justme

    The complete Oracle documentation is at http://tahiti.oracle.com. There is a lot of it, but Log Miner and Advanced Queuing have their own manuals. You should also consider starting with the Concepts manual, followed by one of the "2-day" tutorials such as "2-Day DBA".
    What user are you connecting as? It sounds as if you are connecting as SYS or SYSTEM if you can see the dictionary tables.
    You should create a separate user to connect as.
    (create user myuser identified by mypassword;
    grant connect,resource to myuser;)
