Question about sorted, hashed tables, mindset when using OO concepts...

Hello experts,
I just want to make sure my understanding of sorted and hashed tables is correct. Please give tips and suggestions.
In one of my reports, I declared a structure and an itab.
TYPES: BEGIN OF t_mkpf,
        mblnr           LIKE mkpf-mblnr,
        mjahr           LIKE mkpf-mjahr,
        budat           LIKE mkpf-budat,
        xblnr(10)       TYPE c,
        tcode2          LIKE mkpf-tcode2,
        cputm           LIKE mkpf-cputm,
        blart           LIKE mkpf-blart,
      END OF t_mkpf.
DATA: it_mkpf       TYPE SORTED TABLE OF t_mkpf
                         WITH NON-UNIQUE KEY mblnr mjahr
                         WITH HEADER LINE.
1. I declared it as a sorted table with a non-unique key of MBLNR and MJAHR. Suppose I have 1000 records in my itab: how will it search for a particular record?
2. Is it faster than sorting a standard table and then reading it with BINARY SEARCH?
3. How do I use a hashed table effectively? Let's say I want to use a hashed table instead of the sorted table in my example above.
4. I am currently practicing ABAP Objects, and my problem is that my mindset when programming a report is still a procedural one. How does one use ABAP Objects concepts effectively?
Again, thank you guys and have a nice day!

Hi Viray,
<b>The different ways to fill an Internal Table:</b>
<b>append&sort</b>
This is the simplest one. I do appends on a standard table and then a sort.
data: lt_tab type standard table of ...
do n times.
ls_line = ...
append ls_line to lt_tab.
enddo.
sort lt_tab.
The appends here are fast but the final sort is slow - so it will be interesting to see how this compares with the following variants.
<b>read binary search & insert index sy-tabix</b>
Here I also use a standard table, but I first do a READ ... BINARY SEARCH to find the correct insert index, so the table is kept sorted while it is filled.
data: lt_tab type standard table of ...
do n times.
ls_line = ...
read table lt_tab transporting no fields with key ... binary search.
if sy-subrc <> 0.
  insert ls_line into lt_tab index sy-tabix.
endif.
enddo.
<b>sorted table with non-unique key</b>
Here I used a sorted table with a non-unique key and did inserts...
data: lt_tab type sorted table of ... with non-unique key ...
do n times.
ls_line = ...
insert ls_line into table lt_tab.
enddo.
<b>sorted table with unique key</b>
The coding is the same, except that the sorted table is declared with a unique key.
data: lt_tab type sorted table of ... with unique key ...
do n times.
ls_line = ...
insert ls_line into table lt_tab.
enddo.
<b>hashed table</b>
The last one is the hashed table (always with unique key).
data: lt_tab type hashed table of ... with unique key ...
do n times.
ls_line = ...
insert ls_line into table lt_tab.
enddo.
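<b>reading the tables by key</b>
To answer the read side of your questions: on a sorted table a key read is done internally with a binary search (no BINARY SEARCH addition is needed), and on a hashed table it is done with a hash function, so the access time stays constant no matter how many lines the table holds. A hashed table, however, has no index and can only be read with its full unique key. The following is only a minimal sketch with made-up field names and values, not taken from your report:
types: begin of ty_doc,
         mblnr(10) type c,
         mjahr(4)  type n,
         budat     type d,
       end of ty_doc.
data: lt_sorted type sorted table of ty_doc
                with non-unique key mblnr mjahr,
      lt_hashed type hashed table of ty_doc
                with unique key mblnr mjahr,
      ls_doc    type ty_doc.
* key read on the sorted table - internally a binary search
read table lt_sorted into ls_doc
     with table key mblnr = '4900000001' mjahr = '2006'.
* key read on the hashed table - constant time via the hash algorithm
read table lt_hashed into ls_doc
     with table key mblnr = '4900000001' mjahr = '2006'.
if sy-subrc = 0.
  write: / ls_doc-mblnr, ls_doc-budat.
endif.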
<b>You can use this program to test:</b>
types:
  begin of local_long,
    key1 type char10,
    key2 type char10,
    data1 type char10,
    data2 type char10,
    data3 type i,
    data4 type sydatum,
    data5 type numc10,
    data6 type char32,
    data7 type i,
    data8 type sydatum,
    data9 type numc10,
    dataa type char32,
    datab type i,
    datac type sydatum,
    datad type numc10,
    datae type char32,
    dataf type i,
    datag type sydatum,
    datah type numc10,
    datai type char32,
    dataj type i,
    datak type sydatum,
    datal type numc10,
    datam type char32,
    datan type i,
    datao type sydatum,
    datap type numc10,
    dataq type char32,
    datar type i,
    datas type sydatum,
    datat type numc10,
    datau type char32,
    datav type i,
    dataw type sydatum,
    datax type numc10,
    datay type char32,
    dataz type i,
    data11 type numc10,
    data21 type char32,
    data31 type i,
    data41 type sydatum,
    data51 type numc10,
    data61 type char32,
    data71 type i,
    data81 type sydatum,
    data91 type numc10,
    dataa1 type char32,
    datab1 type i,
    datac1 type sydatum,
    datad1 type numc10,
    datae1 type char32,
    dataf1 type i,
    datag1 type sydatum,
    datah1 type numc10,
    datai1 type char32,
    dataj1 type i,
    datak1 type sydatum,
    datal1 type numc10,
    datam1 type char32,
    datan1 type i,
    datao1 type sydatum,
    datap1 type numc10,
    dataq1 type char32,
    datar1 type i,
    datas1 type sydatum,
    datat1 type numc10,
    datau1 type char32,
    datav1 type i,
    dataw1 type sydatum,
    datax1 type numc10,
    datay1 type char32,
    dataz1 type i,
  end of local_long.
data:
  ls_long type local_long,
  lt_binary type standard table of local_long,
  lt_sort_u type sorted table of local_long with unique key key1 key2,
  lt_sort_n type sorted table of local_long with non-unique key key1 key2,
  lt_hash_u type hashed table of local_long with unique key key1 key2,
  lt_apsort type standard table of local_long.
field-symbols:
  <ls_long> type local_long.
parameters:
  min1 type i default 1,
  max1 type i default 1000,
  min2 type i default 1,
  max2 type i default 1000,
  i1 type i default 100,
  i2 type i default 200,
  i3 type i default 300,
  i4 type i default 400,
  i5 type i default 500,
  i6 type i default 600,
  i7 type i default 700,
  i8 type i default 800,
  i9 type i default 900,
  fax type i default 1000.
types:
  begin of measure,
    what(10) type c,
    size(6) type c,
    time type i,
    lines type i,
    reads type i,
    readb type i,
    fax_s type i,
    fax_b type i,
    fax(6) type c,
    iter type i,
  end of measure.
data:
  lt_time type standard table of measure,
  lt_meantimes type standard table of measure,
  ls_time type measure,
  lv_method(7) type c,
  lv_i1 type char10,
  lv_i2 type char10,
  lv_f type f,
  lv_start type i,
  lv_end type i,
  lv_normal type i,
  lv_size type i,
  lv_order type i,
  lo_rnd1 type ref to cl_abap_random_int,
  lo_rnd2 type ref to cl_abap_random_int.
get run time field lv_start.
lo_rnd1 = cl_abap_random_int=>create( seed = lv_start min = min1 max = max1 ).
add 1 to lv_start.
lo_rnd2 = cl_abap_random_int=>create( seed = lv_start min = min2 max = max2 ).
ls_time-fax = fax.
do 5 times.
  do 9 times.
    case sy-index.
      when 1. lv_size = i1.
      when 2. lv_size = i2.
      when 3. lv_size = i3.
      when 4. lv_size = i4.
      when 5. lv_size = i5.
      when 6. lv_size = i6.
      when 7. lv_size = i7.
      when 8. lv_size = i8.
      when 9. lv_size = i9.
    endcase.
    if lv_size > 0.
      ls_time-iter = 1.
      clear lt_apsort.
      ls_time-what = 'APSORT'.
      ls_time-size = lv_size.
      get run time field lv_start.
      do lv_size times.
        perform fill.
        append ls_long to lt_apsort.
      enddo.
      sort lt_apsort by key1 key2.
      get run time field lv_end.
      ls_time-time = lv_end - lv_start.
      ls_time-reads = 0.
      ls_time-readb = 0.
      ls_time-lines = lines( lt_apsort ).
      get run time field lv_start.
      do.
        add 1 to ls_time-readb.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_apsort
          assigning <ls_long>
          with key key1 = lv_i1
                   key2 = lv_i2
          binary search.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do.
        add 1 to ls_time-reads.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_apsort
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_apsort
          assigning <ls_long>
          with key key1 = lv_i1
                   key2 = lv_i2
          binary search.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_b = lv_end - lv_start.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_apsort
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_s = lv_end - lv_start.
      collect ls_time into lt_time.
      clear lt_binary.
      ls_time-what = 'BINARY'.
      ls_time-size = lv_size.
      get run time field lv_start.
      do lv_size times.
        perform fill.
        read table lt_binary
          transporting no fields
          with key key1 = ls_long-key1
                   key2 = ls_long-key2
          binary search.
        if sy-subrc <> 0.
          insert ls_long into lt_binary index sy-tabix.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-time = lv_end - lv_start.
      ls_time-reads = 0.
      ls_time-readb = 0.
      ls_time-lines = lines( lt_binary ).
      get run time field lv_start.
      do.
        add 1 to ls_time-readb.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_binary
          assigning <ls_long>
          with key key1 = lv_i1
                   key2 = lv_i2
          binary search.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do.
        add 1 to ls_time-reads.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_binary
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_binary
          assigning <ls_long>
          with key key1 = lv_i1
                   key2 = lv_i2
          binary search.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_b = lv_end - lv_start.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_binary
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_s = lv_end - lv_start.
      collect ls_time into lt_time.
      clear lt_sort_n.
      ls_time-what = 'SORT_N'.
      ls_time-size = lv_size.
      get run time field lv_start.
      do lv_size times.
        perform fill.
        insert ls_long into table lt_sort_n.
      enddo.
      get run time field lv_end.
      ls_time-time = lv_end - lv_start.
      ls_time-reads = 0.
      ls_time-readb = 0.
      ls_time-lines = lines( lt_sort_n ).
      get run time field lv_start.
      do.
        add 1 to ls_time-readb.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_n
          assigning <ls_long>
          with table key key1 = lv_i1
                         key2 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do.
        add 1 to ls_time-reads.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_n
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_n
          assigning <ls_long>
          with table key key1 = lv_i1
                         key2 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_b = lv_end - lv_start.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_n
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_s = lv_end - lv_start.
      collect ls_time into lt_time.
      clear lt_sort_u.
      ls_time-what = 'SORT_U'.
      ls_time-size = lv_size.
      get run time field lv_start.
      do lv_size times.
        perform fill.
        insert ls_long into table lt_sort_u.
      enddo.
      get run time field lv_end.
      ls_time-time = lv_end - lv_start.
      ls_time-reads = 0.
      ls_time-readb = 0.
      ls_time-lines = lines( lt_sort_u ).
      get run time field lv_start.
      do.
        add 1 to ls_time-readb.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_u
          assigning <ls_long>
          with table key key1 = lv_i1
                         key2 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do.
        add 1 to ls_time-reads.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_u
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_u
          assigning <ls_long>
          with table key key1 = lv_i1
                         key2 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_b = lv_end - lv_start.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_sort_u
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_s = lv_end - lv_start.
      collect ls_time into lt_time.
      clear lt_hash_u.
      ls_time-what = 'HASH_U'.
      ls_time-size = lv_size.
      get run time field lv_start.
      do lv_size times.
        perform fill.
        insert ls_long into table lt_hash_u.
      enddo.
      get run time field lv_end.
      ls_time-time = lv_end - lv_start.
      ls_time-reads = 0.
      ls_time-readb = 0.
      ls_time-lines = lines( lt_hash_u ).
      get run time field lv_start.
      do.
        add 1 to ls_time-readb.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_hash_u
          assigning <ls_long>
          with table key key1 = lv_i1
                         key2 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do.
        add 1 to ls_time-reads.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_hash_u
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data11 = sy-index.
        endif.
        get run time field lv_end.
        subtract lv_start from lv_end.
        if lv_end >= ls_time-time.
          exit.
        endif.
      enddo.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_hash_u
          assigning <ls_long>
          with table key key1 = lv_i1
                         key2 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_b = lv_end - lv_start.
      get run time field lv_start.
      do fax times.
        lv_i1 = lo_rnd1->get_next( ).
        lv_i2 = lo_rnd2->get_next( ).
        read table lt_hash_u
          assigning <ls_long>
          with key key2 = lv_i1
                   key1 = lv_i2.
        if sy-subrc = 0.
          <ls_long>-data21 = sy-index.
        endif.
      enddo.
      get run time field lv_end.
      ls_time-fax_s = lv_end - lv_start.
      collect ls_time into lt_time.
    endif.
  enddo.
enddo.
sort lt_time by what size.
write: / ' type      | size   | time        | tab-size    | directread  | std read    | time direct | time std read'.
write: / sy-uline.
loop at lt_time into ls_time.
  write: / ls_time-what, '|', ls_time-size, '|', ls_time-time, '|', ls_time-lines, '|', ls_time-readb, '|', ls_time-reads, '|', ls_time-fax_b, '|', ls_time-fax_s.
endloop.
form fill.
  lv_i1 = lo_rnd1->get_next( ).
  lv_i2 = lo_rnd2->get_next( ).
  ls_long-key1 = lv_i1.
  ls_long-key2 = lv_i2.
  ls_long-data1 = lv_i1.
  ls_long-data2 = lv_i2.
  ls_long-data3 = lv_i1.
  ls_long-data4 = sy-datum + lv_i1.
  ls_long-data5 = lv_i1.
  ls_long-data6 = lv_i1.
  ls_long-data7 = lv_i1.
  ls_long-data8 = sy-datum + lv_i1.
  ls_long-data9 = lv_i1.
  ls_long-dataa = lv_i1.
  ls_long-datab = lv_i1.
  ls_long-datac = sy-datum + lv_i1.
  ls_long-datad = lv_i1.
  ls_long-datae = lv_i1.
  ls_long-dataf = lv_i1.
  ls_long-datag = sy-datum + lv_i1.
  ls_long-datah = lv_i1.
  ls_long-datai = lv_i1.
  ls_long-dataj = lv_i1.
  ls_long-datak = sy-datum + lv_i1.
  ls_long-datal = lv_i1.
  ls_long-datam = lv_i1.
  ls_long-datan = sy-datum + lv_i1.
  ls_long-datao = lv_i1.
  ls_long-datap = lv_i1.
  ls_long-dataq = lv_i1.
  ls_long-datar = sy-datum + lv_i1.
  ls_long-datas = lv_i1.
  ls_long-datat = lv_i1.
  ls_long-datau = lv_i1.
  ls_long-datav = sy-datum + lv_i1.
  ls_long-dataw = lv_i1.
  ls_long-datax = lv_i1.
  ls_long-datay = lv_i1.
  ls_long-dataz = sy-datum + lv_i1.
  ls_long-data11 = lv_i1.
  ls_long-data21 = lv_i1.
  ls_long-data31 = lv_i1.
  ls_long-data41 = sy-datum + lv_i1.
  ls_long-data51 = lv_i1.
  ls_long-data61 = lv_i1.
  ls_long-data71 = lv_i1.
  ls_long-data81 = sy-datum + lv_i1.
  ls_long-data91 = lv_i1.
  ls_long-dataa1 = lv_i1.
  ls_long-datab1 = lv_i1.
  ls_long-datac1 = sy-datum + lv_i1.
  ls_long-datad1 = lv_i1.
  ls_long-datae1 = lv_i1.
  ls_long-dataf1 = lv_i1.
  ls_long-datag1 = sy-datum + lv_i1.
  ls_long-datah1 = lv_i1.
  ls_long-datai1 = lv_i1.
  ls_long-dataj1 = lv_i1.
  ls_long-datak1 = sy-datum + lv_i1.
  ls_long-datal1 = lv_i1.
  ls_long-datam1 = lv_i1.
  ls_long-datan1 = sy-datum + lv_i1.
  ls_long-datao1 = lv_i1.
  ls_long-datap1 = lv_i1.
  ls_long-dataq1 = lv_i1.
  ls_long-datar1 = sy-datum + lv_i1.
  ls_long-datas1 = lv_i1.
  ls_long-datat1 = lv_i1.
  ls_long-datau1 = lv_i1.
  ls_long-datav1 = sy-datum + lv_i1.
  ls_long-dataw1 = lv_i1.
  ls_long-datax1 = lv_i1.
  ls_long-datay1 = lv_i1.
  ls_long-dataz1 = sy-datum + lv_i1.
endform.
Thanks & Regards,
YJR.

Similar Messages

  • Question about sorting files in media library

    Hey,
    I'm Sam Hoste from Belgium and I have a question about sorting music files in my media library in iTunes.
    My music collection mainly consists of complete albums rather than individual songs. The problem is that, in the iTunes media library, I cannot play the songs of an album in the same order as on the album's tracklist.
    I also take very good care of the ID3 tags from my mp3-files, but that doesn't help.
    For example, if I had an album with the following tracklist,
    01. Artist - Song 1
    02. Artist - Song 2
    03. Artist - Song 3
    04. Artist ft Otherartist - Song 4
    05. Artist ft Otherartist - Song 5
    06. Artist - Song 6
    07. Artist - Song 7,
    every music file would have an ID3 tag for tracknumber (1, 2, 3, ...), an ID3 tag for artist ("Artist" or "Artist ft Otherartist"), an ID3 tag for title (Song 1, ...) and an ID3 tag for album ("Albumname").
    When I load this album in my media library in iTunes, I cannot get them sorted by album and by tracknumber on that album. When I sort by artist I get this list:
    01. Artist - Song 1
    02. Artist - Song 2
    03. Artist - Song 3
    06. Artist - Song 6
    07. Artist - Song 7
    04. Artist ft Otherartist - Song 4
    05. Artist ft Otherartist - Song 5.
    That's normal, but even if I sort by album I get the same list:
    01. Artist - Song 1
    02. Artist - Song 2
    03. Artist - Song 3
    06. Artist - Song 6
    07. Artist - Song 7
    04. Artist ft Otherartist - Song 4
    05. Artist ft Otherartist - Song 5.
    So the songs aren't in the order they appear on the CD, and I would really like my albums to play in the same order as on the tracklist of the CD.
    I think the problem is that when you sort by "album" in the iTunes media library, iTunes sorts first by album name, second by artist name, and only third by track number, instead of sorting first by album name and then by track number.
    My question: is there a way to make iTunes sort by album name and then by track number instead of artist, so I can play my albums in the same order as on the CD? Or is there another solution for this issue?
    Thanks and kind regards,
    Sam Hoste

    See my previous post on Grouping Tracks Into Albums, in particular the topics Use an album friendly view and
    Tracks out of sequence.
    tt2

  • SORTED & HASHED tables

    Hi all
         What exactly are SORTED and HASHED tables?
    Regards
    Srini

    Internal tables are at the core of ABAP; almost every program uses them extensively. You can work with internal tables
    much like database tables, but the basic difference is that the memory allocated to an internal table is temporary: once
    the program ends, that memory is released.
    When using internal tables there are also performance considerations, in particular which type of internal table to use
    for the program: a standard, sorted or hashed internal table.
    Internal tables
    Internal tables provide a means of taking data from a fixed structure and storing it in working memory in ABAP. The data is stored line by
    line in memory, and each line has the same structure. In ABAP, internal tables fulfill the function of arrays. Since they are dynamic data
    objects, they save the programmer the task of dynamic memory management in his or her programs. You should use internal tables
    whenever you want to process a dataset with a fixed structure within a program. A particularly important use for internal tables is for
    storing and formatting data from a database table within a program. They are also a good way of including very complicated data
    structures in an ABAP program.
    Like all elements in the ABAP type concept, internal tables can exist both as data types and as data objects A data type is the abstract
    description of an internal table, either in a program or centrally in the ABAP Dictionary, that you use to create a concrete data object. The
    data type is also an attribute of an existing data object.
    Internal Tables as Data Types
    Internal tables and structures are the two structured data types in ABAP. The data type of an internal table is fully specified by its line type,
    key, and table type.
    Line type
    The line type of an internal table can be any data type. The data type of an internal table is normally a structure. Each component of the
    structure is a column in the internal table. However, the line type may also be elementary or another internal table.
    Key
    The key identifies table rows. There are two kinds of key for internal tables - the standard key and a user-defined key. You can specify
    whether the key should be UNIQUE or NON-UNIQUE. Internal tables with a unique key cannot contain duplicate entries. The uniqueness
    depends on the table access method.
    If a table has a structured line type, its default key consists of all of its non-numerical columns that are not references or themselves
    internal tables. If a table has an elementary line type, the default key is the entire line. If the line type of an internal table is itself
    an internal table, the default key is empty.
    The user-defined key can contain any columns of the internal table that are not references or themselves internal tables. Internal tables
    with a user-defined key are called key tables. When you define the key, the sequence of the key fields is significant. You should remember
    this, for example, if you intend to sort the table according to the key.
    Table type
    The table type determines how ABAP will access individual table entries. Internal tables can be divided into three types:
    Standard tables have an internal linear index. From a particular size upwards, the indexes of internal tables are administered as trees. In
    this case, the index administration overhead increases in logarithmic and not linear relation to the number of lines. The system can access
    records either by using the table index or the key. The response time for key access is proportional to the number of entries in the table.
    The key of a standard table is always non-unique. You cannot specify a unique key. This means that standard tables can always be filled
    very quickly, since the system does not have to check whether there are already existing entries.
    Sorted tables are always saved sorted by the key. They also have an internal index. The system can access records either by using the
    table index or the key. The response time for key access is logarithmically proportional to the number of table entries, since the system
    uses a binary search. The key of a sorted table can be either unique or non-unique. When you define the table, you must specify whether
    the key is to be unique or not. Standard tables and sorted tables are known generically as index tables.
    Hashed tables have no linear index. You can only access a hashed table using its key. The response time is independent of the number of
    table entries, and is constant, since the system accesses the table entries using a hash algorithm. The key of a hashed table must be unique.
    When you define the table, you must specify the key as UNIQUE.
    Generic Internal Tables
    Unlike other local data types in programs, you do not have to specify the data type of an internal table fully. Instead, you can specify a
    generic construction, that is, the key or key and line type of an internal table data type may remain unspecified. You can use generic
    internal tables to specify the types of field symbols and the interface parameters of procedures . You cannot use them to declare data
    objects.
    Internal Tables as Dynamic Data Objects
    Data objects that are defined either with the data type of an internal table, or directly as an internal table, are always fully defined in
    respect of their line type, key and access method. However, the number of lines is not fixed. Thus internal tables are dynamic data objects,
    since they can contain any number of lines of a particular type. The only restriction on the number of lines an internal table may contain are
    the limits of your system installation. The maximum memory that can be occupied by an internal table (including its internal administration)
    is 2 gigabytes. A more realistic figure is up to 500 megabytes. An additional restriction for hashed tables is that they may not contain more
    than 2 million entries. The line types of internal tables can be any ABAP data types - elementary, structured, or internal tables. The
    individual lines of an internal table are called table lines or table entries. Each component of a structured line is called a column in the
    internal table.
    Choosing a Table Type
    The table type (and particularly the access method) that you will use depends on how the typical internal table operations will be most
    frequently executed.
    Standard tables
    This is the most appropriate type if you are going to address the individual table entries using the index. Index access is the quickest
    possible access. You should fill a standard table by appending lines (ABAP APPEND statement), and read, modify and delete entries by
    specifying the index (INDEX option with the relevant ABAP command). The access time for a standard table increases in a linear relationship
    with the number of table entries. If you need key access, standard tables are particularly useful if you can fill and process the table in
    separate steps. For example, you could fill the table by appending entries, and then sort it. If you use the binary search option with key
    access, the response time is logarithmically proportional to the number of table entries.
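    As a minimal sketch of this pattern (names and values are purely illustrative), a standard table is filled with APPEND, sorted once, and then read with the BINARY SEARCH addition:
    DATA: BEGIN OF WA,
            ID   TYPE I,
            TEXT(20) TYPE C,
          END OF WA,
          ITAB LIKE STANDARD TABLE OF WA.
    DO 1000 TIMES.
      WA-ID   = SY-INDEX.
      WA-TEXT = SY-INDEX.
      APPEND WA TO ITAB.
    ENDDO.
    SORT ITAB BY ID.
    READ TABLE ITAB INTO WA WITH KEY ID = 42 BINARY SEARCH.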
    Sorted tables
    This is the most appropriate type if you need a table which is sorted as you fill it. You fill sorted tables using the INSERT statement. Entries
    are inserted according to the sort sequence defined through the table key. Any illegal entries are recognized as soon as you try to add
    them to the table. The response time for key access is logarithmically proportional to the number of table entries, since the system always
    uses a binary search. Sorted tables are particularly useful for partially sequential processing in a LOOP if you specify the beginning of the
    table key in the WHERE condition.
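    As a minimal sketch of such partially sequential processing (names and values are purely illustrative), a loop over a sorted table restricted by the leading key field:
    TYPES: BEGIN OF TY_ITEM,
             DOCNO(10) TYPE C,
             ITEMNO(5) TYPE N,
             AMOUNT    TYPE P DECIMALS 2,
           END OF TY_ITEM.
    DATA ITEMS TYPE SORTED TABLE OF TY_ITEM
               WITH NON-UNIQUE KEY DOCNO ITEMNO.
    FIELD-SYMBOLS <ITEM> TYPE TY_ITEM.
    " The WHERE condition covers the leading key field DOCNO, so the
    " kernel can restrict the loop to the matching key range.
    LOOP AT ITEMS ASSIGNING <ITEM> WHERE DOCNO = '4900000001'.
      WRITE: / <ITEM>-ITEMNO, <ITEM>-AMOUNT.
    ENDLOOP.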
    Hashed tables
    This is the most appropriate type for any table where the main operation is key access. You cannot access a hashed table using its index.
    The response time for key access remains constant, regardless of the number of table entries. Like database tables, hashed tables always
    have a unique key. Hashed tables are useful if you want to construct and use an internal table which resembles a database table or for
    processing large amounts of data.
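    A minimal sketch of this use (the selection criterion and material number are only illustrative values) is a hashed table that mirrors a database table and is read exclusively via its full unique key:
    DATA: MATERIALS TYPE HASHED TABLE OF MARA WITH UNIQUE KEY MATNR,
          MATERIAL  TYPE MARA.
    " Fill the internal table once from the database.
    SELECT * FROM MARA INTO TABLE MATERIALS WHERE MTART = 'FERT'.
    " Constant-time key access; the material number is a made-up example.
    READ TABLE MATERIALS INTO MATERIAL WITH TABLE KEY MATNR = 'MAT-0001'.
    IF SY-SUBRC = 0.
      WRITE: / MATERIAL-MATNR, MATERIAL-MTART.
    ENDIF.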
    Creating Internal Tables
    Like other elements in the ABAP type concept, you can declare internal tables as abstract data
    types in programs or in the ABAP Dictionary, and then use them to define data objects.
    Alternatively, you can define them directly as data objects. When you create an internal table as a
    data object, you should ensure that only the administration entry which belongs to an internal
    table is declared statically. The minimum size of an internal table is 256 bytes. This is important if an
    internal table occurs as a component of an aggregated data object, since even empty internal
    tables within tables can lead to high memory usage. (In the next functional release, the size of the
    table header for an initial table will be reduced to 8 bytes). Unlike all other ABAP data objects, you
    do not have to specify the memory required for an internal table. Table rows are added to and
    deleted from the table dynamically at runtime by the various statements for adding and deleting
    records.
    You can create internal tables of different types.
    For example, you can create a standard internal table and then sort it inside the program,
    or declare it as a hashed or sorted internal table instead.
    There are performance differences between standard internal tables, hashed internal
    tables and sorted internal tables.
    Internal table types
    This section describes how to define internal tables locally in a program. You can also define internal tables globally as data types in the
    ABAP Dictionary.
    Like all local data types in programs , you define internal tables using the TYPES statement. If you do not refer to an existing table type
    using the TYPE or LIKE addition, you can use the TYPES statement to construct a new local internal table in your program.
    TYPES <t> TYPE|LIKE <tabkind> OF <linetype> [WITH <key>]
    [INITIAL SIZE <n>].
    After TYPE or LIKE, there is no reference to an existing data type. Instead, the type constructor occurs:
    <tabkind> OF <linetype> [WITH <key>]
    The type constructor defines the table type <tabkind>, the line type <linetype>, and the key <key> of the internal table <t>.
    You can, if you wish, allocate an initial amount of memory to the internal table using the INITIAL SIZE addition.
    Table type
    You can specify the table type <tabkind> as follows:
    Generic table types
    INDEX TABLE
    For creating a generic table type with index access.
    ANY TABLE
    For creating a fully-generic table type.
    Data types defined using generic types can currently only be used for field symbols and for interface parameters in procedures . The generic
    type INDEX TABLE includes standard tables and sorted tables. These are the two table types for which index access is allowed. You cannot
    pass hashed tables to field symbols or interface parameters defined in this way. The generic type ANY TABLE can represent any table. You
    can pass tables of all three types to field symbols and interface parameters defined in this way. However, these field symbols and
    parameters will then only allow operations that are possible for all tables, that is, index operations are not allowed.
    Fully-Specified Table Types
    STANDARD TABLE or TABLE
    For creating standard tables.
    <b>SORTED TABLE</b>
    For creating sorted tables.
    <b>HASHED TABLE</b>
    For creating hashed tables.
    Fully-specified table types determine how the system will access the entries in the table in key operations. It uses a linear search for
    standard tables, a binary search for sorted tables, and a search using a hash algorithm for hashed tables.
    Line type
    For the line type <linetype>, you can specify:
    Any data type if you are using the TYPE addition. This can be a predefined ABAP type, a local type in the program, or a data type from the
    ABAP Dictionary. If you specify any of the generic elementary types C, N, P, or X, any attributes that you fail to specify (field length, number
    of decimal places) are automatically filled with the default values. You cannot specify any other generic types.
    Any data object recognized within the program at that point if you are using the LIKE addition. The line type adopts the fully-specified data
    type of the data object to which you refer. Except for within classes, you can still use the LIKE addition to refer to database tables and
    structures in the ABAP Dictionary (for compatibility reasons).
    All of the lines in the internal table have the fully-specified technical attributes of the specified data type.
    Key
    You can specify the key <key> of an internal table as follows:
    [UNIQUE|NON-UNIQUE] KEY <col1> ... <col n>
    In tables with a structured line type, all of the components <coli> belong to the key as long as they are not internal tables or references,
    and do not contain internal tables or references. Key fields can be nested structures. The substructures are expanded component by
    component when you access the table using the key. The system follows the sequence of the key fields.
    [UNIQUE|NON-UNIQUE] KEY TABLE LINE
    If a table has an elementary line type (C, D, F, I, N, P, T, X), you can define the entire line as the key. If you try this for a table whose line
    type is itself a table, a syntax error occurs. If a table has a structured line type, it is possible to specify the entire line as the key. However,
    you should remember that this is often not suitable.
    [UNIQUE|NON-UNIQUE] DEFAULT KEY
    This declares the fields of the default key as the key fields. If the table has a structured line type, the default key contains all non-numeric
    columns of the internal table that are not and do not contain references or internal tables. If the table has an elementary line type, the
    default key is the entire line. If the line type of the internal table is itself an internal table, the default key is empty.
    Specifying a key is optional. If you do not specify a key, the system defines a table type with an arbitrary key. You can only use this to
    define the types of field symbols and the interface parameters of procedures . For exceptions, refer to Special Features of Standard Tables.
    The optional additions UNIQUE or NON-UNIQUE determine whether the key is to be unique or non-unique, that is, whether the table can
    accept duplicate entries. If you do not specify UNIQUE or NON-UNIQUE for the key, the table type is generic in this respect. As such, it can
    only be used for specifying types. When you specify the table type simultaneously, you must note the following restrictions:
    You cannot use the UNIQUE addition for standard tables. The system always generates the NON-UNIQUE addition automatically.
    You must always specify the UNIQUE option when you create a hashed table.
    Initial Memory Requirement
    You can specify the initial amount of main memory assigned to an internal table object when you define the data type using the following
    addition:
    INITIAL SIZE <n>
    This size does not belong to the data type of the internal table, and does not affect the type check. You can use the above addition to
    reserve memory space for <n> table lines when you declare the table object.
    When this initial area is full, the system makes twice as much extra space available up to a limit of 8KB. Further memory areas of 12KB each
    are then allocated.
    You can usually leave it to the system to work out the initial memory requirement. The first time you fill the table, little memory is used. The
    space occupied, depending on the line width, is 16 <= <n> <= 100.
    It only makes sense to specify a concrete value of <n> if you can specify a precise number of table entries when you create the table and
    need to allocate exactly that amount of memory (exception: Appending table lines to ranked lists). This can be particularly important for
    deep-structured internal tables where the inner table only has a few entries (less than 5, for example).
    To avoid excessive requests for memory, large values of <n> are treated as follows: The largest possible value of <n> is 8KB divided by the
    length of the line. If you specify a larger value of <n>, the system calculates a new value so that n times the line width is around 12KB.
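    For illustration, the following declaration reserves space for roughly 100 lines up front; the value is only an example and is only worthwhile when the number of entries is known in advance:
    DATA NUMBERS TYPE STANDARD TABLE OF I
                 WITH NON-UNIQUE DEFAULT KEY
                 INITIAL SIZE 100.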
    Examples
    TYPES: BEGIN OF LINE,
             COLUMN1 TYPE I,
             COLUMN2 TYPE I,
             COLUMN3 TYPE I,
           END OF LINE.
    TYPES ITAB TYPE SORTED TABLE OF LINE WITH UNIQUE KEY COLUMN1.
    The program defines a table type ITAB. It is a sorted table, with line type of the structure LINE and a unique key of the component
    COLUMN1.
    TYPES VECTOR TYPE HASHED TABLE OF I WITH UNIQUE KEY TABLE LINE.
    TYPES: BEGIN OF LINE,
             COLUMN1 TYPE I,
             COLUMN2 TYPE I,
             COLUMN3 TYPE I,
           END OF LINE.
    TYPES ITAB TYPE SORTED TABLE OF LINE WITH UNIQUE KEY COLUMN1.
    TYPES: BEGIN OF DEEPLINE,
             FIELD  TYPE C,
             TABLE1 TYPE VECTOR,
             TABLE2 TYPE ITAB,
           END OF DEEPLINE.
    TYPES DEEPTABLE TYPE STANDARD TABLE OF DEEPLINE
          WITH DEFAULT KEY.
    The program defines a table type VECTOR with type hashed table, the elementary line type I and a unique key of the entire table line. The
    second table type is the same as in the previous example. The structure DEEPLINE contains the internal table as a component. The table
    type DEEPTABLE has the line type DEEPLINE. Therefore, the elements of this internal table are themselves internal tables. The key is the
    default key - in this case the column FIELD. The key is non-unique, since the table is a standard table.
    Internal table objects
    Internal tables are dynamic variable data objects. Like all variables, you declare them using the DATA statement. You can also declare static
    internal tables in procedures using the STATICS statement, and static internal tables in classes using the CLASS-DATA statement. This
    description is restricted to the DATA statement. However, it applies equally to the STATICS and CLASS-DATA statements.
    Reference to Declared Internal Table Types
    Like all other data objects, you can declare internal table objects using the LIKE or TYPE addition of the DATA statement.
    DATA <itab> TYPE <type>|LIKE <obj> [WITH HEADER LINE].
    Here, the LIKE addition refers to an existing table object in the same program. The TYPE addition can refer to an internal type in the
    program declared using the TYPES statement, or a table type in the ABAP Dictionary.
    You must ensure that you only refer to tables that are fully typed. Referring to generic table types (ANY TABLE, INDEX TABLE) or not
    specifying the key fully is not allowed (for exceptions, refer to Special Features of Standard Tables).
    The optional addition WITH HEADER line declares an extra data object with the same name and line type as the internal table. This data
    object is known as the header line of the internal table. You use it as a work area when working with the internal table (see Using the
    Header Line as a Work Area). When you use internal tables with header lines, you must remember that the header line and the body of the
    table have the same name. If you have an internal table with header line and you want to address the body of the table, you must indicate
    this by placing brackets after the table name (<itab>[]). Otherwise, ABAP interprets the name as the name of the header line and not of the
    body of the table. You can avoid this potential confusion by using internal tables without header lines. In particular, internal tables nested
    in structures or other internal tables must not have a header line, since this can lead to ambiguous expressions.
    TYPES VECTOR TYPE SORTED TABLE OF I WITH UNIQUE KEY TABLE LINE.
    DATA: ITAB TYPE VECTOR,
          JTAB LIKE ITAB WITH HEADER LINE.
    MOVE ITAB TO JTAB.    "<- Syntax error!
    MOVE ITAB TO JTAB[].
    The table object ITAB is created with reference to the table type VECTOR. The table object JTAB has the same data type as ITAB. JTAB also
    has a header line. In the first MOVE statement, JTAB addresses the header line. Since this has the data type I, and the table type of ITAB
    cannot be converted into an elementary type, the MOVE statement causes a syntax error. The second MOVE statement is correct, since
    both operands are table objects.
    Declaring New Internal Tables
    You can use the DATA statement to construct new internal tables as well as using the LIKE or TYPE addition to refer to existing types or
    objects. The table type that you construct does not exist in its own right; instead, it is only an attribute of the table object. You can refer to
    it using the LIKE addition, but not using TYPE. The syntax for constructing a table object in the DATA statement is similar to that for defining
    a table type in the TYPES statement.
    DATA <itab> TYPE|LIKE <tabkind> OF <linetype> WITH <key>
    [INITIAL SIZE <n>]
    [WITH HEADER LINE].
    As when you define a table type , the type constructor
    <tabkind> OF <linetype> WITH <key>
    defines the table type <tabkind>, the line type <linekind>, and the key <key> of the internal table <itab>. Since the technical attributes of
    data objects are always fully specified, the table must be fully specified in the DATA statement. You cannot create generic table types (ANY
    TABLE, INDEX TABLE), only fully-typed tables (STANDARD TABLE, SORTED TABLE, HASHED TABLE). You must also specify the key and whether
    it is to be unique (for exceptions, refer to Special Features of Standard Tables).
    As in the TYPES statement, you can, if you wish, allocate an initial amount of memory to the internal table using the INITIAL SIZE addition.
    You can create an internal table with a header line using the WITH HEADER LINE addition. The header line is created under the same
    conditions as apply when you refer to an existing table type.
    DATA ITAB TYPE HASHED TABLE OF SPFLI
    WITH UNIQUE KEY CARRID CONNID.
    The table object ITAB has the type hashed table, a line type corresponding to the flat structure SPFLI from the ABAP Dictionary, and a
    unique key with the key fields CARRID and CONNID. The internal table ITAB can be regarded as an internal template for the database table
    SPFLI. It is therefore particularly suitable for working with data from this database table as long as you only access it using the key.

  • A question about sorting tables.

    Hello All!
    Just a quick question: performance-wise, which is better, declaring an itab as a 'sorted table of' or sorting it later with 'sort itab'?
    Thanks in advance!
    Moderator message: please try yourself and search for available information and previous, similar discussions.
    Edited by: Thomas Zloch on Feb 23, 2012

    Hi Kevin,
    follow, for example, more or less the code given in this thread: Functionality to dynamically sort tableview columns, and implement a <i>compare</i> method corresponding to your needs (for example using the <i>compareToIgnoreCase</i> method of <i>String</i>).
    Hope it helps
    Detlev

  • URGENT - Sorting Hash Tables

    I am using a hash table with 2 columns. The first one has strings and is the key. The second column has integers.
    I need to sort this table on the first column and print the contents of the table.
    Then i need to sort it on the second column and print the results.
    How do I sort the hashtables?
    Please let me know as soon as possible.
    Thanks and Regards,
    Vijay

    You got it all wrong. Hashtables cannot be sorted because then it would not be a hashtable. The content of the Hashtable can be sorted.
    What you want to do is get the key Set (keySet() method) of the Hashtable, wrap it in a List (e.g. LinkedList), sort that (see java.util.Collections for sorting) and then print out the contents of the Hashtable in the order pointed out by the keys in the sorted List.
    Then you can do the same for the values() Collection of the Hashtable.
    Pointers:
    http://java.sun.com/j2se/1.4/docs/api/java/util/Hashtable.html
    http://java.sun.com/j2se/1.4/docs/api/java/util/Set.html
    http://java.sun.com/j2se/1.4/docs/api/java/util/List.html
    http://java.sun.com/j2se/1.4/docs/api/java/util/LinkedList.html
    http://java.sun.com/j2se/1.4/docs/api/java/util/Collections.html
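    The same idea applies to the ABAP hashed tables discussed above, which also have no order of their own. A purely illustrative sketch (the structure and field names are made up) copies the lines into a sorted table for ordered output:
    TYPES: BEGIN OF TY_COUNT,
             WORD(20) TYPE C,
             COUNT    TYPE I,
           END OF TY_COUNT.
    DATA: WORDS        TYPE HASHED TABLE OF TY_COUNT WITH UNIQUE KEY WORD,
          WORDS_SORTED TYPE SORTED TABLE OF TY_COUNT WITH UNIQUE KEY WORD,
          WA           TYPE TY_COUNT.
    " ... fill WORDS with INSERT ... INTO TABLE WORDS ...
    WORDS_SORTED = WORDS.          " the copy is kept sorted by WORD
    LOOP AT WORDS_SORTED INTO WA.
      WRITE: / WA-WORD, WA-COUNT.
    ENDLOOP.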

  • Another one question about how to download applet (not using external tool)

    Hi
    I am writing a tool for loading applets onto a card. I use the APDU trace from the NXP Eclipse plugin as a source of information, and I have also read about the CAP file format in the Java Card Virtual Machine specification. I have some questions about the data transferred by the LOAD command.
    As an example, from the APDU trace I see that the transferred data is "C4820E33 DATA1 DATA2". The full length of the transferred data is 0x2EE2.
    C4 - OK, this is the "Load File Data Block" tag as specified in Global Platform.
    820E33 - OK, this is the length of the tag, = 0x0E33.
    DATA1 - a sequence of CAP file components: Header.cap, Directory.cap, Import.cap, Applet.cap, Class.cap, Method.cap, StaticField.cap, ConstantPool.cap, RefLocation.cap. The length of DATA1 is 0x0E33, i.e. DATA1 = the 'C4' tag value.
    DATA2 - a sequence of two CAP file components: Descriptor.cap and Debug.cap. These components are outside the 'C4' tag.
    Here are the questions mentioned above:
    1. Global Platform does not define any data in the LOAD command except the 'E2' and 'C4' tags. Why is DATA2 transferred outside any tag?
    2. Is the sequence of CAP file components important? I.e. can I load Header.cap, Directory.cap, etc. in a different order than I see in the DATA1 field of the APDU trace?
    3. Debug.cap seems to be an optional component. And what about the Descriptor.cap component? Do I need to load it onto the card?

    666 wrote:
    1. Why is DATA2 transferred outside any tag? Because the components are either optional or only required when communicating with a JCRE that has debugging capabilities. I assume you ran the project in JCOP Tools in debug mode against the simulator? If you did this against a real card it would fail, as it does not have an instrumented JCRE capable of debugging code. You could try running the project as opposed to debugging it to see the difference.
    2. Is the sequence of CAP file components important? Yes, it is. It is defined in the JCVM specification available from the Oracle website.
    3. Do I need to load the Descriptor.cap component onto the card? No, it is optional and is not referenced by any other CAP file component.
    Cheers,
    Shane

  • Question about Workspace enabled tables, triggers

    Hello,
    We have created our table structures from Oracle Designer, and we also have several triggers and an API for almost all the tables. While reading about Workspace Manager, it seems that after I execute dbms_wm.enableversioning, it would rename my table to a view and would create other views and tables. I have the following questions:
    1) What would happen to the triggers that are based on the original tables? Would the triggers be attached to the VIEW definition of the original table?
    2) Let's assume I have customer and cust_address tables, and a customer id = 100 which has two customer address rows in LIVE. I have a workspace based on the cust_address table. Would it be possible for a user to see all the address data (live + workspace) for customer = 100 in one SQL query?
    3) If I have two live customer address rows in the cust_address table and 3 rows in the workspace of the cust_address table, does Workspace Manager allow me to conditionally add and update workspace rows to LIVE data, i.e. could I add one row from the workspace to LIVE and update one row from the workspace to LIVE?
    Thanks for your answers?
    Syed

    Ben,
    Here is our business problem. We receive data as text files regarding our customers. The files contain customer name and address information and the customers' payments and debts information. For the sake of simplicity, let's assume I have customer, address, payment, debts and referral tables. From the file we only load the customer, address, payments and debts tables; once the data is loaded, an end user creates referral data, which is stored in the referral table.
    When we receive data, we are not sure whether it contains any unique customer key (customer id, SSN, etc.), so we try to match the incoming data based on last name, first name, date of birth, SSN, address, etc. If the system finds a match, it creates rows in all four tables (customer, address, payments and debts). If the system does not find a match, I have another table called potential_match (with columns similar to the customer table plus matched_customer_id), and the system inserts the customer rows into this table and also inserts the other data, such as address, payments and debts, into the respective tables. I want to process all this data immediately because the business need is to create referral data immediately.
    When an end user finds some time, they go to the potential_match table and try to identify whether two customers are really the same. If they decide that the two customers' data are the same, we have to associate the address, payments, debts and referral data (and some data in other tables created by other processes) with the original customer. So I need to write PL/SQL which basically adds all the child records of customer B, say, to customer A if A and B are the same.
    I was looking into Workspace Manager and found that it might be a possible solution, but I am not sure how I would use WSM so that end users do not have to change the way they work. If I do not add a customer's other data (address, payments, debt, etc.) to LIVE immediately, then they would not be able to search for the customer's debt while the debt is in a workspace. I want to give them a transparent view of LIVE and production data at all times (I do not want to give them the gotoWorkspace feature). Only a few users will have access to the potential_match table and gotoWorkspace, and they will be able to merge/associate data to the original customers, i.e. removing it from the workspace and adding it to LIVE. Sometimes users may also want to unmatch: if they matched two customers into one and it later turns out to have been a wrong merge, they should be able to unmerge them back into two different customers with their associated data. Users also want to run temporal queries, e.g. to go back in time and see customer debt data as of Dec 2004.
    Based on your experience, how would I design my workspaces? Do you think WSM is the way to go, or should I use my own custom PL/SQL code to implement this? I have never used it, so I am not sure; I need to understand its implications. Currently we are using Oracle 8.1.7, but very soon we are moving to 10g.
    Thanks for all your help.
    Syed

  • How  Hash tables can be used  in PI mapping

    Hi Experts,
    I don't have any idea how to store values in hash tables or how to implement them in mapping.
    In my scenario I have two fields, matnum and quantity. If matnum is not null, then we have to check whether the matnum exists in the hash table and also check whether the hash table is empty or not.
    How can we do this in graphical message mapping?
    How do I store the variable matnum in a table?
    If global variables are used, how do I implement this in the mapping, and how do we access the keys of the hash table?

    Divya,
    We have a similar requirement for getting different values. Below, param1 may be your matnum and param2 the quantity.
    What you need to do is first declare a global variable (A), fill the hash table as below (B), and retrieve values (C) based on an index. You can tweak the code based on your requirements.
    (A) Declare a global variable (last icon in the message mapping toolbar):
         String globalString[] = new String[10];
    (B) Fill Hash Table
    import java.util.Hashtable;
    public void saveparam1(String[] param1, String[] param2, ResultList result, Container container){
      Hashtable htparam1 = new Hashtable();
      int Indx = 0;
      for (int i = 0; i < param1.length; i++) {
        String strparam1 = param1[i].trim();
        if (strparam1.length() > 0) {
          Object obj = htparam1.get(strparam1);
          if (obj == null){
            globalString[Indx++] = strparam1;
            htparam1.put(strparam1, strparam1);
          }
        }
      }
      if (Indx < globalString.length) {
        for (int i = 0; i < param2.length; i++) {
          String strparam2 = param2[i].trim();
          if (strparam2.length() > 0) {
            Object obj = htparam1.get(strparam2);
            if (obj == null){
              globalString[Indx++] = strparam2;
              htparam1.put(strparam2, strparam2);
            }
          }
        }
      }
      result.addValue(globalString[0]); // for first value
    }
    (C) for subsequent reading/accessing
    // Pass whatever (1-based) position is required to this function as a constant.
    // The method name and signature below are illustrative for a simple UDF.
    public String getGlobalValue(String index, Container container) {
      String retValue = "";
      int indx = Integer.parseInt(index) - 1;      // convert to 0-based index
      if ((indx >= 0) && (indx < globalString.length)) {
        retValue = globalString[indx];
      }
      return retValue;
    }
    Hope this helps!

  • Purchase requisitions into table EPRTRANS when using release strategy

    Hi,
    We are working with release strategy.
    It is not possible to insert purchase requisitions into the table EPRTRANS when I choose, e.g., a material group which
    has been declared as a relevant criterion for the release strategy. How can this problem be resolved?
    Regards
    Alexander
    Edited by: AlexanderZiegler on Sep 23, 2009 11:25 AM

    Please comment out this PERFORM:
         PERFORM bbp_release_check USING l_xeban
                                         l_yeban
                                CHANGING no_update
                                         now_released.
         IF no_update = 'X'.                   " preq not yet released
           CONTINUE.
         ENDIF.

  • Question about Finder file listing order when sorting by "Name"

    I quite often add a suffix to some of my file names when I save them so that I can keep a complete history of changes. For example, I usually add "_000" at the end, and progress to "_001" through "_009". However after "_009" I don't increase the next digit to "_010" but instead, after "_009" comes "_00A" then "_00B" on to "_00Z" and then to "_010". I have been doing this for a while now (several years) and it has worked well for me.
    I have noticed in the Finder window, when in "List" view, that a file ending in "_00A" is listed before a file ending in "_009" instead of the other way around.
    I think that this is due to my UNIX Locale being set to LANG="en_US.UTF-8"
    1. Is this a correct assumption?
    I would like it to be listed in POSIX order.
    2. Is there a way I can make just specific folders list file names in POSIX order?
    I was thinking of changing my Language to POSIX using the terminal command:
    $ export LANG=POSIX
    but this may have undesirable effects with the rest of Mac OS X.
    3. Is there a finder preference (plist item) I can change so this just changes the listing order to POSIX? (I have no problem if all Finder windows use this sort order for file names.)
    Thank you in advance.
    Jeff Cameron

    I don't know of a GUI way to change it, or whether your unix trick will work or do anything, but the Mac OS has always sorted that way, if I'm understanding you correctly. It looks at the entirety of the name and puts 2 immediately after 1 instead of putting 10 there like Windows does (and alpha comes before numeric).
    I don't think the unix underpinnings change the sort order, but I may be remembering wrong. It's been a long time since OS 9. That's why I don't think the LANG switch will work.

  • Question about the measures heading of a pivot table view when using 11g

    Hi experts,
    I have been working on OBIEE 10g for 2 years, and lately I started developing on OBIEE 11g. It is great to discover so many new features which make reporting so much easier and better looking; meanwhile, I have been confused by some of the detail changes that were made. I was able to overcome most of the difficulties I met, but now there is one I cannot figure out. Any idea or suggestion is appreciated.
    In OBIEE 10g, when we develop a report using a pivot table and put columns A and B in the Rows area and column C in the Measures area, the result looks like this:
        A   | B   | C
        VA1 | VB1 | VC1
        VA1 | VB1 | VC1
    But now that I am using 11g, when I put these columns in the same areas as I did in 10g, the result is surprisingly like this:
        (blank)   | C
        A   | B   | (blank)
        VA1 | VB1 | VC1
        VA1 | VB1 | VC1
    It looks weird that it puts the row labels and the measure label on different lines and creates blank labels; it barely makes any sense. I am OK with the blank above the row labels, but not with the blank below the measure label. A blank label that separates a label from its values is not what I want.
    Can anyone help me to make the pivot table look the same as it does in 10g? Or at least remove the blank label below the Measure label?
    Thanks in advance.

    Hi,
    I am facing the same issue, have you solved it?

  • Question about creating new tables using SQL script in WebLogic Server

    Hi,
    I am new to WebLogic and I am following a book Java EE Development with Eclipse published by PACKT Publishing to learn
    Java EE.  I have installed Oracle Enterprise Pack for Eclipse on the PC and I am able to log into the WebLogic Server Administration Console
    and set up a Data Source.  However the next step is to create tables for the database.  The book says that the tables can be created using
    SQL script run from the SQL command line.
    I cannot see any way of entering SQL script in the WebLogic Server Administration Console. Also, there is no SQL command line in DOS.
    Thanks  for your help.
    Brian.

    Sounds like you are supposed to run the scripts provided by the tutorial to create the tables, right?  In that case, you may need to install an Oracle client to connect to your database.  The client is automatically installed with the database, so if you have access to the server that hosts the database, you should be able to run SQL*Plus from there (see the example below).
    As far as I know, there is no way to run a script from the Admin Console.  I could be wrong, however.
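    As a rough illustration (the user name, connect string and script name below are placeholders, not taken from the book), running a DDL script with SQL*Plus from a command prompt on the database host would look something like this:
        -- From a command prompt on a machine with an Oracle client installed:
        --   sqlplus jdbc_user/password@localhost:1521/XE @create_tables.sql
        -- where create_tables.sql contains ordinary DDL, for example:
        CREATE TABLE customer (
          id      NUMBER PRIMARY KEY,
          name    VARCHAR2(100),
          created DATE DEFAULT SYSDATE
        );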

  • Sorting a table which is used as a reference

    Hi,
    I am not 100% sure that I have all the terminology correct here, so please excuse me if I am calling "things" by the wrong name.
    I have a table which has a list of racing car numbers which are not in any particular order.
    !http://img571.imageshack.us/img571/4305/screenshot20100602at122.png!
    I then have lots of smaller tables looking at this list, for instance, table 1 looks at the first car number cell then table 2 looks at the second car number cell etc..
    So before the list of car numbers is sorted, table 1 looks at the list and takes the first number off the list, in this example 5, and table 2 takes the second number off the list, in this example 3, etc.
    So table 1 has a 5 in it and table 2 has a 3 in it.
    To look this number up I use this formula
    =IF(ISBLANK(Race 1::Data :: A6),"",Race 1::Data :: A6)
    which shows nothing if the cell it looks at is blank.
    The problem I have is that when I sort my list of car numbers into numerical order (1,2,3,4,5,6,7,8,9,10), table 1, table 2 etc. do not update.
    So after sorting, table 1 still has 5 in it and table 2 still has 3 in it.
    How can I do it so that after sorting the list of car numbers, table 1 has 1 in it and table 2 has 2 in it?
    I have tried making the cells in table 1 and table 2 absolute row/column references, but this did not work.
    Thanks
    Charlie

    Charlie,
    When your smaller table uses an expression like your "=IF(ISBLANK(Race 1::Data :: A6),"",Race 1::Data :: A6)", it will continue to access the same data even if you move that data about by sorting or other means. Numbers does this, as do all the other spreadsheet programs. They do it by tracking the movement and adjusting the references.
    A way to avoid this is to use another form of addressing, such as indirect or indexed addressing. For example, you could have used:
    =IF(ISBLANK(OFFSET(Data :: $A$1, 5,0)),"",OFFSET(Data :: $A$1, 5,0))
    Now this particular expression will always grab the content of the physical cell Data :: A6, no matter how the data is sorted.
    Regards,
    Jerry

  • Problem with table-indexes when using select-options in select

    Hello experts,
    Is it right that table indexes will not be used if you use select-options to select data from the database?
    in detail:
    I have built a table index for one of our DB tables and tested it via a test program. The first test with '=' comparisons worked fine. Every key field of the index was used; checked via ST05!
    e.g.:    SELECT * FROM TABLEA INTO ITAB WHERE keya = '1' AND keyb = '2' AND keyc = '3'.
    Now I started the test with select-options
    e.g.:   SELECT * FROM TABLEA INTO ITAB WHERE keya IN seltabA  AND keyb IN seltabB AND keyc IN seltabC.
    First of all i just filled the seltabs with only 1 value:    eg:  seltabA=      SIGN = 'I'   OPTION = 'EQ'   LOW = '1'     etc.
    Everything worked fine. Every key of the index was used.
    But now I put more than one entry in the seltabs, e.g.
    seltabA:      SIGN = 'I'   OPTION = 'EQ'   LOW = '1'
                       SIGN = 'I'   OPTION = 'EQ'   LOW = '2'   
                       SIGN = 'I'   OPTION = 'EQ'   LOW = '3'
    From then on, the index was not used completely (with all key fields).
    Isn't that strange? How can I use select-options or select ranges and still have the complete table index used?
    Thanks a lot,
    Marcel

    Hi Hermann,
    I hope this helps.
    This is the first one, which uses the complete index:
    SELECT                                                                     
      "KOWID" , "LIFNR" , "KLPOS" , "ORGID" , "KOART" , "MATNR" , "GLTVON" ,   
      "GLTBIS" , "WERT" , "ABLIF" , "FAKIV" , "AENAM" , "AEDAT" , "AFORM" ,    
      "HERSTELLER" , "ARTGRP" , "OE_FREITXT" , "ARTFREITEXT" , "STATUS" ,      
      "TERDAT"                                                                 
    FROM                                                                       
      "/dbcon/01_con"                                                       
    WHERE                                                                      
      "MANDT" = ? AND "LIFNR" = ? AND "ORGID" = ? AND "KOART_BASIS" = ? AND    
      "STATUS" = ? AND "GEWAEHR_KOWID" < ? AND ( "STATUS" = ? OR "STATUS" = ? OR
      "STATUS" = ? )  WITH UR                 
    RESULT: 5 IXSCAN /dbcon/01_con05 #key columns:  4
    And here is the second one, which does not use the complete index. The three ranges are each filled with two values. Remember: when I fill them each with only one value, the result is the same as you can see above (/dbcon/01_con05 #key columns:  4):
    SELECT                                                                     
      "KOWID" , "LIFNR" , "KLPOS" , "ORGID" , "KOART" , "MATNR" , "GLTVON" ,   
      "GLTBIS" , "WERT" , "ABLIF" , "FAKIV" , "AENAM" , "AEDAT" , "AFORM" ,    
      "HERSTELLER" , "ARTGRP" , "OE_FREITXT" , "ARTFREITEXT" , "STATUS" ,      
      "TERDAT"                                                                 
    FROM                                                                       
      "/dbcon/01_con"                                                       
    WHERE                                                                      
      "MANDT" = ? AND "LIFNR" IN ( ? , ? ) AND "ORGID" IN ( ? , ? ) AND        
      "KOART_BASIS" IN ( ? , ? ) AND "GEWAEHR_KOWID" < ? AND ( "STATUS" = ? OR 
      "STATUS" = ? OR "STATUS" = ? )  WITH UR                                  
    And here is the access plan:
       0 SELECT STATEMENT ( Estimated Costs =  5,139E+01 [timerons] )
    5     1 RETURN
    5     2 NLJOIN
    5     3 [O] TBSCAN
    5     4 SORT
    5     5 TBSCAN GENROW
    5     6 [I] FETCH /dbcon/01_con
    5     7 IXSCAN /dbcon/01_con05 #key columns:  2
    As you can see, only 2 key fields were used for the index access!
    Any idea?
    Kind regards,
    Marcel
    Edited by: Marcel Ebert on Jul 28, 2009 5:25 PM

  • Question about global temp tables

    I have global temporary table with ON COMMIT setting ON COMMIT PRESERVE ROWS. E.g.:
    CREATE GLOBAL TEMPORARY TABLE admin_work_area
            (startdate DATE,
             enddate DATE,
             class CHAR(20))
          ON COMMIT PRESERVE ROWS;
    On application start, a procedure inserts data into the table; on application end, a DELETE statement is used to empty the table.
    Interestingly, if the application is started again (in the same session!) the deleted rows appear again in the table before the insert procedure is called. So after the insert procedure runs, the data is doubled... :(
    So my question is:
    Does COMMIT in this constellation roll back the deleted rows?
    That sounds illogical to me, but it appears to be like that...
    Message was edited by:
    Faust
    Edit: ON COMMIT setting

    Are you sure that the rows somehow just appear back and it's not the application which inserts them twice?
    Yes, I'm sure; there is only one call of the insert procedure (on application start).
    Are you using autonomous transactions for those inserts by any chance?
    No.
    SID is just an index into the session fixed array, so the only way to get the same SID in an instance is when the previous session ends. Each session array slot contains a SERIAL# field which is zero at instance start and is incremented every time the slot is reused by the next session. So, as long as your session exists, it is impossible that someone else gets the same SID + SERIAL# combination in an instance. Note that SESSION_ADDR and SESSION_NUM give you the address and SERIAL# of the session owning a temporary segment.
    The original session still exists...
    Thank you, Tanel, for your reply!
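    As a side note, a quick single-session test against the admin_work_area table from the question shows that COMMIT does not bring deleted rows back; if rows reappear, they must have been inserted again or deleted in a different session:
        INSERT INTO admin_work_area VALUES (SYSDATE, SYSDATE, 'TEST');
        COMMIT;
        DELETE FROM admin_work_area;
        COMMIT;
        -- With ON COMMIT PRESERVE ROWS the data survives COMMIT, but a committed
        -- DELETE is final for this session:
        SELECT COUNT(*) FROM admin_work_area;   -- returns 0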
