Hash Table efficiency question
Hi experts,
In my program, I want to read in a lot of strings and store the occurrence count of each string. I found that a hash table is the best and most efficient option, but the problem is that a hash table only stores one value per key.
So, I either have to:
1) Store an array object in each hash table entry, i.e. [String][Occurrence]
2) Create two hash tables based on the hash code of the string.
For 2) I am planning to store all distinct Strings in one hashtable using the string as the key, then create another hashtable to store the occurrence counts, again keyed by the String.
My question is that:
1) Which implementation is more efficient: constantly creating a String array for each entry, or creating two hashtables?
2) Is the second implementation possible? Would a key be mapped to a different cell in each hashtable, even though the two hashtables use the same hashcode and the same size?
Thank you very much for your help.
Kevin
I am wondering what it is you are trying to do, but I am assuming you are trying to find the number of occurrences of each word, and then determine which word has the highest or lowest count. You can retrieve the original String keys by using the keys() method of the Hashtable, and use them to traverse the entire table and compare the counts.
If you really want to store another reference for that string, create a simple object:
public final class WordCount {

    /** The Word being counted. @since 1.1 */
    private String _word;

    /** Count for the number of occurrences of the Word. @since 1.1 */
    private int _count;

    /**
     * Creates a new instance of the Word Count object.
     * @param word The Word being counted.
     * @since 1.1
     */
    public WordCount(final String word) {
        super();
        _word = word;
        _count = 0;
    }

    /**
     * Call this method to increment the Count for the Word.
     * @since 1.1
     */
    public void increment() {
        _count++;
    }

    /**
     * Retrieves the word being counted.
     * @return Word being counted.
     * @since 1.1
     */
    public String getWord() {
        return _word;
    }

    /**
     * Returns the Count for the Word.
     * @return Non-negative count for the Word.
     * @since 1.1
     */
    public int getCount() {
        return _count;
    }
}

Then your method can be as follows:
/**
 * Counts the number of occurrences of words within the String.
 * @param someString The String whose words are to be counted.
 * @param pattern Pattern to be used to split the String.
 * @since 1.1
 */
public static final WordCount[] countWords(final String someString, final String pattern) {
    StringTokenizer st = new StringTokenizer(someString, pattern);
    HashMap wordCountMap = new HashMap();
    while (st.hasMoreTokens()) {
        String token = st.nextToken();
        if (wordCountMap.containsKey(token)) {
            ((WordCount) wordCountMap.get(token)).increment();
        } else {
            WordCount count = new WordCount(token);
            count.increment();
            wordCountMap.put(token, count);
        }
    }
    Collection values = wordCountMap.values();
    return (WordCount[]) values.toArray(new WordCount[values.size()]);
}

Now you can create your own comparator classes to sort the entire array of WordCount objects. I hope that helps
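If it helps, here is a quick sketch of that comparator idea. It uses the modern java.util.Comparator conveniences, which go beyond what the original 1.4-era code assumed, and the inlined WordCount stand-in just mirrors the class above:

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortByCount {
    // Minimal stand-in for the WordCount class above.
    static final class WordCount {
        private final String word;
        private int count;
        WordCount(String word) { this.word = word; }
        void increment() { count++; }
        String getWord() { return word; }
        int getCount() { return count; }
    }

    public static void main(String[] args) {
        WordCount a = new WordCount("foo");
        a.increment(); a.increment();
        WordCount b = new WordCount("bar");
        b.increment();
        WordCount[] counts = { b, a };
        // Sort descending by count, so the most frequent word comes first.
        Arrays.sort(counts, Comparator.comparingInt(WordCount::getCount).reversed());
        System.out.println(counts[0].getWord() + "=" + counts[0].getCount());
    }
}
```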
Regards
Jega
Similar Messages
-
How to make sorting in hash table efficient
Hi guys,
I am relatively new to collections.
I have a HashTable which has 100000 records in it, when I sort it, it takes a lot of time to get it done.
As the Hashtable methods are synchronised, I can't even use threads to do this quickly.
Can anyone provide a solution, using multithreading or any other method, to optimize
sorting the Hashtable?
Thanks,
Atul

As already suggested, use TreeMap if you want to build your table in sorted key order. Otherwise, if you want to sort after the fact, you could do one of the following:
SortedSet sortedKeys = new TreeSet(map.keySet());
// or
List sortedKeys = new ArrayList(map.keySet());
Collections.sort(sortedKeys);

Note that this will NOT affect the order in the map. It only affects the order of the extracted keys.
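A small self-contained sketch of the two options (the map contents here are made up for illustration):

```java
import java.util.*;

public class SortedKeysDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("cherry", 3);
        map.put("apple", 1);
        map.put("banana", 2);

        // Option 1: copy the keys into a TreeSet, which keeps them sorted.
        SortedSet<String> sortedKeys = new TreeSet<>(map.keySet());

        // Option 2: copy the keys into a List and sort that explicitly.
        List<String> keyList = new ArrayList<>(map.keySet());
        Collections.sort(keyList);

        // Neither option reorders the map itself; only the copies are sorted.
        System.out.println(sortedKeys.first() + " " + keyList.get(0));
    }
}
```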
Also note that if sorting 100,000 items is taking "a long time," then you probably wrote your own sort algorithm and it's probably O(n^2) or something. -
Question on the use of hash tables
I have created an extract program that extracts data from the bkpf and bseg tables. I am extracting a lot of data which is needed for the auditors, and depending on the selection criteria (company code and date range), this extract takes quite a while to run. I had heard from another developer working on a different project that hash tables are used when dealing with a lot of data. I am not that familiar with hash tables and was wondering if the hash table approach would help with the processing time of my program.
Thanks in advance for the help.

This is only part of the code, but this is the part where the selects and the writing of the file are. Let me know if I have to post the entire program.
FORM f_get_data .
SELECT * INTO TABLE wt_bkpf
FROM bkpf
WHERE bukrs IN s_bukrs
AND belnr IN s_belnr
AND blart IN s_blart
AND bldat IN s_bldat
AND budat IN s_budat
AND bstat IN s_bstat.
IF sy-dbcnt IS INITIAL.
MESSAGE i208(00) WITH text-001.
STOP.
ENDIF.
SORT wt_bkpf BY bukrs belnr gjahr.
SELECT mandt bukrs belnr buzei buzid bschl koart shkzg dmbtr
wrbtr pswbt sgtxt kostl saknr hkont dmbe2
INTO TABLE wt_bseg
FROM bseg
FOR ALL ENTRIES IN wt_bkpf
WHERE bukrs EQ wt_bkpf-bukrs
AND belnr EQ wt_bkpf-belnr.
ENDFORM. " f_get_data
FORM f_split_data .
DATA wlv_index LIKE sy-tabix.
DESCRIBE TABLE wt_bkpf LINES wv_index.
wlv_index = 0.
wv_item_index = 1.
WHILE wlv_index LT wv_index.
ADD 1 TO wlv_index.
CLEAR wt_bkpf.
READ TABLE wt_bkpf INDEX wlv_index.
IF NOT sy-subrc IS INITIAL. EXIT. ENDIF.
LOOP AT wt_bseg FROM wv_item_index
WHERE bukrs EQ wt_bkpf-bukrs
AND belnr EQ wt_bkpf-belnr.
wv_item_index = sy-tabix + 1.
move wt_bkpf-bukrs to ws_bseg_hold-bukrs.
move wt_bkpf-belnr to ws_bseg_hold-belnr.
move wt_bkpf-gjahr to ws_bseg_hold-gjahr.
move wt_bkpf-blart to ws_bseg_hold-blart.
move wt_bkpf-bldat to ws_bseg_hold-bldat.
move wt_bkpf-budat to ws_bseg_hold-budat.
move wt_bkpf-monat to ws_bseg_hold-monat.
move wt_bkpf-cpudt to ws_bseg_hold-cpudt.
move wt_bkpf-cputm to ws_bseg_hold-cputm.
move wt_bkpf-usnam to ws_bseg_hold-usnam.
move wt_bkpf-tcode to ws_bseg_hold-tcode.
move wt_bkpf-xblnr to ws_bseg_hold-xblnr.
move wt_bkpf-bktxt to ws_bseg_hold-bktxt.
move wt_bkpf-waers to ws_bseg_hold-waers.
move wt_bkpf-bstat to ws_bseg_hold-bstat.
move wt_bkpf-ausbk to ws_bseg_hold-ausbk.
move wt_bseg-mandt to ws_bseg_hold-mandt.
move wt_bseg-buzei to ws_bseg_hold-buzei.
move wt_bseg-buzid to ws_bseg_hold-buzid.
move wt_bseg-bschl to ws_bseg_hold-bschl.
move wt_bseg-koart to ws_bseg_hold-koart.
move wt_bseg-shkzg to ws_bseg_hold-shkzg.
move wt_bseg-dmbtr to ws_bseg_hold-dmbtr.
move wt_bseg-wrbtr to ws_bseg_hold-wrbtr.
move wt_bseg-pswbt to ws_bseg_hold-pswbt.
move wt_bseg-sgtxt to ws_bseg_hold-sgtxt.
move wt_bseg-kostl to ws_bseg_hold-kostl.
move wt_bseg-saknr to ws_bseg_hold-saknr.
move wt_bseg-hkont to ws_bseg_hold-hkont.
move wt_bseg-dmbe2 to ws_bseg_hold-dmbe2.
APPEND ws_bseg_hold TO wt_bseg_output.
ENDLOOP.
ENDWHILE.
ENDFORM. " f_split_data -
Question about comparing an array of names to a hash table
I'm still learning Powershell but feel like I have the basics now. I have a new project I'm working on and want some input as the best way to do this:
The Problem:
Let's say you have a list of several hundred video game titles and the dollar value of each in a text or two column CSV file.
The example CSV file looks likes this:
Game, Price
Metroid, $15.00
The Legend of Zelda!, $12.00
Mike Tyson's Punch-Out!, $18.00
Super Mario Bros., $16.00
Kung Fu, $7.00
You have another list of just the video game titles, this most likely is just a text file with each title listed on its own line.
The example text file looks like this:
Kung Fu
Metroid
Mike Tysons Punch-Out
Legend of Zelda
What I think would happen in the Script:
Use import-csv and create a hash table that will contain a key = Title and the value = the price.
Use import-csv and create an array for the title names.
Foreach loop through each Game Title and match the value against the Hash Table, if there's a match found, put that into another array that will later add all prices of each item and give you a total sum.
The challenge:
So far when I try and do one line examples of comparing names against the hash table it seems to only work with exact name matches. In the above example I've purposely made the game titles slightly different because in the real world people just write things
differently.
With that said, I've tried using the following single line to match things up and it only seems to work if the values match exactly.
$hash_table.ContainsKey("Game Title")
Is there a regex I should use to change the input of the game titles before creating the hash table or doing the compare? Is there another matching operator that is better and matching with String values that have slightly different grammar. An example would
be the game "The Legend of Zelda". Sometimes people just put "Legend of Zelda", that's close but not exact. I think using a regex to remove extra spaces and symbols would work, but what about for matching of words or letters??
Any ideas would be very helpful, and thanks!

There's no pat answer for this.
You can create an array from the hash table keys:
$hashtable = @{"The Legend of Zelda" = 15.00}
$titles = $hashtable.getenumerator() | select -ExpandProperty Name
And then test that using a wildcard match:
$titles -like "*Game Title*"
and see if it returns just one match. If it does then use that match to do your lookup in the hash table. If it returns 0, or more than one match then you need to check the spelling or qualify the title search some more.
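Another common approach, sketched here in Java rather than PowerShell, is to normalize titles before using them as hash keys. The normalize rules below (lower-case, strip punctuation, drop a leading "the") are just one illustrative choice, not a complete solution to fuzzy title matching:

```java
import java.util.*;

public class TitleLookup {
    // Lower-case the title and strip everything but letters, digits and spaces,
    // so "The Legend of Zelda!" and "Legend of Zelda" collide on purpose.
    static String normalize(String title) {
        String s = title.toLowerCase().replaceAll("[^a-z0-9 ]", "").trim();
        return s.startsWith("the ") ? s.substring(4) : s;
    }

    public static void main(String[] args) {
        Map<String, Double> prices = new HashMap<>();
        prices.put(normalize("The Legend of Zelda!"), 12.00);
        prices.put(normalize("Mike Tyson's Punch-Out!"), 18.00);

        // Both lookups succeed despite the spelling differences.
        System.out.println(prices.get(normalize("Legend of Zelda")));
        System.out.println(prices.get(normalize("Mike Tysons Punch-Out")));
    }
}
```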
-
Hash Table/Binary Search Tree question
I'm creating a hash table that uses a Binary Search Tree in each element to handle collisions. I isolated the problem to two lines of code; here is the relevant code (there is a working method insert(String s) in the BinarySearchTree class):
x = new BinarySearchTree[size];
x[0].insert("pppppp");

When I try to insert ANY String into ANY array element of x (not just 0), I get a NullPointerException. Keep in mind that BinarySearchTree works perfectly when it's not an array.
Message was edited by:
rpelep

x = new BinarySearchTree[size];
x[0].insert("pppppp");

You should know that when you allocate an array of objects, each entry is initialized to null.
Instead, you must do this:
x = new BinarySearchTree[size];
for(int i=0; i<size; i++) {
x[i] = new BinarySearchTree();
}

After that, you can then do

x[0].insert("pppppp"); -
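A minimal runnable sketch of that lazy-initialization idea, using TreeSet as a stand-in for the poster's BinarySearchTree class:

```java
import java.util.TreeSet;

public class BucketDemo {
    public static void main(String[] args) {
        // An array of TreeSet "buckets" stands in for the BinarySearchTree array;
        // every slot starts out null until it is explicitly constructed.
        @SuppressWarnings("unchecked")
        TreeSet<String>[] buckets = new TreeSet[8];
        System.out.println(buckets[0] == null);  // no tree exists yet

        String key = "pppppp";
        int slot = Math.abs(key.hashCode()) % buckets.length;
        // Lazy initialization: create the bucket only when first needed.
        if (buckets[slot] == null) {
            buckets[slot] = new TreeSet<>();
        }
        buckets[slot].add(key);
        System.out.println(buckets[slot].size());
    }
}
```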
Hello experts,
I just want to make sure my idea about sorted and hashed tables is correct. Please give tips and suggestions.
In one of my reports, I declared a structure and an itab.
TYPES: BEGIN OF t_mkpf,
mblnr LIKE mkpf-mblnr,
mjahr LIKE mkpf-mjahr,
budat LIKE mkpf-budat,
xblnr(10) TYPE c,
tcode2 LIKE mkpf-tcode2,
cputm LIKE mkpf-cputm,
blart LIKE mkpf-blart,
END OF t_mkpf.
DATA: it_mkpf TYPE SORTED TABLE OF t_mkpf WITH HEADER LINE
WITH NON-UNIQUE KEY mblnr mjahr.
1. I declared it as a sorted table with a non-unique key of MBLNR and MJAHR. Suppose I have 1000 records in my itab: how will it search for a particular record?
2. Is it faster than sorting a standard table then reading it using binary search?
3. How do I use a hashed table effectively? lets say that I want to use hashed type instead of sorted table in my example above.
4. I am currently practicing ABAP Objects, and my problem is that my mindset when programming a report is still a 'procedural' one. How does one use OO concepts effectively?
Again, thank you guys and have a nice day!

Hi Viray,
<b>The different ways to fill an Internal Table:</b>
<b>append&sort</b>
This is the simplest one. I do appends on a standard table and then a sort.
data: lt_tab type standard table of ...
do n times.
ls_line = ...
append ls_line to lt_tab.
enddo.
sort lt_tab.
The thing here is the fast appends and the slow sort - so it is interesting to see how this compares to the following one.
<b>read binary search & insert index sy-tabix</b>
In this variant I also use a standard table, but I do a binary-search read to find the correct insert index, so the table stays sorted as well.
data: lt_tab type standard table of ...
do n times.
ls_line = ...
read table lt_tab transporting no fields with key ... binary search.
if sy-subrc <> 0.
insert ls_line into lt_tab index sy-tabix.
endif.
enddo.
<b>sorted table with non-unique key</b>
Here I used a sorted table with a non-unique key and did inserts...
data: lt_tab type sorted table of ... with non-unique key ...
do n times.
ls_line = ...
insert ls_line into table lt_tab.
enddo.
<b>sorted table with unique key</b>
The coding is the same, except that the sorted table has a unique key.
data: lt_tab type sorted table of ... with unique key ...
do n times.
ls_line = ...
insert ls_line into table lt_tab.
enddo.
<b>hashed table</b>
The last one is the hashed table (always with unique key).
data: lt_tab type hashed table of ... with unique key ...
do n times.
ls_line = ...
insert ls_line into table lt_tab.
enddo.
<b>You Can use this Program to Test:</b>
types:
begin of local_long,
key1 type char10,
key2 type char10,
data1 type char10,
data2 type char10,
data3 type i,
data4 type sydatum,
data5 type numc10,
data6 type char32,
data7 type i,
data8 type sydatum,
data9 type numc10,
dataa type char32,
datab type i,
datac type sydatum,
datad type numc10,
datae type char32,
dataf type i,
datag type sydatum,
datah type numc10,
datai type char32,
dataj type i,
datak type sydatum,
datal type numc10,
datam type char32,
datan type i,
datao type sydatum,
datap type numc10,
dataq type char32,
datar type i,
datas type sydatum,
datat type numc10,
datau type char32,
datav type i,
dataw type sydatum,
datax type numc10,
datay type char32,
dataz type i,
data11 type numc10,
data21 type char32,
data31 type i,
data41 type sydatum,
data51 type numc10,
data61 type char32,
data71 type i,
data81 type sydatum,
data91 type numc10,
dataa1 type char32,
datab1 type i,
datac1 type sydatum,
datad1 type numc10,
datae1 type char32,
dataf1 type i,
datag1 type sydatum,
datah1 type numc10,
datai1 type char32,
dataj1 type i,
datak1 type sydatum,
datal1 type numc10,
datam1 type char32,
datan1 type i,
datao1 type sydatum,
datap1 type numc10,
dataq1 type char32,
datar1 type i,
datas1 type sydatum,
datat1 type numc10,
datau1 type char32,
datav1 type i,
dataw1 type sydatum,
datax1 type numc10,
datay1 type char32,
dataz1 type i,
end of local_long.
data:
ls_long type local_long,
lt_binary type standard table of local_long,
lt_sort_u type sorted table of local_long with unique key key1 key2,
lt_sort_n type sorted table of local_long with non-unique key key1 key2,
lt_hash_u type hashed table of local_long with unique key key1 key2,
lt_apsort type standard table of local_long.
field-symbols:
<ls_long> type local_long.
parameters:
min1 type i default 1,
max1 type i default 1000,
min2 type i default 1,
max2 type i default 1000,
i1 type i default 100,
i2 type i default 200,
i3 type i default 300,
i4 type i default 400,
i5 type i default 500,
i6 type i default 600,
i7 type i default 700,
i8 type i default 800,
i9 type i default 900,
fax type i default 1000.
types:
begin of measure,
what(10) type c,
size(6) type c,
time type i,
lines type i,
reads type i,
readb type i,
fax_s type i,
fax_b type i,
fax(6) type c,
iter type i,
end of measure.
data:
lt_time type standard table of measure,
lt_meantimes type standard table of measure,
ls_time type measure,
lv_method(7) type c,
lv_i1 type char10,
lv_i2 type char10,
lv_f type f,
lv_start type i,
lv_end type i,
lv_normal type i,
lv_size type i,
lv_order type i,
lo_rnd1 type ref to cl_abap_random_int,
lo_rnd2 type ref to cl_abap_random_int.
get run time field lv_start.
lo_rnd1 = cl_abap_random_int=>create( seed = lv_start min = min1 max = max1 ).
add 1 to lv_start.
lo_rnd2 = cl_abap_random_int=>create( seed = lv_start min = min2 max = max2 ).
ls_time-fax = fax.
do 5 times.
do 9 times.
case sy-index.
when 1. lv_size = i1.
when 2. lv_size = i2.
when 3. lv_size = i3.
when 4. lv_size = i4.
when 5. lv_size = i5.
when 6. lv_size = i6.
when 7. lv_size = i7.
when 8. lv_size = i8.
when 9. lv_size = i9.
endcase.
if lv_size > 0.
ls_time-iter = 1.
clear lt_apsort.
ls_time-what = 'APSORT'.
ls_time-size = lv_size.
get run time field lv_start.
do lv_size times.
perform fill.
append ls_long to lt_apsort.
enddo.
sort lt_apsort by key1 key2.
get run time field lv_end.
ls_time-time = lv_end - lv_start.
ls_time-reads = 0.
ls_time-readb = 0.
ls_time-lines = lines( lt_apsort ).
get run time field lv_start.
do.
add 1 to ls_time-readb.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_apsort
assigning <ls_long>
with key key1 = lv_i1
key2 = lv_i2
binary search.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do.
add 1 to ls_time-reads.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_apsort
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_apsort
assigning <ls_long>
with key key1 = lv_i1
key2 = lv_i2
binary search.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_b = lv_end - lv_start.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_apsort
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_s = lv_end - lv_start.
collect ls_time into lt_time.
clear lt_binary.
ls_time-what = 'BINARY'.
ls_time-size = lv_size.
get run time field lv_start.
do lv_size times.
perform fill.
read table lt_binary
transporting no fields
with key key1 = ls_long-key1
key2 = ls_long-key2
binary search.
if sy-subrc <> 0.
insert ls_long into lt_binary index sy-tabix.
endif.
enddo.
get run time field lv_end.
ls_time-time = lv_end - lv_start.
ls_time-reads = 0.
ls_time-readb = 0.
ls_time-lines = lines( lt_binary ).
get run time field lv_start.
do.
add 1 to ls_time-readb.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_binary
assigning <ls_long>
with key key1 = lv_i1
key2 = lv_i2
binary search.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do.
add 1 to ls_time-reads.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_binary
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_binary
assigning <ls_long>
with key key1 = lv_i1
key2 = lv_i2
binary search.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_b = lv_end - lv_start.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_binary
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_s = lv_end - lv_start.
collect ls_time into lt_time.
clear lt_sort_n.
ls_time-what = 'SORT_N'.
ls_time-size = lv_size.
get run time field lv_start.
do lv_size times.
perform fill.
insert ls_long into table lt_sort_n.
enddo.
get run time field lv_end.
ls_time-time = lv_end - lv_start.
ls_time-reads = 0.
ls_time-readb = 0.
ls_time-lines = lines( lt_sort_n ).
get run time field lv_start.
do.
add 1 to ls_time-readb.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_n
assigning <ls_long>
with table key key1 = lv_i1
key2 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do.
add 1 to ls_time-reads.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_n
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_n
assigning <ls_long>
with table key key1 = lv_i1
key2 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_b = lv_end - lv_start.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_n
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_s = lv_end - lv_start.
collect ls_time into lt_time.
clear lt_sort_u.
ls_time-what = 'SORT_U'.
ls_time-size = lv_size.
get run time field lv_start.
do lv_size times.
perform fill.
insert ls_long into table lt_sort_u.
enddo.
get run time field lv_end.
ls_time-time = lv_end - lv_start.
ls_time-reads = 0.
ls_time-readb = 0.
ls_time-lines = lines( lt_sort_u ).
get run time field lv_start.
do.
add 1 to ls_time-readb.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_u
assigning <ls_long>
with table key key1 = lv_i1
key2 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do.
add 1 to ls_time-reads.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_u
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_u
assigning <ls_long>
with table key key1 = lv_i1
key2 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_b = lv_end - lv_start.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_sort_u
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_s = lv_end - lv_start.
collect ls_time into lt_time.
clear lt_hash_u.
ls_time-what = 'HASH_U'.
ls_time-size = lv_size.
get run time field lv_start.
do lv_size times.
perform fill.
insert ls_long into table lt_hash_u.
enddo.
get run time field lv_end.
ls_time-time = lv_end - lv_start.
ls_time-reads = 0.
ls_time-readb = 0.
ls_time-lines = lines( lt_hash_u ).
get run time field lv_start.
do.
add 1 to ls_time-readb.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_hash_u
assigning <ls_long>
with table key key1 = lv_i1
key2 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do.
add 1 to ls_time-reads.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_hash_u
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data11 = sy-index.
endif.
get run time field lv_end.
subtract lv_start from lv_end.
if lv_end >= ls_time-time.
exit.
endif.
enddo.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_hash_u
assigning <ls_long>
with table key key1 = lv_i1
key2 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_b = lv_end - lv_start.
get run time field lv_start.
do fax times.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
read table lt_hash_u
assigning <ls_long>
with key key2 = lv_i1
key1 = lv_i2.
if sy-subrc = 0.
<ls_long>-data21 = sy-index.
endif.
enddo.
get run time field lv_end.
ls_time-fax_s = lv_end - lv_start.
collect ls_time into lt_time.
endif.
enddo.
enddo.
sort lt_time by what size.
write: / ' type | size | time | tab-size | directread | std read | time direct | time std read'.
write: / sy-uline.
loop at lt_time into ls_time.
write: / ls_time-what, '|', ls_time-size, '|', ls_time-time, '|', ls_time-lines, '|', ls_time-readb, '|', ls_time-reads, '|', ls_time-fax_b, '|', ls_time-fax_s.
endloop.
form fill.
lv_i1 = lo_rnd1->get_next( ).
lv_i2 = lo_rnd2->get_next( ).
ls_long-key1 = lv_i1.
ls_long-key2 = lv_i2.
ls_long-data1 = lv_i1.
ls_long-data2 = lv_i2.
ls_long-data3 = lv_i1.
ls_long-data4 = sy-datum + lv_i1.
ls_long-data5 = lv_i1.
ls_long-data6 = lv_i1.
ls_long-data7 = lv_i1.
ls_long-data8 = sy-datum + lv_i1.
ls_long-data9 = lv_i1.
ls_long-dataa = lv_i1.
ls_long-datab = lv_i1.
ls_long-datac = sy-datum + lv_i1.
ls_long-datad = lv_i1.
ls_long-datae = lv_i1.
ls_long-dataf = lv_i1.
ls_long-datag = sy-datum + lv_i1.
ls_long-datah = lv_i1.
ls_long-datai = lv_i1.
ls_long-dataj = lv_i1.
ls_long-datak = sy-datum + lv_i1.
ls_long-datal = lv_i1.
ls_long-datam = lv_i1.
ls_long-datan = sy-datum + lv_i1.
ls_long-datao = lv_i1.
ls_long-datap = lv_i1.
ls_long-dataq = lv_i1.
ls_long-datar = sy-datum + lv_i1.
ls_long-datas = lv_i1.
ls_long-datat = lv_i1.
ls_long-datau = lv_i1.
ls_long-datav = sy-datum + lv_i1.
ls_long-dataw = lv_i1.
ls_long-datax = lv_i1.
ls_long-datay = lv_i1.
ls_long-dataz = sy-datum + lv_i1.
ls_long-data11 = lv_i1.
ls_long-data21 = lv_i1.
ls_long-data31 = lv_i1.
ls_long-data41 = sy-datum + lv_i1.
ls_long-data51 = lv_i1.
ls_long-data61 = lv_i1.
ls_long-data71 = lv_i1.
ls_long-data81 = sy-datum + lv_i1.
ls_long-data91 = lv_i1.
ls_long-dataa1 = lv_i1.
ls_long-datab1 = lv_i1.
ls_long-datac1 = sy-datum + lv_i1.
ls_long-datad1 = lv_i1.
ls_long-datae1 = lv_i1.
ls_long-dataf1 = lv_i1.
ls_long-datag1 = sy-datum + lv_i1.
ls_long-datah1 = lv_i1.
ls_long-datai1 = lv_i1.
ls_long-dataj1 = lv_i1.
ls_long-datak1 = sy-datum + lv_i1.
ls_long-datal1 = lv_i1.
ls_long-datam1 = lv_i1.
ls_long-datan1 = sy-datum + lv_i1.
ls_long-datao1 = lv_i1.
ls_long-datap1 = lv_i1.
ls_long-dataq1 = lv_i1.
ls_long-datar1 = sy-datum + lv_i1.
ls_long-datas1 = lv_i1.
ls_long-datat1 = lv_i1.
ls_long-datau1 = lv_i1.
ls_long-datav1 = sy-datum + lv_i1.
ls_long-dataw1 = lv_i1.
ls_long-datax1 = lv_i1.
ls_long-datay1 = lv_i1.
ls_long-dataz1 = sy-datum + lv_i1.
endform. "fill
Thanks & Regards,
YJR. -
I would like to know if there is a method of class Hashtable to update values in a Hash Table. I know about methods "put", "get", "remove" and "contains", but what I need is to modify the field "quantity" of a certain row (element) in the Hash Table.
Do Hash Tables allow updates? I'm surprised my book does not mention any method for updating a Hash Table.
Thank you in advance for any information.

put() will replace whatever exists under the specified key, if anything, with whatever new value you put. If you want to modify the contents of an object stored in the Hashtable, you'd have to get the object, modify it, and put it back. For non-immutable objects you can actually just get and modify: the modification changes that object, and since the table already holds a reference to it, the put is redundant (and would slightly affect performance, although not noticeably for many apps). This wouldn't work for String or Integer, for example; there you'd really just be putting a new String or Integer, so there's no need for get, just do put. Get it?
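A short sketch of both patterns just described (the "widget" key and the values are made up for illustration):

```java
import java.util.Hashtable;

public class UpdateDemo {
    public static void main(String[] args) {
        // Pattern 1: Integer values are immutable, so "updating" means
        // getting the old value and putting a replacement back.
        Hashtable<String, Integer> counts = new Hashtable<>();
        counts.put("widget", 1);
        counts.put("widget", counts.get("widget") + 1);
        System.out.println(counts.get("widget"));

        // Pattern 2: with a mutable holder object, get() returns a reference;
        // modifying it changes what the table already points at, no put needed.
        Hashtable<String, int[]> qty = new Hashtable<>();
        qty.put("widget", new int[] { 1 });
        qty.get("widget")[0]++;
        System.out.println(qty.get("widget")[0]);
    }
}
```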
-
Hello,
I am debating whether or not to use hash tables for something I need to do. Here is the scenario:
I was given a list of data, this data contains a string of indexes.
Throughout my program, I took that list of data and had to sort it. Now after I have done the calculations I needed, I need to re-output the
data from the original file in the same order, using some new information that I have retrieved.
Basically, here is my question: should I iterate through the original file, searching for the index on each line, then do a manual search through my manipulated sorted list which contains the information I want?
OR
Should I learn to use hashing, hash the indexes in the list and in the sorted list, and find matches? To be honest I'm not too sure how hashing works or how it can benefit me.

Don't worry about efficiency now. Worry about correctness. You're far more likely to make your program unusably incorrect by chasing efficiency than to make it unusably inefficient by chasing correctness.
Anyway, I don't see how hashing has any relevance to this issue.
You could just create a list of courses when you read in the input, and then make another list for purposes of sorting. Then when you produce output, use the original list. Actually I'm not convinced that you even need to make that second sorted list -- the efficiency gains are probably minuscule or possibly even negative -- but whatever. Since the objects in both lists are the same, changes you make to the objects in the second list are present in the objects in the first list. -
What about using a partial key to loop at a hashed table?
For example, I want to loop over an internal table of BSID according to BKPF.
data itab_bsid type hashed table of BSID with unique key bukrs belnr gjahr buzid.
Loop at itab_bsid where bukrs = wa_bkpf-bukrs
and belnr = wa_bkpf-belnr
and gjahr = wa_bkpf-gjahr.
endloop.
I know that if you use the full key to access this hashed table it is certainly quick; my question is, when I use a partial key of this internal hashed table to loop over it, how is the performance?

Another question: in this case (BSID has many, many records), which is better in performance, a sorted table or a hashed table?

You can't cast between a data reference, which l_tax is, and an object reference, which l_o_tax_code is.
osref is a generic object type and you store a reference to some object in it, right? So the question is: what kind of object do you store there? Please note - this must be an object reference, not a data reference.
i.e
"here goes some class
class zcl_spfli definition.
endclass.
class zcl_spfli implementation.
endclass.
"here is an OBJECT REFERENCE for it, (so I refer to a class) i.e persistent object to table SPFLI
data oref_spfli type ref to zcl_spfli.
"but here I have a DATA REFERENCE (so I refer to some data object) i.e DDIC structure SPFLI
data dref_spfli type ref to spfli.
So my OSREF can hold only oref_spfli; it is not intended for dref_spfli. That's why you get this syntax error. Once you have stored a reference to a zcl_spfli instance in osref, you will be able to dereference it and access the object's attributes.
data: osref type osref.
create object oref_spfli.
osref = oref_spfli.
"now osref holds a reference to the object, you can dereference it
oref_spfli ?= osref.
oref_spfli->some_attribute = ....
OSREFTAB is just a table whose line type is OSREF (so it can hold multiple object references - one in each line).
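For readers more at home in Java, the same upcast/downcast distinction can be sketched as an analogy (an analogy only; ABAP data references have no direct Java equivalent, and the Spfli class here is invented for illustration):

```java
public class CastDemo {
    // Stand-in for a class like zcl_spfli above.
    static class Spfli {
        String carrid = "LH";
    }

    public static void main(String[] args) {
        // Like osref: a generic reference that can hold any object.
        Object osref;

        Spfli orefSpfli = new Spfli();
        osref = orefSpfli;        // widening assignment: always allowed

        // Like ?= : an explicit downcast back to the specific type,
        // required before the attributes become accessible again.
        Spfli back = (Spfli) osref;
        System.out.println(back.carrid);
    }
}
```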
Regards
Marcin -
How to create hashed table in runtime
hi experts
how do I create a hashed table at runtime? Please show me the coding style.
please help me.
regards
subhasis

Hi,
Have a look at the code, and please reward points.
Use Hashed Tables to Improve Performance :
report zuseofhashedtables.
* Program: ZUseOfHashedTables **
* Author: XXXXXXXXXXXXXXXXXX **
* Versions: 4.6b - 4.6c **
* Notes: **
* this program shows how we can use hashed tables to improve **
* the response time. **
* It shows, **
* 1. how to declare hashed tables **
* 2. a cache-like technique to improve access to master data **
* 3. how to collect data using hashed tables **
* 4. how to avoid deletions of unwanted data **
* Results: the test we ran read about 31000 rows from mkpf, 150000 **
* rows from mseg, 500 rows from makt and 400 from lfa1. **
* It filled ht_lst with 24500 rows and displayed them in **
* alv grid format. **
* It needed about 65 seconds to perform this task (with **
* all the db buffers empty). **
* The same program with standard tables needed 140 seconds **
* to run with the same recordset and with buffers filled in. **
* Objective: show a list that consists of all the material movements **
* '101' - '901' for a certain range of dates in mkpf-budat. **
* The columns to be displayed are: **
* mkpf-budat, **
* mkpf-mblnr, **
* mseg-lifnr, **
* lfa1-name1, **
* mkpf-xblnr, **
* mseg-zeile, **
* mseg-charg, **
* mseg-matnr, **
* makt-maktx, **
* mseg-erfmg, **
* mseg-erfme. **
* Or show a summary list by matnr - menge. **
* You'll have to create a pf-status called vista - **
* see form set_pf_status for details. **
* tables used
tables: mkpf,
mseg,
lfa1,
makt.
* global hashed tables used
data: begin of wa_mkpf, "header
mblnr like mkpf-mblnr,
mjahr like mkpf-mjahr,
budat like mkpf-budat,
xblnr like mkpf-xblnr,
end of wa_mkpf.
data: ht_mkpf like hashed table of wa_mkpf
with unique key mblnr mjahr
with header line.
data: begin of wa_mseg, " line items
mblnr like mseg-mblnr,
mjahr like mseg-mjahr,
zeile like mseg-zeile,
bwart like mseg-bwart,
charg like mseg-charg,
matnr like mseg-matnr,
lifnr like mseg-lifnr,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
end of wa_mseg.
data ht_mseg like hashed table of wa_mseg
with unique key mblnr mjahr zeile
with header line.
* cache structure for lfa1 records
data: begin of wa_lfa1,
lifnr like lfa1-lifnr,
name1 like lfa1-name1,
end of wa_lfa1.
data ht_lfa1 like hashed table of wa_lfa1
with unique key lifnr
with header line.
* cache structure for material related data
data: begin of wa_material,
matnr like makt-matnr,
maktx like makt-maktx,
end of wa_material.
data: ht_material like hashed table of wa_material
with unique key matnr
with header line.
* result table
data: begin of wa_lst, "
budat like mkpf-budat,
mblnr like mseg-mblnr,
lifnr like mseg-lifnr,
name1 like lfa1-name1,
xblnr like mkpf-xblnr,
zeile like mseg-zeile,
charg like mseg-charg,
matnr like mseg-matnr,
maktx like makt-maktx,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
mjahr like mseg-mjahr,
end of wa_lst.
data: ht_lst like hashed table of wa_lst
with unique key mblnr mjahr zeile
with header line.
data: begin of wa_lst1, " sumary by material
matnr like mseg-matnr,
maktx like makt-maktx,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
end of wa_lst1.
data: ht_lst1 like hashed table of wa_lst1
with unique key matnr
with header line.
* structures for alv grid display.
* itabs
type-pools: slis.
data: it_lst like standard table of wa_lst with header line,
it_fieldcat_lst type slis_t_fieldcat_alv with header line,
it_sort_lst type slis_t_sortinfo_alv,
it_lst1 like standard table of wa_lst1 with header line,
it_fieldcat_lst1 type slis_t_fieldcat_alv with header line,
it_sort_lst1 type slis_t_sortinfo_alv.
* structures
data: wa_sort type slis_sortinfo_alv,
ls_layout type slis_layout_alv.
* global variables
data: g_lines type i.
data: g_repid like sy-repid,
ok_code like sy-ucomm.
* selection-screen
"text: Dates:
select-options: so_budat for mkpf-budat default sy-datum.
"text: Material numbers.
select-options: so_matnr for mseg-matnr.
selection-screen uline.
selection-screen skip 1.
"Text: show summary by material.
parameters: gp_bymat as checkbox default ''.
start-of-selection.
perform get_data.
perform show_data.
end-of-selection.
* FORM get_data *
form get_data.
select mblnr mjahr budat xblnr
into table ht_mkpf
from mkpf
where budat in so_budat. " make use of std index.
* have we retrieved data from mkpf?
describe table ht_mkpf lines g_lines.
if g_lines > 0.
* if true, then retrieve all related records from mseg.
* Doing it this way we make sure that the access is by primary key
* of mseg.
* The reason is that it is faster to filter them in memory
* than to allow the db server to do it.
select mblnr mjahr zeile bwart charg
matnr lifnr erfmg erfme
into table ht_mseg
from mseg
for all entries in ht_mkpf
where mblnr = ht_mkpf-mblnr
and mjahr = ht_mkpf-mjahr.
endif.
* fill ht_lst or ht_lst1 according to the user's choice.
if gp_bymat = ' '.
perform fill_ht_lst.
else.
perform fill_ht_lst1.
endif.
endform.
form fill_ht_lst.
refresh ht_lst.
* Example: how to discard unwanted data in an efficient way.
loop at ht_mseg.
* filter unwanted data
check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
check ht_mseg-matnr in so_matnr.
* read header line.
read table ht_mkpf with table key mblnr = ht_mseg-mblnr
mjahr = ht_mseg-mjahr.
clear ht_lst.
* note : this may be faster if you specify field by field.
move-corresponding ht_mkpf to ht_lst.
move-corresponding ht_mseg to ht_lst.
perform read_lfa1 using ht_mseg-lifnr changing ht_lst-name1.
perform read_material using ht_mseg-matnr changing ht_lst-maktx.
insert table ht_lst.
endloop.
endform.
form fill_ht_lst1.
refresh ht_lst1.
* Example: how to discard unwanted data in an efficient way,
* and how to simulate a collect in a faster way.
loop at ht_mseg.
* filter unwanted data
check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
check ht_mseg-matnr in so_matnr.
* note : this may be faster if you specify field by field.
read table ht_lst1 with table key matnr = ht_mseg-matnr
transporting erfmg.
if sy-subrc <> 0. " if matnr doesn't exist in sumary table
" insert a new record
ht_lst1-matnr = ht_mseg-matnr.
perform read_material using ht_mseg-matnr changing ht_lst1-maktx.
ht_lst1-erfmg = ht_mseg-erfmg.
ht_lst1-erfme = ht_mseg-erfme.
insert table ht_lst1.
else." a record was found.
" collect erfmg. To do so, fill in the unique key and add
" the numeric fields.
ht_lst1-matnr = ht_mseg-matnr.
add ht_mseg-erfmg to ht_lst1-erfmg.
modify table ht_lst1 transporting erfmg.
endif.
endloop.
endform.
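The fill_ht_lst1 form above simulates COLLECT with a hashed table: read the summary row by its unique key, insert a new row on a miss, and add the quantity on a hit. The same pattern can be sketched in Java with HashMap.merge (hypothetical data, just an illustration of the technique, not the ABAP runtime):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectDemo {

    // One pass over the line items; merge() inserts the quantity on the first
    // occurrence of a material and adds to the stored sum on every later one -
    // the same insert-or-accumulate logic as the hashed-table collect above.
    static Map<String, Double> sumByMaterial(List<String[]> items) {
        Map<String, Double> totals = new HashMap<>();
        for (String[] item : items) {      // item[0] = matnr, item[1] = erfmg
            totals.merge(item[0], Double.valueOf(item[1]), Double::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        List<String[]> items = Arrays.asList(
            new String[] {"MAT-1", "10"},
            new String[] {"MAT-2", "5"},
            new String[] {"MAT-1", "2.5"});
        System.out.println(sumByMaterial(items).get("MAT-1")); // 12.5
    }
}
```

Because each key lookup is O(1) on average, the whole aggregation stays linear in the number of line items, which is why the hashed-table version outperforms repeated searches in a standard table.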
* implementation of cache for lfa1.
form read_lfa1 using p_lifnr changing p_name1.
read table ht_lfa1 with table key lifnr = p_lifnr
transporting name1.
if sy-subrc <> 0.
clear ht_lfa1.
ht_lfa1-lifnr = p_lifnr.
select single name1
into ht_lfa1-name1
from lfa1
where lifnr = p_lifnr.
if sy-subrc <> 0. ht_lfa1-name1 = 'n/a in lfa1'. endif.
insert table ht_lfa1.
endif.
p_name1 = ht_lfa1-name1.
endform.
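read_lfa1 implements a lazy cache: check the hash table first and go to the database only on a miss, remembering misses too so a bad key costs only one database trip. A minimal Java sketch of the same idea (the lookup function stands in for SELECT SINGLE and is purely hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class VendorNameCache {

    private final Map<String, String> cache = new HashMap<>();
    private final UnaryOperator<String> dbSelect; // stands in for SELECT SINGLE
    int dbCalls = 0;                              // for demonstration only

    VendorNameCache(UnaryOperator<String> dbSelect) {
        this.dbSelect = dbSelect;
    }

    // Hit the hash table first; query the "database" only on a miss.
    // Misses are cached too ("n/a in lfa1"), mirroring read_lfa1 above.
    String name1(String lifnr) {
        return cache.computeIfAbsent(lifnr, k -> {
            dbCalls++;
            String name = dbSelect.apply(k);
            return name != null ? name : "n/a in lfa1";
        });
    }

    public static void main(String[] args) {
        VendorNameCache c =
            new VendorNameCache(k -> k.equals("4711") ? "Acme Corp" : null);
        c.name1("4711");
        c.name1("4711");            // served from the cache, no db call
        c.name1("9999");            // miss, cached as "n/a in lfa1"
        System.out.println(c.dbCalls);        // 2
        System.out.println(c.name1("9999"));  // n/a in lfa1
    }
}
```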
* implementation of cache for material data
form read_material using p_matnr changing p_maktx.
read table ht_material with table key matnr = p_matnr
transporting maktx.
if sy-subrc <> 0.
ht_material-matnr = p_matnr.
select single maktx into ht_material-maktx
from makt
where spras = sy-langu
and matnr = p_matnr.
if sy-subrc <> 0. ht_material-maktx = 'n/a in makt'. endif.
insert table ht_material.
endif.
p_maktx = ht_material-maktx.
endform.
form show_data.
if gp_bymat = ' '.
perform show_ht_lst.
else.
perform show_ht_lst1.
endif.
endform.
form show_ht_lst.
"needed because the FM can't use a hashed table.
it_lst[] = ht_lst[].
perform fill_layout using 'full display'
changing ls_layout.
perform fill_columns_lst.
perform sort_lst.
g_repid = sy-repid.
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = g_repid
i_callback_pf_status_set = 'SET_PF_STATUS'
is_layout = ls_layout
it_fieldcat = it_fieldcat_lst[]
it_sort = it_sort_lst
tables
t_outtab = it_lst
exceptions
program_error = 1
others = 2.
endform.
form show_ht_lst1.
"needed because the FM can't use a hashed table.
it_lst1[] = ht_lst1[].
perform fill_layout using 'Summary by matnr'
changing ls_layout.
perform fill_columns_lst1.
perform sort_lst.
g_repid = sy-repid.
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = g_repid
i_callback_pf_status_set = 'SET_PF_STATUS'
is_layout = ls_layout
it_fieldcat = it_fieldcat_lst1[]
it_sort = it_sort_lst
tables
t_outtab = it_lst1
exceptions
program_error = 1
others = 2.
endform.
form fill_layout using p_window_titlebar
changing cs_layo type slis_layout_alv.
clear cs_layo.
cs_layo-window_titlebar = p_window_titlebar.
cs_layo-edit = 'X'.
cs_layo-edit_mode = space.
endform. " armar_layout_stock
form set_pf_status using rt_extab type slis_t_extab.
* create a new status
* and then select extras -> adjust template -> listviewer
set pf-status 'VISTA'.
endform. "set_pf_status
define add_lst.
clear it_fieldcat_lst.
it_fieldcat_lst-fieldname = &1.
it_fieldcat_lst-outputlen = &2.
it_fieldcat_lst-ddictxt = 'L'.
it_fieldcat_lst-seltext_l = &1.
it_fieldcat_lst-seltext_m = &1.
it_fieldcat_lst-seltext_s = &1.
if &1 = 'MATNR'.
it_fieldcat_lst-emphasize = 'C111'.
endif.
append it_fieldcat_lst.
end-of-definition.
define add_lst1.
clear it_fieldcat_lst1.
it_fieldcat_lst1-fieldname = &1.
it_fieldcat_lst1-outputlen = &2.
it_fieldcat_lst1-ddictxt = 'L'.
it_fieldcat_lst1-seltext_l = &1.
it_fieldcat_lst1-seltext_m = &1.
it_fieldcat_lst1-seltext_s = &1.
append it_fieldcat_lst1.
end-of-definition.
form fill_columns_lst.
* set columns for output.
refresh it_fieldcat_lst.
add_lst 'BUDAT' 10.
add_lst 'MBLNR' 10.
add_lst 'LIFNR' 10.
add_lst 'NAME1' 35.
add_lst 'XBLNR' 15.
add_lst 'ZEILE' 5.
add_lst 'CHARG' 10.
add_lst 'MATNR' 18.
add_lst 'MAKTX' 30.
add_lst 'ERFMG' 17.
add_lst 'ERFME' 5.
add_lst 'MJAHR' 4.
endform.
form fill_columns_lst1.
* set columns for output.
refresh it_fieldcat_lst1.
add_lst1 'MATNR' 18.
add_lst1 'MAKTX' 30.
add_lst1 'ERFMG' 17.
add_lst1 'ERFME' 5.
endform.
Regards,
Ameet -
Bitmap..hash tables?
Hi guys, just 2 questions:
1. There are four types of compression (bitmap, run-length encoding (RLE), zlib, and index-value), which affect disk space differently. What is the effect of choosing 'bitmap' on calc script optimization?
2. What are hash tables?
Edited by: user3952257 on 29-Apr-2010 07:56
If you select bitmap, in newer versions it will choose bitmap or index-value pair block by block, depending on the contents of the block. If you select RLE, it will choose the best of the three. Because of that, I'm not sure why RLE is not the default.
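On question 2: a hash table is a key/value store that runs each key through a hash function to pick a bucket, so lookup and insert take O(1) time on average instead of scanning the whole collection. A minimal Java illustration (the keys and values are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class HashTableIntro {
    public static void main(String[] args) {
        // The hash function maps each key to a bucket index; equal keys
        // always land in the same bucket, so a second put with the same
        // key replaces the stored value rather than adding a duplicate.
        Map<String, Integer> blockCount = new HashMap<>();
        blockCount.put("Jan", 120);
        blockCount.put("Feb", 95);
        blockCount.put("Jan", 130);                   // replaces the old value
        System.out.println(blockCount.get("Jan"));         // 130
        System.out.println(blockCount.containsKey("Mar")); // false
    }
}
```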
-
Hash table and function module input
Hi ABAP Expert,
Please advise what happens if I pass an internal table (hashed table) as the TABLES input of a function module.
So inside the function module, is this table still a hashed table or just a normal internal table?
Thank you and Regards
Fernand
Typing of such a parameter should be either generic (i.e. ANY TABLE) or fully specified (HASHED/SORTED/STANDARD TABLE). In both cases, when you pass e.g. a HASHED table to that formal parameter, the formal parameter inherits the dynamic type of the actual parameter.
This means that inside the function module you will not be able to use statements that are "banned" for a HASHED table, i.e. no appending to this table. The system must be fully convinced about the type of the passed parameter to allow certain accesses. Without that knowledge it won't let you through the syntax checker, or it will trigger a runtime error.
I.e
"1) parameter is typed
CHANGING
C_TAB type ANY TABLE
"here you can't use STANDARD/SORTED table specific statements as the dynamic type of param might be HASHED TABLE
append ... to c_tab. "error during runtime
"2) parameter is typed
CHANGING
C_TAB type HASHED TABLE
"here system explicitly knows that dynamic type is the same as static one so you can append to this table too
append ... to c_tab. "syntax error before runtime
So the answer to your question
so insite the function module is this table still hashtable type or just normal internal table ?
is...
During the syntax check, the system takes the static type of the table and complains if a table-related operation is not allowed for that kind.
During runtime, the system takes the dynamic type of the table and checks whether a particular statement is allowed for that kind of table; if not, it triggers an exception.
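A loose Java analogy for this static-vs-dynamic typing of a table parameter (not ABAP semantics, just to illustrate the idea): a parameter declared with a generic interface type only admits the operations of that interface at compile time, while a cast to a more specific type is checked against the dynamic type at runtime.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

public class GenericParamDemo {

    // The static type of c is the generic Collection, so only Collection
    // operations compile here (compare: a formal parameter typed ANY TABLE).
    static void process(Collection<String> c) {
        c.add("x");                   // fine: declared on Collection
        // ((java.util.List<String>) c).get(0);
        //     would compile, but throws ClassCastException at runtime
        //     when the dynamic type is HashSet - analogous to ABAP
        //     checking the dynamic table kind during runtime
    }

    public static void main(String[] args) {
        Set<String> s = new HashSet<>();  // dynamic type: HashSet
        process(s);                       // passed via the generic interface
        System.out.println(s.contains("x")); // true
    }
}
```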
Regards
Marcin -
Cluster tables , pool tables ,hashed tables?
Can you give me examples of cluster tables, pool tables, and hashed tables?
<b>I. Transparent tables (BKPF, VBAK, VBAP, KNA1, COEP)</b>
Allows secondary indexes (SE11->Display Table->Indexes)
Can be buffered (SE11->Display Table->technical settings) Heavily updated tables should not be buffered.
<b>
II. Pool Tables (match codes, look up tables)</b>
Should be accessed via primary key or
Should be buffered (SE11->Display Table->technical settings)
No secondary indexes
Select * is Ok because all columns retrieved anyway
<b>III. Cluster Tables (BSEG,BSEC)</b>
Should be accessed via primary key - very fast retrieval otherwise very slow
No secondary indexes
Select * is Ok because all columns retrieved anyway. Performing an operation on multiple rows is more efficient than single row operations. Therefore you still want to select into an internal table. If many rows are being selected into the internal table, you might still like to retrieve specific columns to cut down on the memory required.
Statistical SQL functions (SUM, AVG, MIN, MAX, etc) not supported
Can not be buffered
<b>IV. Buffered Tables (includes both Transparent & Pool Tables)</b>
While buffering database tables in program memory (SELECT into internal table) is generally a good idea for performance, it is not always necessary. Some tables are already buffered in memory. These are mostly configuration tables. If a table is already buffered, then a select statement against it is very fast. To determine if a table is buffered, choose the 'technical settings' soft button from the data dictionary display of a table (SE12). Pool tables should all be buffered.
regards,
srinivas
<b>*reward for useful answers*</b> -
The Born collection classes don't have an alternative implementation of
HashTable. Instead they are wrappers around the Forté Framework classes
using Array, HashTable etc.
Raymond Blum wrote:
>
Eric
Although I have not looked at them in a while, I remember the Born
Collection Classes having some HashTable and Iterator functionality that
might be worth looking at as an alternative.
The last version I had is 1.2 and was offered under the GNU license,
there may be something newer, check out
http://www.born.com
or email
[email protected]
On Thu, 6 May 1999, Fingerhut, Eric wrote:
We have been using the hashtable class in our application, and have recently
hit performance problems that lead us to the following questions:
1) Forte's hashing algorithm - we are concerned that Forte's hashing
algorithm may not be ideal or even adequate for our data; does anyone know
how it works?
2) Hash table performance - any ideas on how to gauge it?
3) Is anyone aware of a binary tree implementation that might be
available?
Dr. Thomas Kunst mailto:[email protected]
sd&m GmbH & Co. KG http://www.sdm.de
software design & management
Thomas-Dehler-Str. 27, 81737 Muenchen, Germany
Tel +49 89 63812-221 Fax -444
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/> -
I'm trying to understand what you have and what you are trying to do. First of all, the Web PL/SQL generator requires you to generate Table APIs (TAPIs) for every table referenced in a module component. If you created one based on a view, the only way to get it to work is to generate a TAPI on the view, which Designer doesn't do natively. I've fooled Designer into generating a view TAPI by creating a table definition that looks exactly like the view and has the same name, and generating the TAPI for that table. Even then, I have to hand-modify the resulting TAPI to remove references to ROWID, since views don't have rowids.
I've never tried generating TAPI against a materialized view (MV) - it might work, since there really is a table to instantiate the MV, but it might require the same work around as the view.
As for the reference in the module component definition, I think you have to delete the old table reference and create a new one. I'm not sure what you are doing with "copy object". Remapping is usually done when you are copying an object from one application system to another, and want the references to point at equivalent objects in the new application system, not the objects in the old one. -
We have been using the hashtable class in our application, and have recently
hit performance problems that lead us to the following questions:
1) Forte's hashing algorithm - we are concerned that Forte's hashing
algorithm may not be ideal or even adequate for our data; does anyone know
how it works?
2) Hash table performance - any ideas on how to gauge it?
3) Is anyone aware of a binary tree implementation that might be
available?
Much thanks in advance,
Eric