Problem using HASH TABLES
I'm trying to use a hashed table in a program, but I'm getting an error I can't understand...
DATA gt_kna1 TYPE HASHED TABLE OF kna1 WITH UNIQUE KEY kunnr.
SELECT kunnr INTO TABLE gt_kna1
FROM kna1.
I'm getting this error:
An entry was to be entered into the table "\PROGRAM=ZTABLA_HASH\DATA=GT_KNA1" (which should have had a unique table key (UNIQUE KEY)).
However, there already existed a line with an identical key.
But this is impossible; the field kunnr is the key of table kna1.
Does anybody know why I'm getting this dump?
Regards
Try it this way. Most likely the dump happens because you select only KUNNR into a table typed on the full KNA1 structure: without CORRESPONDING FIELDS the value lands in the first component (MANDT), the KUNNR component stays initial in every row, and the second row already violates the unique key. Select into a table whose row type matches the field list:
TYPES: BEGIN OF TY_KUNNR,
         KUNNR LIKE KNA1-KUNNR,
       END OF TY_KUNNR.
DATA: IT_KUNNR TYPE HASHED TABLE OF TY_KUNNR WITH UNIQUE KEY KUNNR.
SELECT KUNNR INTO TABLE IT_KUNNR
  FROM KNA1.
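As an aside, the unique-key check that produces this short dump can be pictured with a plain dictionary. This is a Python analogy of the ABAP behaviour, not ABAP itself; the helper name is invented:

```python
def insert_unique(table, key, row):
    """Insert into a hash table with a unique key; raise on a duplicate,
    mirroring the ABAP ITAB_DUPLICATE_KEY short dump."""
    if key in table:
        raise KeyError(f"duplicate key {key!r}")
    table[key] = row

gt = {}
insert_unique(gt, "0000001000", {"kunnr": "0000001000"})
try:
    # a second row with the same key is exactly what the dump reports
    insert_unique(gt, "0000001000", {"kunnr": "0000001000"})
except KeyError as e:
    print("dump:", e)
```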
Similar Messages
-
Header, Line Item and Cache Techniques Using Hashed Tables
Hi,
How can I work with header, line item, and a cache techniques using hashed tables?
Thanks,
Shah.
Hi,
Here is an example to clarify the ideas:
In general, every time you have a header -> lines structure, the lines have a unique key consisting of at least the header key plus one or more fields. I'll make use of this fact.
I'll try to put an example of how to work with header -> line items and a cache technique using hashed tables.
Just suppose that you need a list of all the material movements '101'-'901' for a certain range of dates in mkpf-budat. We'll extract these fields:
mkpf-budat
mkpf-mblnr,
mseg-lifnr,
lfa1-name1,
mkpf-xblnr,
mseg-zeile
mseg-charg,
mseg-matnr,
makt-maktx,
mseg-erfmg,
mseg-erfme.
I'll use two caches: one for lfa1-related data and the other for makt-related data. Also, I'll only describe the data-gathering part; displaying the data is left to your own imagination.
The main ideas are:
1. As this is an example, I won't use an inner join. A properly designed join may be faster.
2. I'll use four hashed tables: ht_mkpf, ht_mseg, ht_lfa1 and ht_makt to get data into memory. Then I'll collect all the data I want to list into a fifth table ht_lst.
3. ht_mkpf should have (at least) mkpf's primary key fields : mjahr, mblnr.
4. ht_mseg should have (at least) mseg primary key fields: mjahr mblnr and zeile.
5. ht_lfa1 should have an unique key by lifnr.
6. ht_makt should have an unique key by matnr.
7. I prefer using tables with header lines because it makes the code easier to follow and understand. The overhead isn't significant (in my experience, at least).
Note: When I've needed to work from header to item lines then I added a counter in ht_header that maintains the count of item lines, and I added an id in the ht_lines so I can read straight by key a given item line. But this is very tricky to implement and to follow. (Nevertheless I've programmed it and it works well.)
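The counter/id trick from the note above can be sketched with dictionaries standing in for the hashed tables (a Python illustration; the keys and field names are invented):

```python
# headers: header_key -> {"count": n}; lines: (header_key, line_id) -> row
headers = {}
lines = {}

def add_line(header_key, row):
    hdr = headers.setdefault(header_key, {"count": 0})
    hdr["count"] += 1                          # counter maintained in the header
    lines[(header_key, hdr["count"])] = row    # the id allows direct reads by key

add_line("MB001", {"matnr": "M1"})
add_line("MB001", {"matnr": "M2"})

# read a given item line straight by key, no scan over the lines table:
row = lines[("MB001", 2)]
```

As the note says, this is tricky to keep consistent (every insert and delete must maintain the counter), which is why it is worth it only when header-to-line navigation dominates.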
The data will be read in this sequence:
select data from mkpf into table ht_mkpf
select data from mseg into table ht_mseg, covering all the entries in ht_mkpf
loop at ht_mseg (lines)
filter unwanted records
read cache for lfa1 and makt
fill in ht_lst and collect data
endloop.
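The sequence above, condensed into a Python sketch (dicts stand in for the hashed tables; the sample rows are invented, field names follow the example):

```python
# ht_mkpf: (mblnr, mjahr) -> header fields
ht_mkpf = {("4900000001", "2024"): {"budat": "20240102", "xblnr": "INV-1"}}

# ht_mseg: (mblnr, mjahr, zeile) -> item fields
ht_mseg = {
    ("4900000001", "2024", "0001"): {"bwart": "101", "matnr": "M1", "erfmg": 5},
    ("4900000001", "2024", "0002"): {"bwart": "561", "matnr": "M2", "erfmg": 1},
}

ht_lst = {}
for (mblnr, mjahr, zeile), item in ht_mseg.items():
    if item["bwart"] not in ("101", "901"):   # filter unwanted records
        continue
    header = ht_mkpf[(mblnr, mjahr)]          # read the header straight by key
    ht_lst[(mblnr, mjahr, zeile)] = {**header, **item}
```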
* tables
tables: mkpf, mseg, lfa1, makt.
* internal tables
data: begin of wa_mkpf, "header
mblnr like mkpf-mblnr,
mjahr like mkpf-mjahr,
budat like mkpf-budat,
xblnr like mkpf-xblnr,
end of wa_mkpf.
data ht_mkpf like hashed table of wa_mkpf
with unique key mblnr mjahr
with header line.
data: begin of wa_mseg, " line items
mblnr like mseg-mblnr,
mjahr like mseg-mjahr,
zeile like mseg-zeile,
bwart like mseg-bwart,
charg like mseg-charg,
matnr like mseg-matnr,
lifnr like mseg-lifnr,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
end of wa_mseg.
data ht_mseg like hashed table of wa_mseg
with unique key mblnr mjahr zeile
with header line.
data: begin of wa_lfa1,
lifnr like lfa1-lifnr,
name1 like lfa1-name1,
end of wa_lfa1.
data ht_lfa1 like hashed table of wa_lfa1
with unique key lifnr
with header line.
data: begin of wa_makt,
matnr like makt-matnr,
maktx like makt-maktx,
end of wa_makt.
data: ht_makt like hashed table of wa_makt
with unique key matnr
with header line.
* result table
data: begin of wa_lst, "
budat like mkpf-budat,
mblnr like mseg-mblnr,
lifnr like mseg-lifnr,
name1 like lfa1-name1,
xblnr like mkpf-xblnr,
zeile like mseg-zeile,
charg like mseg-charg,
matnr like mseg-matnr,
maktx like makt-maktx,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
mjahr like mseg-mjahr,
end of wa_lst.
data: ht_lst like hashed table of wa_lst
with unique key mblnr mjahr zeile
with header line.
data: g_lines type i.
select-options: so_budat for mkpf-budat default sy-datum.
select-options: so_matnr for mseg-matnr.
form get_data.
select mblnr mjahr budat xblnr
into table ht_mkpf
from mkpf
where budat in so_budat.
describe table ht_mkpf lines g_lines.
if g_lines > 0.
select mblnr mjahr zeile bwart charg
matnr lifnr erfmg erfme
into table ht_mseg
from mseg
for all entries in ht_mkpf
where mblnr = ht_mkpf-mblnr
and mjahr = ht_mkpf-mjahr.
endif.
loop at ht_mseg.
* filter unwanted data
check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
check ht_mseg-matnr in so_matnr.
* read header line
read table ht_mkpf with table key mblnr = ht_mseg-mblnr
mjahr = ht_mseg-mjahr.
clear ht_lst.
* note: this may be faster if you specify field by field.
move-corresponding ht_mkpf to ht_lst.
move-corresponding ht_mseg to ht_lst.
perform read_lfa1 using ht_mseg-lifnr changing ht_lst-name1.
perform read_makt using ht_mseg-matnr changing ht_lst-maktx.
insert table ht_lst.
endloop.
* implementation of cache for lfa1
form read_lfa1 using p_lifnr changing p_name1.
read table ht_lfa1 with table key lifnr = p_lifnr
transporting name1.
if sy-subrc <> 0.
clear ht_lfa1.
ht_lfa1-lifnr = p_lifnr.
select single name1
into ht_lfa1-name1
from lfa1
where lifnr = p_lifnr.
if sy-subrc <> 0. ht_lfa1-name1 = 'n/a in lfa1'. endif.
insert table ht_lfa1.
endif.
p_name1 = ht_lfa1-name1.
endform.
* implementation of cache for makt
form read_makt using p_matnr changing p_maktx.
read table ht_makt with table key matnr = p_matnr
transporting maktx.
if sy-subrc <> 0.
ht_makt-matnr = p_matnr.
select single maktx into ht_makt-maktx
from makt
where spras = sy-langu
and matnr = p_matnr.
if sy-subrc <> 0. ht_makt-maktx = 'n/a in makt'. endif.
insert table ht_makt.
endif.
p_maktx = ht_makt-maktx.
endform.
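The read_lfa1 / read_makt forms above amount to memoized single-row lookups. In Python terms (db_select_single stands in for SELECT SINGLE and is an assumption, not part of the original program):

```python
ht_lfa1 = {}  # the cache: lifnr -> name1

def db_select_single(lifnr):
    # placeholder for "SELECT SINGLE name1 FROM lfa1"; None means not found
    return {"0000100001": "ACME Corp"}.get(lifnr)

def read_lfa1(lifnr):
    if lifnr not in ht_lfa1:                  # cache miss: hit the database once
        name1 = db_select_single(lifnr)
        ht_lfa1[lifnr] = name1 if name1 is not None else "n/a in lfa1"
    return ht_lfa1[lifnr]                     # every later call is a hash lookup
```

Note that a miss is cached too ("n/a in lfa1"), so a vendor absent from lfa1 costs exactly one database round trip no matter how often it appears in the item lines.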
Reward points if found helpful...
Cheers,
Siva. -
Hello,
I am debating whether or not to use hash tables for something I need to do. Here is the scenario:
I was given a list of data; this data contains a string of indexes.
Throughout my program I took that list of data and sorted it. Now that I have done the calculations I needed, I need to re-output the data from the original file in the same order, using some new information that I have retrieved.
Basically, here is my question: should I iterate through the original file, searching for the index in each line, then do a manual search through my manipulated sorted list, which contains the information I want?
OR
Should I learn to use hashing, hash the indexes in the list, hash the sorted list, and find matches? To be honest, I'm not too sure how hashing works and how it can benefit me.
Don't worry about efficiency now. Worry about correctness. You're far more likely to make your program unusably incorrect by chasing efficiency than to make it unusably inefficient by chasing correctness.
Anyway, I don't see how hashing has any relevance to this issue.
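A quick sketch of keeping two lists over the same objects (Python for brevity, though the question is about Java, where the same holds for lists of references; the Course class is invented):

```python
class Course:
    def __init__(self, index):
        self.index = index
        self.grade = None

original = [Course(3), Course(1), Course(2)]
by_index = sorted(original, key=lambda c: c.index)  # second list, same objects

by_index[0].grade = "A"   # mutate via the sorted list...
# ...and the change is visible through the original list, whose order is intact
```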
You could just create a list of courses when you read the input, and then make another list for purposes of sorting. When you produce output, use the original list. Actually, I'm not convinced that you even need that second sorted list -- the efficiency gains are probably minuscule or possibly even negative -- but whatever. Since the objects in both lists are the same, changes you make to the objects in the second list are present in the objects in the first list. -
How can I use Hash Table when processing the data from cdpos and cdhdr
Hello Guru,
I've a question,
I need to reduce the access time to both cdhdr and cdpos, because I may get a huge number of entries. Processing cdhdr and cdpos data can take many seconds, depending on how much data you need to find.
Hint: does putting instructions inside a form slow down the program?
Also, I want to use a hashed table, and I need a loop over the hashed table inside a form. I know that's not possible directly, but I can declare an index field inside my customized hashed table.
For example :
DO.
  READ TABLE specific_hash_table WITH TABLE KEY oindex = d_oindex.
  " process data
  d_oindex = d_oindex + 1.
UNTIL d_oindex = c_max_lines + 1.
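The same indexed-read idea, sketched with a dict keyed by a synthetic counter (a Python illustration only; it is functionally equivalent to iterating a plain list):

```python
hash_tab = {i: f"row {i}" for i in range(1, 4)}  # oindex -> row data

d_oindex = 1
c_max_lines = len(hash_tab)
out = []
while d_oindex <= c_max_lines:        # DO ... UNTIL
    out.append(hash_tab[d_oindex])    # READ ... WITH TABLE KEY oindex = d_oindex
    d_oindex += 1
```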
Doing this would not necessarily improve performance, because the table then effectively behaves like a standard table; there may be a hash function, but it could be a bad one.
Also I need to use, for example, COUNT(*) to know how many lines the select returns.
FORM find_cdpos_data_with_loop
TABLES
i_otf_objcs TYPE STANDARD TABLE
USING
i_cdhdr_data TYPE HASHED TABLE
i_objcl TYPE j_objnr
* i_obj_lst TYPE any
i_option TYPE c
CHANGING
i_global TYPE STANDARD TABLE.
" Hint: cdpos is a cluster-table
CONSTANTS : objectid TYPE string VALUE 'objectid = i_obj_lst-objectid',
changenr TYPE string VALUE 'changenr = i_obj_lst-changenr',
tabname TYPE string VALUE 'tabname = i_otf_objcs-tablename',
tabnameo1 TYPE string VALUE 'tabname NE ''''',
tabnameo2 TYPE string VALUE 'tabname NE ''DRAD''',
fname TYPE string VALUE 'fname = i_otf_objcs-fieldname'.
DATA : BEGIN OF i_object_list OCCURS 0,
objectclas LIKE cdpos-objectclas,
objectid LIKE cdpos-objectid,
changenr LIKE cdpos-changenr,
END OF i_object_list.
DATA : i_cdpos LIKE TABLE OF i_object_list WITH HEADER LINE,
i_obj_lst LIKE LINE OF i_cdpos.
DATA : tabnamev2 TYPE string.
IF i_option EQ 'X'.
MOVE tabnameo2 TO tabnamev2.
ELSE.
MOVE tabnameo1 TO tabnamev2.
ENDIF.
*LOOP AT i_cdhdr_data TO i_obj_lst.
SELECT objectclas objectid changenr
INTO TABLE i_cdpos
FROM cdpos
FOR ALL ENTRIES IN i_otf_objcs
WHERE objectclas = i_objcl AND
(objectid) AND
(changenr) AND
(tabname) AND
(tabnamev2) AND
(fname).
LOOP AT i_cdpos.
APPEND i_cdpos-objectid TO i_global.
ENDLOOP.
*ENDLOOP.
ENDFORM. "find_cdpos_data
Hey Mart,
This is what I tried; unfortunately I get the same performance as with FOR ALL ENTRIES,
but with a lot more code.
FORM find_cdpos_data
TABLES
i_otf_objcs TYPE STANDARD TABLE
USING
i_objcl TYPE j_objnr
i_obj_lst TYPE any
i_option TYPE c
CHANGING
i_global TYPE STANDARD TABLE.
" Hint: cdpos is a cluster-table
CONSTANTS : objectid TYPE string VALUE 'objectid = i_obj_lst-objectid',
changenr TYPE string VALUE 'changenr = i_obj_lst-changenr',
tabname TYPE string VALUE 'tabname = i_otf_objcs-tablename',
tabnameo1 TYPE string VALUE 'tabname NE ''''',
tabnameo2 TYPE string VALUE 'tabname NE ''DRAD''',
fname TYPE string VALUE 'fname = i_otf_objcs-fieldname'.
* DATA : BEGIN OF i_object_list OCCURS 0,
* objectclas LIKE cdpos-objectclas,
* objectid LIKE cdpos-objectid,
* changenr LIKE cdpos-changenr,
* END OF i_object_list.
** complete modified code [begin]
DATA : BEGIN OF i_object_list OCCURS 0,
objectclas LIKE cdpos-objectclas,
objectid LIKE cdpos-objectid,
changenr LIKE cdpos-changenr,
tabname LIKE cdpos-tabname,
fname LIKE cdpos-fname,
END OF i_object_list.
** complete modified code [end]
DATA : i_cdpos LIKE TABLE OF i_object_list WITH HEADER LINE.
DATA : tabnamev2 TYPE string.
** complete modified code [begin]
FIELD-SYMBOLS : <otf> TYPE ANY,
<otf_field_tabname>,
<otf_field_fname>.
** complete modified code [end]
IF i_option EQ 'X'.
MOVE tabnameo2 TO tabnamev2.
ELSE.
MOVE tabnameo1 TO tabnamev2.
ENDIF.
** SELECT objectclas objectid changenr
** INTO TABLE i_cdpos
* SELECT objectid
* APPENDING CORRESPONDING FIELDS OF TABLE i_global
* FROM cdpos
* FOR ALL ENTRIES IN i_otf_objcs
* WHERE objectclas = i_objcl AND
* (objectid) AND
* (changenr) AND
* (tabname) AND
* (tabnamev2) AND
* (fname).
** complete modified code [begin]
SELECT objectid tabname fname
INTO CORRESPONDING FIELDS OF TABLE i_cdpos
FROM cdpos
WHERE objectclas = i_objcl AND
(objectid) AND
(changenr) AND
(tabnamev2).
ASSIGN LOCAL COPY OF i_otf_objcs TO <otf>.
LOOP AT i_cdpos.
LOOP AT i_otf_objcs INTO <otf>.
ASSIGN COMPONENT 'TABLENAME' OF STRUCTURE <otf> TO <otf_field_tabname>.
ASSIGN COMPONENT 'FIELDNAME' OF STRUCTURE <otf> TO <otf_field_fname>.
IF ( <otf_field_tabname> EQ i_cdpos-tabname ) AND ( <otf_field_fname> EQ i_cdpos-fname ).
APPEND i_cdpos-objectid TO i_global.
RETURN.
ENDIF.
ENDLOOP.
ENDLOOP.
** complete modified code [end]
** LOOP AT i_cdpos.
** APPEND i_cdpos-objectid TO i_global.
** ENDLOOP.
ENDFORM. "find_cdpos_data -
Getting runtime error while using hash table
Hi,
I have defined an internal table as hashed with a unique key, but while executing the program it gives a dump saying "There is already a line with the same key." My code is:
data: begin of wa_rkrp,
vbeln like vbrk-vbeln,
fkdat like vbrk-fkdat,
fkart like vbrk-fkart,
kunag like vbrk-kunag,
knumv like vbrk-knumv,
inco1 like vbrk-inco1,
spart like vbrk-spart,
netwr like vbrk-netwr,
mwsbk like vbrk-mwsbk,
uepos like vbrp-uepos,
werks like vbrp-werks,
lgort like vbrp-lgort,
end of wa_rkrp.
data lt_rkrp like hashed table of wa_rkrp
with unique key vbeln
with header line.
select vbrk~vbeln
vbrk~fkdat
vbrk~fkart
vbrk~kunag
vbrk~knumv
vbrk~inco1
vbrk~spart
vbrk~netwr
vbrk~mwsbk
vbrp~uepos
vbrp~werks
vbrp~lgort
into table lt_rkrp
from vbrk inner join vbrp
on vbrp~vbeln = vbrk~vbeln
where vbrk~fkdat in s_fkdat
and vbrk~bukrs eq p_bukrs.
Any problem in my select query or with my table definition?
Can anyone please suggest how to rectify this.
Define a unique key with VBELN and POSNR (and add POSNR to the structure):
data lt_rkrp like hashed table of wa_rkrp
with unique key vbeln posnr
with header line.
BTW: Stop using the header line!!! Outdated!!
Edited by: Micky Oestreich on Mar 23, 2009 7:28 AM -
Problems using different tables for base class and derived class
I have a class named SuperProject and another class Project derived from
it. If I let SchemaTool generate the tables without specifying a "table"
extension, I get a single TABLE with all the columns from both classes and
everything works fine. But if I specify a "table" for the derived class,
SchemaTool generates the derived class with just one column (corresponds
to the attribute in derived class). Also it causes problems in using the
Project class in collection attributes.
JDO file:
<jdo>
<package name="jdo">
<class name="Project" identity-type="application"
persistence-capable-superclass="SuperProject">
<extension vendor-name="kodo" key="table" value="PROJECT"/>
</class>
<class name="SuperProject" identity-type="application"
objectid-class="ProjectId">
<field name="id" primary-key="true"/>
</class>
</package>
</jdo>
java classes:
public class Project extends SuperProject {
    String projectSpecific;
}

public class SuperProject {
    BigDecimal id;
    String name;
}
tables generated by SchemaTool:
TABLE SUPERPROJECTSX (IDX, JDOCLASSX, JDOLOCKX, NAMEX);
TABLE PROJECT(PROJECTSPECIFICX)
Thanks,
Justine Thomas
Justine,
This will be resolved in 2.3.4, to be released later this evening.
-Patrick
In article <aofo2q$mih$[email protected]>, Justine Thomas wrote:
Patrick Linskey [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Weird problem using external tables
Hi,
Please move my message to the correct forum if it is not in the right one.
Problem:
I have the following external table definition:
CREATE TABLE blabla (
  AANSLUITINGSNR_BV NUMBER(15),
  BSN_NR NUMBER(9),
  DATUM_AANVANG_UKV DATE,
  DATUM_EIND_UKV DATE,
  BEDR_WAO_WERKN_REFJ NUMBER(18),
  BEDR_WAO_WERKG_REFJ NUMBER(38)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY nood_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
    MISSING FIELD VALUES ARE NULL
    (
      "AANSLUITINGSNR_BV",
      "BSN_NR",
      "DATUM_AANVANG_UKV" DATE "YYYYMMDD",
      "DATUM_EIND_UKV" DATE "YYYYMMDD",
      "BEDR_WAO_WERKN_REFJ",
      "BEDR_WAO_WERKG_REFJ"
    )
  )
  LOCATION ('myfile.csv')
)
REJECT LIMIT UNLIMITED;
My file looks like this:
107031035423278;913487654;20010101;20011231;3231729003;8334195582
128039008378982;117347347;20010101;20011231;1606131689;4468506457
134740829467773;450263934;20010101;20011231;9568986434;526201096
141020256280899;782714783;20010101;20011231;33235678;2398903683
146130347251892;960256796;20010101;20011231;441706397;2622754437
151020336151123;441010528;20010101;20011231;8183416412;6077359802
152888977527618;114572066;20010101;20011231;2370992895;6196483262
But when selecting the following log is stated:
error processing column BEDR_WAO_WERKG_REFJ in row 1 for datafile /oracle/db/noodscenario/myfile.csv
ORA-01722: invalid number
error processing column BEDR_WAO_WERKG_REFJ in row 2 for datafile /oracle/db/noodscenario/myfile.csv
ORA-01722: invalid number
Why is number 8334195582 stated as invalid ?
Thanks,
Coen
Message was edited by:
Coenos1
Which Oracle version and OS are you on? It works perfectly for me:
$ cat myfile.csv
107031035423278;913487654;20010101;20011231;3231729003;8334195582
128039008378982;117347347;20010101;20011231;1606131689;4468506457
134740829467773;450263934;20010101;20011231;9568986434;526201096
141020256280899;782714783;20010101;20011231;33235678;2398903683
146130347251892;960256796;20010101;20011231;441706397;2622754437
151020336151123;441010528;20010101;20011231;8183416412;6077359802
152888977527618;114572066;20010101;20011231;2370992895;6196483262
$ sqlplus test/test
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Aug 24 10:56:30 2007
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> CREATE TABLE blabla (
2 AANSLUITINGSNR_BV NUMBER(15),
3 BSN_NR NUMBER(9),
4 DATUM_AANVANG_UKV DATE,
5 DATUM_EIND_UKV DATE,
6 BEDR_WAO_WERKN_REFJ number(18),
7 BEDR_WAO_WERKG_REFJ NUMBER(38)
8 )
9 ORGANIZATION EXTERNAL (
10 TYPE oracle_loader
11 DEFAULT DIRECTORY work
12 ACCESS PARAMETERS (
13 RECORDS DELIMITED BY NEWLINE
14 FIELDS TERMINATED BY ';'
15 MISSING FIELD VALUES ARE NULL
16 (
17 "AANSLUITINGSNR_BV",
18 "BSN_NR",
19 "DATUM_AANVANG_UKV" DATE "YYYYMMDD",
20 "DATUM_EIND_UKV" DATE "YYYYMMDD",
21 "BEDR_WAO_WERKN_REFJ",
22 "BEDR_WAO_WERKG_REFJ"
23 )
24 )
25 LOCATION ('myfile.csv')
26 )
27* REJECT LIMIT UNLIMITED
SQL> /
Table created.
SQL> select * from blabla;
AANSLUITINGSNR_BV BSN_NR DATUM_AAN DATUM_EIN BEDR_WAO_WERKN_REFJ BEDR_WAO_WERKG_REFJ
107031035423278 913487654 01-JAN-01 31-DEC-01 3231729003 8334195582
128039008378982 117347347 01-JAN-01 31-DEC-01 1606131689 4468506457
134740829467773 450263934 01-JAN-01 31-DEC-01 9568986434 526201096
141020256280899 782714783 01-JAN-01 31-DEC-01 33235678 2398903683
146130347251892 960256796 01-JAN-01 31-DEC-01 441706397 2622754437
151020336151123 441010528 01-JAN-01 31-DEC-01 8183416412 6077359802
152888977527618 114572066 01-JAN-01 31-DEC-01 2370992895 6196483262
7 rows selected.
SQL>
What do you see within the log file? -
Memory leakage in Swing application using hash table to retrieve data
Hi, I have developed an application using Swing, and I am using a Hashtable to retrieve data from the database. The problem is that the memory size of my application increases with every click; I think there is a memory leak.
Would anybody help me to remove this error? -
Hash Table - Runtime error.
Hi,
i am defining an internal table as a hashed table, but it gives a runtime error like below. Sometimes it executes properly and sometimes it gives a dump. Please give me some suggestions to get out of this problem.
Ex:
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLZGIT_MED" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
Error analysis
An entry was to be entered into the table
"\FUNCTION=ZBAPI_GIT_MED_HD_BOTELLAS_RPT\DATA=T_THERAPY_INFO"
(which should have had a unique table key (UNIQUE KEY)).
However, there already existed a line with an identical key.
The insert operation could have occurred as a result of an INSERT or
MOVE command, or in conjunction with a SELECT ... INTO.
The statement "INSERT INITIAL LINE ..." cannot be used to insert several
initial lines into a table with a unique key.
Code: .....................................
TYPES: BEGIN OF X_THERAPY_INFO,
GIT_THERAPY_ID TYPE ZGIT_VBELN_GIT,
CHO_THERAPY_ID TYPE VBELN,
SERVICE_ID TYPE MATNR,
SERVICE_DESC TYPE ARKTX,
FLOW TYPE ZGIT_ZFLOW,
DURATION TYPE ZGIT_ZDURATION,
GIT_PAT_ID TYPE ZGIT_KUNNR_GIT,
CHORUS_PAT_ID TYPE KUNNR,
END OF X_THERAPY_INFO.
DATA:
T_THERAPY_INFO TYPE HASHED TABLE OF X_THERAPY_INFO WITH UNIQUE KEY GIT_THERAPY_ID. " Internal table for extracting the therapy information
* Extract the therapy details
SELECT VBELN_GIT
VBELN
MATNR
ARKTX
ZFLOW
ZDURATION
KUNNR_GIT
KUNNR
INTO TABLE T_THERAPY_INFO
FROM ZGIT_T_VBAK
WHERE KUNNR_GIT NE SPACE
AND KVGR2 IN TR_KVGR2
AND ZDOCTOR IN TR_DOC_SEL_ID
AND ZHOSPITAL IN TR_HOSPITAL_ID
AND ( ZSTATUS_ID NE C_HIDDEN
AND ZSTATUS_ID NE C_DISABLED
AND ZSTATUS_ID NE C_OTHERS )
AND VBELN IN TR_VBELN.
IF SY-SUBRC NE C_0.
Thanks in advance,
Srinivas P
When you are using hashed tables, you cannot have duplicate records (considering the table's key), so DELETE ADJACENT DUPLICATES won't solve it; the program dumps before that.
Check whether you are selecting records that duplicate the fields you put in the key.
Example.
If your key is VBELN, you cannot do it like this:
SELECT a~vbeln b~matnr
INTO TABLE hashed_itab
FROM vbak AS a INNER JOIN vbap as b
ON a~vbeln = b~vbeln
WHERE ...
It will dump if your vbeln has 2 lines in vbap.
Two solutions:
1 - expand the itab key to VBELN and POSNR (for example), or
2 - use SELECT DISTINCT:
SELECT DISTINCT a~vbeln b~matnr
INTO TABLE hashed_itab
FROM vbak AS a INNER JOIN vbap as b
ON a~vbeln = b~vbeln
WHERE ...
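Valter's point can be reproduced directly: keying the join result on VBELN alone collides as soon as one document has two items, while (VBELN, POSNR) stays unique. A Python sketch with hypothetical data:

```python
join_rows = [  # vbrk INNER JOIN vbrp: the header row repeats once per item
    {"vbeln": "90000001", "posnr": "000010", "matnr": "M1"},
    {"vbeln": "90000001", "posnr": "000020", "matnr": "M2"},
]

def build(rows, key_fields):
    table = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key in table:                      # the ABAP runtime dumps here
            raise KeyError(f"duplicate key {key}")
        table[key] = row
    return table

# build(join_rows, ["vbeln"])              would raise: duplicate key
ok = build(join_rows, ["vbeln", "posnr"])  # unique key vbeln posnr works
```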
Regards,
Valter Oliveira. -
Experimenting with hashed table.
Hello experts,
I am currently experimenting with using a hashed table for my report, and this is my first time trying this. Below is my code:
DATA: it_iloa type hashed TABLE OF t_iloa
WITH unique KEY iloan.
Now here is my problem: originally I read it_iloa using its header line. How can I define my hashed table to have a work area/header line?
Again, thanks guys and have a nice day!
Hi,
SORTED and HASHED tables are best used along with a KEY definition, meaning you will declare a key for the internal table.
So, while declaring the table, add WITH UNIQUE KEY column1.
This will create a hash index on column1, and when you read the table, make sure you specify column1 in the key so that the hash index is used and performance improves.
However, unless the table holds a huge amount of data, you will not be able to see the difference.
Regards,
Ravi -
How to create hashed table in runtime
hi experts
how do I create a hashed table at runtime? Please give me the coding style.
please help me.
regards
subhasis
Hi,
Have a look at the code, and please reward points.
Use Hashed Tables to Improve Performance :
report zuseofhashedtables.
* Program: ZUseOfHashedTables **
* Author: XXXXXXXXXXXXXXXXXX **
* Versions: 4.6b - 4.6c **
* Notes: **
* this program shows how we can use hashed tables to improve **
* the response time. **
* It shows, **
* 1. how to declare hashed tables **
* 2. a cache-like technique to improve access to master data **
* 3. how to collect data using hashed tables **
* 4. how to avoid deletions of unwanted data **
* Results: the test we ran read about 31000 rows from mkpf, 150000 **
* rows from mseg, 500 rows from makt and 400 from lfa1. **
* it filled ht_lst with 24500 rows and displayed them in **
* alv grid format. **
* It needed about 65 seconds to perform this task (with **
* all the db buffers empty) **
* The same program with standard tables needed 140 seconds **
* to run with the same recordset and with buffers filled in **
* Objective: show a list that consists of all the material movements **
* '101' - '901' for a certain range of dates in mkpf-budat. **
* the columns to be displayed are: **
* mkpf-budat, **
* mkpf-mblnr, **
* mseg-lifnr, **
* lfa1-name1, **
* mkpf-xblnr, **
* mseg-zeile **
* mseg-charg, **
* mseg-matnr, **
* makt-maktx, **
* mseg-erfmg, **
* mseg-erfme. **
* or show a summary list by matnr - menge **
* You'll have to create a pf-status called vista - **
* See form set_pf_status for details **
* tables used
tables: mkpf,
mseg,
lfa1,
makt.
* global hashed tables used
data: begin of wa_mkpf, "header
mblnr like mkpf-mblnr,
mjahr like mkpf-mjahr,
budat like mkpf-budat,
xblnr like mkpf-xblnr,
end of wa_mkpf.
data: ht_mkpf like hashed table of wa_mkpf
with unique key mblnr mjahr
with header line.
data: begin of wa_mseg, " line items
mblnr like mseg-mblnr,
mjahr like mseg-mjahr,
zeile like mseg-zeile,
bwart like mseg-bwart,
charg like mseg-charg,
matnr like mseg-matnr,
lifnr like mseg-lifnr,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
end of wa_mseg.
data ht_mseg like hashed table of wa_mseg
with unique key mblnr mjahr zeile
with header line.
* cache structure for lfa1 records
data: begin of wa_lfa1,
lifnr like lfa1-lifnr,
name1 like lfa1-name1,
end of wa_lfa1.
data ht_lfa1 like hashed table of wa_lfa1
with unique key lifnr
with header line.
* cache structure for material related data
data: begin of wa_material,
matnr like makt-matnr,
maktx like makt-maktx,
end of wa_material.
data: ht_material like hashed table of wa_material
with unique key matnr
with header line.
* result table
data: begin of wa_lst, "
budat like mkpf-budat,
mblnr like mseg-mblnr,
lifnr like mseg-lifnr,
name1 like lfa1-name1,
xblnr like mkpf-xblnr,
zeile like mseg-zeile,
charg like mseg-charg,
matnr like mseg-matnr,
maktx like makt-maktx,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
mjahr like mseg-mjahr,
end of wa_lst.
data: ht_lst like hashed table of wa_lst
with unique key mblnr mjahr zeile
with header line.
data: begin of wa_lst1, " sumary by material
matnr like mseg-matnr,
maktx like makt-maktx,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
end of wa_lst1.
data: ht_lst1 like hashed table of wa_lst1
with unique key matnr
with header line.
* structures for alv grid display
* itabs
type-pools: slis.
data: it_lst like standard table of wa_lst with header line,
it_fieldcat_lst type slis_t_fieldcat_alv with header line,
it_sort_lst type slis_t_sortinfo_alv,
it_lst1 like standard table of wa_lst1 with header line,
it_fieldcat_lst1 type slis_t_fieldcat_alv with header line,
it_sort_lst1 type slis_t_sortinfo_alv.
* structures
data: wa_sort type slis_sortinfo_alv,
ls_layout type slis_layout_alv.
* global variables
data: g_lines type i.
data: g_repid like sy-repid,
ok_code like sy-ucomm.
* selection-screen
"text: Dates:
select-options: so_budat for mkpf-budat default sy-datum.
"text: Material numbers.
select-options: so_matnr for mseg-matnr.
selection-screen uline.
selection-screen skip 1.
"Text: show summary by material.
parameters: gp_bymat as checkbox default ''.
start-of-selection.
perform get_data.
perform show_data.
end-of-selection.
* FORM get_data *
form get_data.
select mblnr mjahr budat xblnr
into table ht_mkpf
from mkpf
where budat in so_budat. " make use of std index.
* have we retrieved data from mkpf?
describe table ht_mkpf lines g_lines.
if g_lines > 0.
* if true then retrieve all related records from mseg.
* Doing it this way we make sure that the access is by primary key
* of mseg.
* The reason is that it is faster to filter them in memory
* than to allow the db server to do it.
select mblnr mjahr zeile bwart charg
matnr lifnr erfmg erfme
into table ht_mseg
from mseg
for all entries in ht_mkpf
where mblnr = ht_mkpf-mblnr
and mjahr = ht_mkpf-mjahr.
endif.
* fill ht_lst or ht_lst1 according to the user's choice.
if gp_bymat = ' '.
perform fill_ht_lst.
else.
perform fill_ht_lst1.
endif.
endform.
form fill_ht_lst.
refresh ht_lst.
* Example: how to discard unwanted data in an efficient way.
loop at ht_mseg.
* filter unwanted data
check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
check ht_mseg-matnr in so_matnr.
* read header line
read table ht_mkpf with table key mblnr = ht_mseg-mblnr
mjahr = ht_mseg-mjahr.
clear ht_lst.
* note : this may be faster if you specify field by field.
move-corresponding ht_mkpf to ht_lst.
move-corresponding ht_mseg to ht_lst.
perform read_lfa1 using ht_mseg-lifnr changing ht_lst-name1.
perform read_material using ht_mseg-matnr changing ht_lst-maktx.
insert table ht_lst.
endloop.
endform.
form fill_ht_lst1.
refresh ht_lst1.
* Example: how to discard unwanted data in an efficient way,
* and how to simulate a collect in a faster way
loop at ht_mseg.
* filter unwanted data
check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
check ht_mseg-matnr in so_matnr.
* note : this may be faster if you specify field by field.
read table ht_lst1 with table key matnr = ht_mseg-matnr
transporting erfmg.
if sy-subrc <> 0. " if matnr doesn't exist in sumary table
" insert a new record
ht_lst1-matnr = ht_mseg-matnr.
perform read_material using ht_mseg-matnr changing ht_lst1-maktx.
ht_lst1-erfmg = ht_mseg-erfmg.
ht_lst1-erfme = ht_mseg-erfme.
insert table ht_lst1.
else." a record was found.
" collect erfmg. To do so, fill in the unique key and add
" the numeric fields.
ht_lst1-matnr = ht_mseg-matnr.
add ht_mseg-erfmg to ht_lst1-erfmg.
modify table ht_lst1 transporting erfmg.
endif.
endloop.
endform.
* implementation of cache for lfa1
form read_lfa1 using p_lifnr changing p_name1.
read table ht_lfa1 with table key lifnr = p_lifnr
transporting name1.
if sy-subrc <> 0.
clear ht_lfa1.
ht_lfa1-lifnr = p_lifnr.
select single name1
into ht_lfa1-name1
from lfa1
where lifnr = p_lifnr.
if sy-subrc <> 0. ht_lfa1-name1 = 'n/a in lfa1'. endif.
insert table ht_lfa1.
endif.
p_name1 = ht_lfa1-name1.
endform.
* implementation of cache for material data
form read_material using p_matnr changing p_maktx.
read table ht_material with table key matnr = p_matnr
transporting maktx.
if sy-subrc <> 0.
ht_material-matnr = p_matnr.
select single maktx into ht_material-maktx
from makt
where spras = sy-langu
and matnr = p_matnr.
if sy-subrc <> 0. ht_material-maktx = 'n/a in makt'. endif.
insert table ht_material.
endif.
p_maktx = ht_material-maktx.
endform.
form show_data.
if gp_bymat = ' '.
perform show_ht_lst.
else.
perform show_ht_lst1.
endif.
endform.
form show_ht_lst.
"needed because the FM can't use a hashed table.
it_lst[] = ht_lst[].
perform fill_layout using 'full display'
changing ls_layout.
perform fill_columns_lst.
perform sort_lst.
g_repid = sy-repid.
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = g_repid
i_callback_pf_status_set = 'SET_PF_STATUS'
is_layout = ls_layout
it_fieldcat = it_fieldcat_lst[]
it_sort = it_sort_lst
tables
t_outtab = it_lst
exceptions
program_error = 1
others = 2.
endform.
form show_ht_lst1.
"needed because the FM can't use a hashed table.
it_lst1[] = ht_lst1[].
perform fill_layout using 'Sumary by matnr'
changing ls_layout.
perform fill_columns_lst1.
perform sort_lst.
g_repid = sy-repid.
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = g_repid
i_callback_pf_status_set = 'SET_PF_STATUS'
is_layout = ls_layout
it_fieldcat = it_fieldcat_lst1[]
it_sort = it_sort_lst
tables
t_outtab = it_lst1
exceptions
program_error = 1
others = 2.
endform.
form fill_layout using p_window_titlebar
changing cs_layo type slis_layout_alv.
clear cs_layo.
cs_layo-window_titlebar = p_window_titlebar.
cs_layo-edit = 'X'.
cs_layo-edit_mode = space.
endform. " fill_layout
form set_pf_status using rt_extab type slis_t_extab.
* create a new status
* and then select extras -> adjust template -> listviewer
set pf-status 'VISTA'.
endform. "set_pf_status
define add_lst.
clear it_fieldcat_lst.
it_fieldcat_lst-fieldname = &1.
it_fieldcat_lst-outputlen = &2.
it_fieldcat_lst-ddictxt = 'L'.
it_fieldcat_lst-seltext_l = &1.
it_fieldcat_lst-seltext_m = &1.
it_fieldcat_lst-seltext_s = &1.
if &1 = 'MATNR'.
it_fieldcat_lst-emphasize = 'C111'.
endif.
append it_fieldcat_lst.
end-of-definition.
define add_lst1.
clear it_fieldcat_lst1.
it_fieldcat_lst1-fieldname = &1.
it_fieldcat_lst1-outputlen = &2.
it_fieldcat_lst1-ddictxt = 'L'.
it_fieldcat_lst1-seltext_l = &1.
it_fieldcat_lst1-seltext_m = &1.
it_fieldcat_lst1-seltext_s = &1.
append it_fieldcat_lst1.
end-of-definition.
form fill_columns_lst.
* set columns for output.
refresh it_fieldcat_lst.
add_lst 'BUDAT' 10.
add_lst 'MBLNR' 10.
add_lst 'LIFNR' 10.
add_lst 'NAME1' 35.
add_lst 'XBLNR' 15.
add_lst 'ZEILE' 5.
add_lst 'CHARG' 10.
add_lst 'MATNR' 18.
add_lst 'MAKTX' 30.
add_lst 'ERFMG' 17.
add_lst 'ERFME' 5.
add_lst 'MJAHR' 4.
endform.
form fill_columns_lst1.
* set columns for output.
refresh it_fieldcat_lst1.
add_lst1 'MATNR' 18.
add_lst1 'MAKTX' 30.
add_lst1 'ERFMG' 17.
add_lst1 'ERFME' 5.
endform.
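The read_lfa1 / read_material forms above implement a classic read-through cache: probe the hashed table first, hit the database only on a miss, and cache the "not found" case too so the same key is never queried twice. For illustration only, here is the same idea sketched in Java (all class and method names are invented for this sketch; selectFromDb simulates the SELECT SINGLE):

```java
import java.util.HashMap;
import java.util.Map;

public class ReadThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private int dbReads = 0; // counts simulated database round trips

    // Stand-in for "SELECT SINGLE name1 FROM lfa1 WHERE lifnr = ...".
    private String selectFromDb(String lifnr) {
        dbReads++;
        return lifnr.startsWith("V") ? "Vendor " + lifnr : null;
    }

    // Equivalent of form read_lfa1: check the cache, fall back to the
    // database on a miss, and remember the "not found" result as well.
    public String readName(String lifnr) {
        return cache.computeIfAbsent(lifnr, k -> {
            String name = selectFromDb(k);
            return name != null ? name : "n/a in lfa1";
        });
    }

    public int getDbReads() { return dbReads; }
}
```

Keyed lookup in a HashMap, like READ TABLE ... WITH TABLE KEY on a hashed table, is O(1) on average, so repeated lookups of the same key cost a single database round trip in total.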
Regards,
Ameet -
Can anybody give me an example of using a hash table, with source code?
I would like to use hash tables for storing passwords. Basically, I am looking for a secure way to store users and passwords.
Message was edited by:
user448429
Sir, I want to store username/password pairs for my project. I want this to be secure, hence I am looking at using hashtables for this task.
Your question is not completely understood, at least by myself; be more specific and clearer about your requirement.
Jaffar -
Hash table and function module input
Hi ABAP Expert,
Please advise what happens if I pass an internal table (hashed table) as input to a function module (TABLES parameter).
So inside the function module, is this table still a hashed table type or just a normal internal table?
Thank you and Regards
Fernand
Typing of such a parameter should be either generic (i.e. ANY TABLE) or fully specified (HASHED/SORTED/STANDARD TABLE). In both cases, when you pass e.g. a HASHED table to that formal parameter, the dynamic type will be inherited by the actual parameter.
This means that inside the function module you will still not be able to use statements that are "banned" for HASHED tables, i.e. no appending to this table. The system must be fully convinced about the type of the passed parameter to allow certain accesses. Without that knowledge it won't pass you through the syntax checker, or it will trigger a runtime error.
For example:
"1) parameter is typed
CHANGING
C_TAB type ANY TABLE
"here you can't use STANDARD/SORTED table specific statements as the dynamic type of param might be HASHED TABLE
append ... to c_tab. "error during runtime
"2) parameter is typed
CHANGING
C_TAB type HASHED TABLE
"here the system knows the dynamic type is HASHED TABLE, for which append is not allowed, so the error is caught before runtime
append ... to c_tab. "syntax error before runtime
So the answer to your question
"So inside the function module, is this table still a hashed table type or just a normal internal table?"
is...
During syntax check system takes static type of table and shouts if table related operation is not allowed for this kind.
During runtime system takes dynamic type of the table and checks whether particular statement is allowed for this kind of table, if not triggers an exception.
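As a loose analogy outside ABAP, Java's collections show the same split between a generic static type and a stricter dynamic type. This is only an illustrative sketch, not ABAP behavior, and the class and method names are made up:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class DynamicTypeDemo {
    // The formal parameter has the generic static type Collection<String>,
    // much like "C_TAB TYPE ANY TABLE" in a function module interface.
    public static void tryAdd(Collection<String> c, String value) {
        c.add(value); // compiles, but may still fail at runtime
    }

    public static void main(String[] args) {
        List<String> plain = new ArrayList<>(List.of("a"));
        tryAdd(plain, "b"); // dynamic type ArrayList: add is allowed

        List<String> locked = Collections.unmodifiableList(plain);
        try {
            // dynamic type forbids add -> runtime error, like the ABAP dump
            tryAdd(locked, "c");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected at runtime");
        }
    }
}
```

The syntax checker accepts both calls because the static type permits add; only the dynamic type of the actual argument decides whether the statement succeeds at runtime.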
Regards
Marcin -
Hash Table - BSIS - A test code
Hi All,
I want to improve one program involving table BSIS. I thought a HASHED table could be a good option considering the large volume of BSIS. Below is my code; any suggestions are welcome, since this is the first time I am using a HASHED table.
Thank you,
REPORT ZHASHTABLE .
tables : bsis.
data irec type i.
select-options : postdate for bsis-budat,
cocode for bsis-bukrs,
hkont for bsis-hkont.
data i_bsis like bsis occurs 0 with header line.
data : tab_bsis like line of i_bsis.
data : it_bsis type hashed table of bsis with unique key BUKRS
HKONT
AUGDT
AUGBL
ZUONR
GJAHR
BELNR
BUZEI.
start-of-selection.
select * from bsis
into table it_bsis
where bukrs in cocode and hkont in hkont.
delete it_bsis where not ( budat in postdate ).
describe table it_bsis lines irec.
loop at it_bsis into tab_bsis.
move-corresponding tab_bsis to i_bsis.
append i_bsis.
clear i_bsis.
clear tab_bsis.
endloop.
loop at i_bsis.
write / i_bsis.
endloop.
Hi Bonny,
A hash table is only beneficial if you create a key AND are able to provide the full key. In your case, it does not look like you are accessing the table with the full key. It does not help you in your DELETE statement.
If you are deleting records from your internal table it would be better to sort by the date since that is your criteria for deletion.
But it would be better all around if you would:
- create a table with only fields you need rather than all fields of bsis. It takes time to pull in all the fields and memory as well.
- it is better to bring back only the records you need so your where clause should include the BUDAT in POSTDATE.
Imagine if you select a single date or a range of a week or even a month. Your Select will pull back everything in the company code and account number which could be tens or hundreds of thousands of records and end up with only a few records. And it would do this each time regardless of your date range because you filter out unwanted dates after you've brought all those records back.
So really you need to recode your Select statement to incorporate your select options posting date. I could see this taking a very long time otherwise.
Hope this helps.
Filler -
Implementing Hash tables with chaining.
Hi, I'm in a Java data structures class right now and we have a program to do using hash tables. I've read about hash tables and chaining, but does anyone know where I can find some examples of code that implement hash tables using chaining? It's all very confusing to me without seeing how it is used in a program.
To give you an idea of what we're doing, the assignment is to create a word processor that looks through a file, adds different words to the table, and also counts how many of each word there is in that file. Keep in mind I'm not asking for the code to this assignment, just for some example of how coding hash tables/chaining works.
Simple and probably not complete overview:
Suppose you have an array in which you will store objects.
You take an object and find a "hash code" for it. You then restrict the hash code into the range of your array. This gives you the index where you can store your object.
But what if another object is already present at that index?
Different solutions to this exist - such as re-hashing. One solution is "chaining". Instead of storing the object directly at the hashed index - you store a list of objects which hashed to the same location.
When you do a lookup, you first get your hash. You then look at the corresponding location in the array. If you find a list rather than the required object, you walk along the list until you find the object you are looking for.
Of course, with a small array and bad hash distribution, the performance of this solution can degrade fairly quickly.
BTW, in the real world, just use one of the ready made collections in java.util.
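A minimal sketch of chaining along these lines, tailored to the word-counting assignment (toy code for illustration; every name is invented, and as noted above a real program would just use java.util.HashMap):

```java
import java.util.LinkedList;

// Toy hash table with separate chaining, counting word occurrences.
public class ChainedHashTable {
    private static class Entry {
        final String key;
        int count = 1; // a new entry represents the first occurrence
        Entry(String key) { this.key = key; }
    }

    private final LinkedList<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    public ChainedHashTable(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    // Restrict the hash code to a valid bucket index (handles negatives).
    private int indexFor(String key) {
        return Math.floorMod(key.hashCode(), buckets.length);
    }

    // Walk the chain: bump the count if the word is found,
    // otherwise append a new entry to the chain.
    public void add(String word) {
        LinkedList<Entry> chain = buckets[indexFor(word)];
        for (Entry e : chain) {
            if (e.key.equals(word)) { e.count++; return; }
        }
        chain.add(new Entry(word));
    }

    // Walk the chain at the hashed index to look a word up.
    public int count(String word) {
        for (Entry e : buckets[indexFor(word)]) {
            if (e.key.equals(word)) return e.count;
        }
        return 0;
    }
}
```

With a deliberately small capacity, several words hash to the same bucket, so the chains actually get walked; that is also why a small array plus a bad hash distribution degrades lookups toward O(n).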