Performance of matching internal tables
Moved to correct forum by moderator
Hi everybody,
Sorry for my simple question.
I have been programming in ABAP for only a short time.
I would like to know which is faster / performs better:
LOOP AT itab1.
  LOOP AT itab2 WHERE ...
  ENDLOOP.
ENDLOOP.
or
LOOP AT itab1.
  READ TABLE itab2 WITH KEY ... BINARY SEARCH.
ENDLOOP.
I ask this because I created a program that I must run in background; otherwise it terminates with a timeout dump.
Thanks and good work.
Edited by: Matt on Dec 4, 2008 11:49 AM
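Before the answers: the second variant relies on a binary search over a sorted table. Here is a toy Python sketch (not ABAP; data and names invented) of what `READ TABLE ... WITH KEY ... BINARY SEARCH` does, an O(log n) lookup per outer row instead of a linear inner scan:

```python
import bisect

# itab1 holds keys to resolve; itab2 is a sorted list of (key, value) rows.
itab1 = [3, 1, 4, 1, 5]
itab2 = sorted([(1, "a"), (2, "b"), (3, "c"), (4, "d"), (5, "e")])
keys = [row[0] for row in itab2]  # the sorted key column

def read_binary(key):
    """Rough equivalent of READ TABLE ... WITH KEY ... BINARY SEARCH."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return itab2[i]
    return None  # corresponds to sy-subrc <> 0

matches = [read_binary(k) for k in itab1]
print(matches)  # [(3, 'c'), (1, 'a'), (4, 'd'), (1, 'a'), (5, 'e')]
```

The nested `LOOP ... WHERE` rescans itab2 for every row of itab1; with the sorted table the lookup cost per row drops from O(m) to O(log m).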
Hi,
Try it this way:
REPORT zparallel_cursor.

TABLES:
  likp,
  lips.

DATA:
  t_likp TYPE TABLE OF likp,
  t_lips TYPE TABLE OF lips.

DATA:
  w_runtime1 TYPE i,
  w_runtime2 TYPE i,
  w_index    LIKE sy-index.

START-OF-SELECTION.
  SELECT *
    FROM likp
    INTO TABLE t_likp.

  SELECT *
    FROM lips
    INTO TABLE t_lips.

  GET RUN TIME FIELD w_runtime1.

  SORT t_likp BY vbeln.
  SORT t_lips BY vbeln.

  LOOP AT t_likp INTO likp.
    LOOP AT t_lips INTO lips FROM w_index.
      IF likp-vbeln NE lips-vbeln.
        w_index = sy-tabix.
        EXIT.
      ENDIF.
    ENDLOOP.
  ENDLOOP.

  GET RUN TIME FIELD w_runtime2.
  w_runtime2 = w_runtime2 - w_runtime1.
  WRITE w_runtime2.
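For readers outside ABAP, the parallel-cursor idea in the snippet above can be sketched in Python (table names and data invented): both tables are sorted by the join key, and the inner scan resumes where the previous outer row stopped, so the total work is O(n + m) instead of O(n * m):

```python
# Toy sketch (not ABAP) of the parallel-cursor technique: headers and items
# are both sorted by "vbeln", and the inner index persists across outer rows.
likp = [{"vbeln": v} for v in ["001", "002", "003"]]           # headers, sorted
lips = [{"vbeln": v, "posnr": p}                               # items, sorted
        for v, p in [("001", 1), ("001", 2), ("002", 1), ("003", 1), ("003", 2)]]

def parallel_cursor(headers, items):
    result = []
    idx = 0  # plays the role of w_index in the ABAP snippet
    for h in headers:
        # skip items belonging to keys smaller than the current header
        while idx < len(items) and items[idx]["vbeln"] < h["vbeln"]:
            idx += 1
        j = idx
        # consume all items matching this header
        while j < len(items) and items[j]["vbeln"] == h["vbeln"]:
            result.append((h["vbeln"], items[j]["posnr"]))
            j += 1
        idx = j  # next header resumes here; items are never rescanned
    return result

print(parallel_cursor(likp, lips))
```

Each item row is visited at most once across the whole run, which is why this pattern scales so much better than a nested `LOOP ... WHERE`.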
Similar Messages
-
Hi,
I would like to ask whether there is any way to improve the performance of accesses to table BKPF from the ABAP side.
We have a custom program that generates a report for the asset master listing.
One of its SELECT statements is shown below:
SELECT SINGLE * FROM BKPF WHERE BUKRS = ANEP-BUKRS
AND GJAHR = ANEP-GJAHR
AND AWKEY = AWKEYUS.
I would like to know how it differs from the SELECT statement below:
SELECT SINGLE * FROM BKPF INTO CORRESPONDING FIELDS OF T_BKPF
WHERE
BUKRS = ANEP-BUKRS
AND GJAHR = ANEP-GJAHR
AND AWKEY = AWKEY.
Which of the SELECT statements above performs better? We currently face a quite bad performance issue with this report.
Can I post the ABAP code on this forum?
Hope someone can help me with this. Thank you.
Hi,
As much as possible, use the primary keys of BKPF, which are BUKRS, BELNR and GJAHR. Also, select only the records which are needed, to increase performance. Please look at the code below:
DATA: lv_age_of_rec TYPE p.
FIELD-SYMBOLS: <fs_final> LIKE LINE OF it_final.
LOOP AT it_final ASSIGNING <fs_final>.
* get records from BKPF
SELECT SINGLE bukrs belnr gjahr budat bldat xblnr bktxt FROM bkpf
INTO (bkpf-bukrs, bkpf-belnr, bkpf-gjahr, <fs_final>-budat,
<fs_final>-bldat, <fs_final>-xblnr, <fs_final>-bktxt)
WHERE bukrs = <fs_final>-bukrs
AND belnr = <fs_final>-belnr
AND gjahr = <fs_final>-gjahr.
* if <fs_final>-shkzg = 'H', multiply dmbtr (amount in local currency)
* by negative 1
IF <fs_final>-shkzg = 'H'.
<fs_final>-dmbtr = <fs_final>-dmbtr * -1.
ENDIF.
* combine company code (bukrs), accounting document number (belnr),
* fiscal year (gjahr) and line item (buzei) to get the long text
CONCATENATE: <fs_final>-bukrs <fs_final>-belnr
<fs_final>-gjahr <fs_final>-buzei
INTO it_thead-tdname.
CALL FUNCTION 'READ_TEXT'
EXPORTING
client = sy-mandt
id = '0001'
language = sy-langu
name = it_thead-tdname
object = 'DOC_ITEM'
*   ARCHIVE_HANDLE = 0
*   LOCAL_CAT      = ' '
* IMPORTING
*   HEADER         =
TABLES
lines = it_lines
EXCEPTIONS
id = 1
language = 2
name = 3
not_found = 4
object = 5
reference_check = 6
wrong_access_to_archive = 7
OTHERS = 8.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
* if successful, split the long text into start and end date
IF sy-subrc = 0.
READ TABLE it_lines TRANSPORTING tdline.
IF sy-subrc = 0.
SPLIT it_lines-tdline AT '-' INTO
<fs_final>-s_dat <fs_final>-e_dat.
ENDIF.
ENDIF.
* get vendor name from LFA1
SELECT SINGLE name1 FROM lfa1
INTO <fs_final>-name1
WHERE lifnr = <fs_final>-lifnr.
lv_age_of_rec = p_budat - <fs_final>-budat.
* condition for age of deposits
IF lv_age_of_rec <= 30.
<fs_final>-amount1 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 30 AND lv_age_of_rec <= 60.
<fs_final>-amount2 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 60 AND lv_age_of_rec <= 90.
<fs_final>-amount3 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 90 AND lv_age_of_rec <= 120.
<fs_final>-amount4 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 120 AND lv_age_of_rec <= 180.
<fs_final>-amount5 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 180.
<fs_final>-amount6 = <fs_final>-dmbtr.
ENDIF.
CLEAR: bkpf, it_lines-tdline, lv_age_of_rec.
ENDLOOP.
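The sign flip and the aging ladder in the loop above are simple enough to restate outside ABAP. A minimal Python sketch (function names invented) of the same logic:

```python
# Minimal restatement (not ABAP) of the logic above: 'H' (credit) entries
# flip the sign of the local-currency amount, and each record falls into
# one of six aging buckets based on its age in days.

def signed_amount(dmbtr, shkzg):
    """If shkzg = 'H', multiply the amount by -1, as in the code above."""
    return -dmbtr if shkzg == "H" else dmbtr

def age_bucket(age_days):
    """Return the bucket index (1..6) matching the IF/ELSEIF ladder."""
    if age_days <= 30:
        return 1
    elif age_days <= 60:
        return 2
    elif age_days <= 90:
        return 3
    elif age_days <= 120:
        return 4
    elif age_days <= 180:
        return 5
    else:
        return 6
```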
Hope this helps...
-
How to define a PERFORM with TABLES statement in ECC 6.0
Hi all,
I am working on an ECC 6.0 system. I am using a PERFORM to populate an internal table like this:
PERFORM explode TABLES li_zmpsdtl
USING gs_matnr.
and its FORM is defined as:
FORM treat_one_item TABLES li_zmpsdtl STRUCTURE zmpsdtl
USING gs_matnr TYPE matnr.
doing some action..............
endform.
While performing SLIN it shows an error message :-
" The current ABAP command is obsolete.
Within classes and interfaces, you can only use "TYPE" to refer to ABAP Dictionary types (not "LIKE" or "STRUCTURE"). "
If I use TYPE in place of STRUCTURE then it is OK, but then zmpsdtl would have to be defined as a table type:
FORM treat_one_item TABLES li_zmpsdtl type zmpsdtl
USING gs_matnr TYPE matnr.
doing some action..............
endform.
Is there any other option to do the same thing? I don't want to create a table type.
Thanks in advance,
Sachin
You have to use a global structure instead of a DDIC structure after the STRUCTURE statement:
DATA: gs_zmpsdtl TYPE zmpsdtl.
FORM treat_one_item TABLES li_zmpsdtl STRUCTURE gs_zmpsdtl
  USING gs_matnr TYPE matnr.
... -
How to improve Oracle Veridata compare-pair performance with tables that have big (30-40 MB) CLOB/BLOB fields?
Can you use INSERT ... RETURNING ... so you do not have to select the empty_clob() back out?
[I have a similar problem, but I do not know the primary key to select on. I am really looking for an atomic insert-and-fill-CLOB mechanism. Someone said you can create a CLOB, fill it and use that in the insert, but I have not seen an example yet.] -
Commit performance on table with Fast Refresh MV
Hi Everyone,
Trying to wrap my head around fast refresh performance and why I'm seeing (what I would consider) high disk/query numbers associated with updating the MV_LOG in a TKPROF.
The setup.
(Oracle 10.2.0.4.0)
Base table:
SQL> desc action;
Name Null? Type
PK_ACTION_ID NOT NULL NUMBER(10)
CATEGORY VARCHAR2(20)
INT_DESCRIPTION VARCHAR2(4000)
EXT_DESCRIPTION VARCHAR2(4000)
ACTION_TITLE NOT NULL VARCHAR2(400)
CALL_DURATION VARCHAR2(6)
DATE_OPENED NOT NULL DATE
CONTRACT VARCHAR2(100)
SOFTWARE_SUMMARY VARCHAR2(2000)
MACHINE_NAME VARCHAR2(25)
BILLING_STATUS VARCHAR2(15)
ACTION_NUMBER NUMBER(3)
THIRD_PARTY_NAME VARCHAR2(25)
MAILED_TO VARCHAR2(400)
FK_CONTACT_ID NUMBER(10)
FK_EMPLOYEE_ID NOT NULL NUMBER(10)
FK_ISSUE_ID NOT NULL NUMBER(10)
STATUS VARCHAR2(80)
PRIORITY NUMBER(1)
EMAILED_CUSTOMER TIMESTAMP(6) WITH LOCAL TIME ZONE
SQL> select count(*) from action;
COUNT(*)
1388780
The MV was created as follows:
create materialized view log on action with sequence, rowid
(pk_action_id, fk_issue_id, date_opened)
including new values;
-- Create materialized view
create materialized view issue_open_mv
build immediate
refresh fast on commit
enable query rewrite as
select fk_issue_id issue_id,
count(*) cnt,
min(date_opened) issue_open,
max(date_opened) last_action_date,
min(pk_action_id) first_action_id,
max(pk_action_id) last_action_id,
count(pk_action_id) num_actions
from action
group by fk_issue_id;
exec dbms_stats.gather_table_stats('tg','issue_open_mv')
SQL> select table_name, last_analyzed from dba_tables where table_name = 'ISSUE_OPEN_MV';
TABLE_NAME LAST_ANAL
ISSUE_OPEN_MV 15-NOV-10
*note: table was created a couple of days ago *
SQL> exec dbms_mview.explain_mview('TG.ISSUE_OPEN_MV');
CAPABILITY_NAME P REL_TEXT MSGTXT
PCT N
REFRESH_COMPLETE Y
REFRESH_FAST Y
REWRITE Y
PCT_TABLE N ACTION relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT Y
REFRESH_FAST_AFTER_ANY_DML Y
REFRESH_FAST_PCT N PCT is not possible on any of the detail tables in the mater
REWRITE_FULL_TEXT_MATCH Y
REWRITE_PARTIAL_TEXT_MATCH Y
REWRITE_GENERAL Y
REWRITE_PCT N general rewrite is not possible or PCT is not possible on an
PCT_TABLE_REWRITE N ACTION relation is not a partitioned table
13 rows selected.
Fast refresh works fine, and the log is kept quite small:
SQL> select count(*) from mlog$_action;
COUNT(*)
0
When I update one row in the base table:
var in_action_id number;
exec :in_action_id := 398385;
UPDATE action
SET emailed_customer = SYSTIMESTAMP
WHERE pk_action_id = :in_action_id
AND DECODE(emailed_customer, NULL, 0, 1) = 0
commit;
I see the following happen via TKPROF:
INSERT /*+ IDX(0) */ INTO "TG"."MLOG$_ACTION" (dmltype$$,old_new$$,snaptime$$,
change_vector$$,sequence$$,m_row$$,"PK_ACTION_ID","DATE_OPENED",
"FK_ISSUE_ID")
VALUES
(:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,
sys.cdc_rsid_seq$.nextval,:m,:1,:2,:3)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 2 0.00 0.03 4 4 4 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.04 4 4 4 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
2 SEQUENCE CDC_RSID_SEQ$ (cr=0 pr=0 pw=0 time=28 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.01 0.01
update "TG"."MLOG$_ACTION" set snaptime$$ = :1
where
snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.94 5.36 55996 56012 1 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.94 5.38 55996 56012 1 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3529 0.02 4.91
select dmltype$$, max(snaptime$$)
from
"TG"."MLOG$_ACTION" where snaptime$$ <= :1 group by dmltype$$
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.70 0.68 55996 56012 0 1
total 4 0.70 0.68 55996 56012 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 SORT GROUP BY (cr=56012 pr=55996 pw=0 time=685671 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=1851 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3529 0.00 0.38
delete from "TG"."MLOG$_ACTION"
where
snaptime$$ <= :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.71 0.70 55946 56012 3 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.71 0.70 55946 56012 3 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 DELETE MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=702813 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=1814 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3530 0.00 0.39
db file sequential read 33 0.00 0.00
********************************************************************************
Could someone explain why the SELECT/UPDATE/DELETE on MLOG$_ACTION are so "expensive" when there should only be 2 rows (the old and new values) in that log after an update? Is there anything I could do to improve the performance of the update?
Let me know if you require more info... I would be glad to provide it.
Brilliant. Thanks.
I owe you a beverage.
SQL> set autotrace on
SQL> select count(*) from MLOG$_ACTION;
COUNT(*)
0
Execution Plan
Plan hash value: 2727134882
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 12309 (1)| 00:02:28 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MLOG$_ACTION | 1 | 12309 (1)| 00:02:28 |
Note
- dynamic sampling used for this statement
Statistics
4 recursive calls
0 db block gets
56092 consistent gets
56022 physical reads
0 redo size
410 bytes sent via SQL*Net to client
400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> truncate table MLOG$_ACTION;
Table truncated.
SQL> select count(*) from MLOG$_ACTION;
COUNT(*)
0
Execution Plan
Plan hash value: 2727134882
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MLOG$_ACTION | 1 | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement
Statistics
1 recursive calls
1 db block gets
6 consistent gets
0 physical reads
96 redo size
410 bytes sent via SQL*Net to client
400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Just for fun... a comparison of the TKPROF output.
Before:
update "TG"."MLOG$_ACTION" set snaptime$$ = :1
where
snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.94 5.36 55996 56012 1 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.94 5.38 55996 56012 1 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3529 0.02 4.91
********************************************************************************
After:
update "TG"."MLOG$_ACTION" set snaptime$$ = :1
where
snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 7 1 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 7 1 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE MLOG$_ACTION (cr=7 pr=0 pw=0 time=79 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=7 pr=0 pw=0 time=28 us)
******************************************************************************** -
DML operations performance on table indexed with CTXCAT
Hi,
I have a table with 2M records. The table is batch updated once a day, and the number of record movements (update/delete/insert) should be around 100K.
The table is indexed with CTXCAT.
If I create the index from scratch, it takes 5 minutes.
If I perform delete/insert/update operations involving 40K records, it takes much longer (especially the delete and update operations, something like 30 minutes).
In this particular case I can drop the index and recreate it from scratch every night. The problem is that the 2M record table is only the first step in our adoption of Oracle Text. The next step will be a 40M record table, on which the initial index creation takes something like 2 hours (so I can't rebuild it every night).
Do you have any suggestions?
Thanks.
-- table DDL
CREATE TABLE TAHZVCON_TEXT (
  CONSUMER_ID NUMBER(10) NOT NULL,
  COMPANY_NAME VARCHAR2(4000 CHAR),
  CITY VARCHAR2(30 BYTE),
  PROVINCE VARCHAR2(3 CHAR),
  POST_CODE VARCHAR2(10 BYTE)
);
CREATE UNIQUE INDEX TAHZVCON_TEXT_PK ON TAHZVCON_TEXT (CONSUMER_ID);
begin
ctx_ddl.drop_preference('mylex');
ctx_ddl.create_preference('mylex', 'BASIC_LEXER');
ctx_ddl.set_attribute('mylex', 'printjoins', '.#');
ctx_ddl.set_attribute('mylex', 'base_letter', 'YES');
ctx_ddl.set_attribute('mylex', 'index_themes','NO');
ctx_ddl.set_attribute('mylex', 'index_text','YES');
ctx_ddl.set_attribute('mylex', 'prove_themes','NO');
ctx_ddl.drop_preference('mywordlist');
ctx_ddl.create_preference('mywordlist', 'BASIC_WORDLIST');
ctx_ddl.set_attribute('mywordlist','stemmer','NULL');
ctx_ddl.set_attribute('mywordlist','SUBSTRING_INDEX', 'NO');
ctx_ddl.set_attribute('mywordlist','PREFIX_INDEX','NO');
ctx_ddl.drop_index_set('tahzvcon_iset');
ctx_ddl.create_index_set('tahzvcon_iset');
ctx_ddl.add_index('tahzvcon_iset','city');
ctx_ddl.add_index('tahzvcon_iset','province');
ctx_ddl.add_index('tahzvcon_iset','post_code');
end;
CREATE INDEX TAHZVCON_TEXT_TI01 ON TAHZVCON_TEXT(COMPANY_NAME)
INDEXTYPE IS CTXSYS.CTXCAT
PARAMETERS ('lexer mylex wordlist mywordlist index set tahzvcon_iset')
PARALLEL 8;
Andrea
Hi kevinUCB,
I've decided to use CTXCAT indexes because I had to perform queries involving different columns (company name, city, region, etc.). So I thought CTXCAT was the right index for me.
Now I've discovered that if I index an XML with CONTEXT, I can perform a search on single XML fields, so CONTEXT is suitable for my needs.
Preliminary test on the 2M record table looks very good.
Bye,
Andrea -
Problems with Performance in table AUSP - R/3
Hello, I am working with a set of tables (KLAH, KSSK, KSML, AUSP, CABN in SAP R/3) and generating reports with Crystal Reports 2008. The problem arises when displaying the data from table AUSP, since the characteristics (ATINN) are stored as rows, and I must show them grouped by the field AUSP-OBJEK as columns (as transaction CL30N does).
I tried with an infoset, a query/infoset and related tables in Crystal Reports, but the performance is very bad.
Does anybody know of a standard function similar to CL30N, or do I have to program it in ABAP?
Edited by: Carolina Jerez on Oct 7, 2009 10:01 AM
Hi Ruven,
so you are able to log on to the portal from Windows via SSO. When you access the backend from the portal it is not working, right?
What kind of iView are you using to access the backend? Can you try with a simple Windows-SAPGui iView? Is it working then? If you are using some web-based application to access the backend, there might be a problem with different domains, so that the SAP Logon Ticket is not passed along (both the URL of the portal and the URL of the backend have to be in the same domain; of course there are ways to enable SSO across different domains, but let's start with this simple case).
I would still recommend using the Note mentioned above to trace what is actually sent to the backend (I don't think the dataSourceConfiguration file you are using has anything to do with SSO not working from the portal to the backend).
Regards,
Holger. -
CCM performance problems - table info needed
Hello,
We have implemented SRM 5.0 with CCM 200_700 at SP level 6. We have serious performance problems when uploading content and publishing the catalogs. As we have 29 procurement catalogs, does anybody know whether the entries for all catalogs end up in table /CCM/D_ITM?
We have 29 catalogs but only 4000 items, so we think the entries are duplicated. How many items and catalogs should CCM be able to handle?
Thanks, of course I will reward points.
Kind regards,
Antoinette Stork
Hi,
See this related thread/notes:
Re: items in catalog not flagged as "approved"
Note 919725 - BBP_CCM_TRANSFER_CATALOG - Performance Improvement
BR,
Disha.
Performance ME51 table EBKN partially active
Hello group,
we suddenly had a performance issue, and we found that table EBKN appeared as "partially active" in the database. After activating it, the performance problems were gone. But we have no clue how the table got the status "partially active" or what this status means. We got no errors or warnings when activating.
Does anyone have a clue as to what causes the status "partially active"?
Regards, Léon Hoeneveld
Hi,
> I've checked with SE01 Transport organizer if any transport contained table EBKN.
it's not necessarily the table itself; it could be one of its includes.
Partially active means that a part of the table (e.g. an include) was not active while the rest of the table was active. You can try to find that part and search for it in transports.
In the cases I had, it was like this.
Regarding your performance problems: I wonder whether they have anything to do with a partially activated table. Honestly, I doubt it.
Programs working on partially activated tables should either work or dump, but I have never heard of performance problems related to partially activated tables.
Kind regards,
Hermann -
9iAS Performance Metrics Tables
I want to generate my own custom report based on the performance metrics available in Oracle 9iAS. However, I am not able to locate the metrics tables (i.e. ohs_server, oc4j_web_module, oc4j_servlet, etc.) mentioned in the Oracle 9iAS Performance Guide.
Can anybody help me with locating the tables, under which username we should look for them, and whether any special packs need to be installed for them.
Regards
Sriram.
Sriram,
If you find a solution, please let me know, for I too am facing the same problem.
rgds,
--rash
Poor performances on tables FAGL_011ZC, FAGL_011PC and FAGL_011VC
We have noticed for some time very poor performance in some programs accessing these tables.
The tables are very small and still standard.
We see these tables do not have any special indexes, and their buffering is "allowed but switched off".
In your opinion, can we activate buffering for these tables? Are there contraindications?
Our system is ECC 5.0, fairly up to date on SP level.
Client is looking at TDMS (Test Data Migration Server) to replicate the PRD data into lower systems.
-
Can Numbers create pivot tables?
Can you create pivot tables in Numbers the same way you can with Excel?
No, there are no pivot tables in Numbers... however...
Pivot tables are essentially a fast way of applying existing functions to summarize data. So even though they do not exist as a built-in feature in Numbers, you can use the basic COUNTIF, SUMIF, etc. functions in a table you build yourself to get the same functionality. Many of us have done this over the years and it works fairly well. You lose some of the "pivot" ability and the drag and drop, so it takes a little more planning beforehand.
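The COUNTIF/SUMIF idea above is just grouped aggregation. A minimal Python sketch (data invented) of building a "pivot" by summing over (row key, column key) pairs:

```python
from collections import defaultdict

# Each source row is (row_key, column_key, value); the pivot sums values
# per (region, quarter) cell, which is what SUMIF does per cell in a sheet.
rows = [
    ("East", "Q1", 100), ("East", "Q2", 150),
    ("West", "Q1", 200), ("East", "Q1", 50),
]

pivot = defaultdict(lambda: defaultdict(int))
for region, quarter, amount in rows:
    pivot[region][quarter] += amount

print(pivot["East"]["Q1"])  # 150
```

The planning-ahead cost the answer mentions corresponds to choosing the row and column keys up front instead of dragging fields around.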
Jason Kent -
Hi,
I want to pass different tables using a single PERFORM; what is the correct way?
I also want to pass one string parameter.
Thanks,
Amol
Hi,
Check this for the PERFORM statement:
DATA: RNAME(30) VALUE 'WRITE_STATISTIC', "Form and program
"names must
PNAME(8) VALUE 'ZYX_STAT'. "be written in
"upper case
PERFORM (RNAME) IN PROGRAM ZYX_STAT.
PERFORM WRITE_STATISTIC IN PROGRAM (PNAME).
PERFORM (RNAME) IN PROGRAM (PNAME).
You can also:
PERFORM form ON COMMIT.
In the report, write PERFORM, put your cursor on that statement and press F1; you will get a number of examples related to it.
Check this sample code:
METHOD if_ex_le_shp_goodsmovement~change_input_header_and_items .
* Internal table declaration
DATA: t_lipsheader TYPE TABLE OF lipsvb .
DATA :t_messtab TYPE TABLE OF bdcmsgcoll.
DATA: t_lipsitem TYPE TABLE OF lipsvb.
DATA :t_bdcdata TYPE TABLE OF bdcdata.
* Structure declaration
DATA: wa_lipsheader TYPE lips.
DATA: wa_likp TYPE likp.
DATA: wa_lips TYPE lipsvb.
DATA: wa_bdcdata TYPE bdcdata.
DATA: wa_messtab TYPE bdcmsgcoll.
DATA: wa_textout TYPE t100-text.
* variable declaration
DATA :fval TYPE bdc_fval.
DATA :ctumode TYPE ctu_params-dismode,
cupdate TYPE ctu_params-updmode.
DATA :date1(10) TYPE c,date2(10) TYPE c.
DATA: budat TYPE sy-datum,
bldat TYPE likp-bldat,
usnam TYPE sy-uname,
uzeit TYPE sy-uzeit,
hhmm(4) TYPE n.
* Constant declaration
CONSTANTS: nodata TYPE c VALUE '/' .
CONSTANTS: c_bwart TYPE lips-bwart VALUE '101'.
* Logic for Posting date and Document date.
* Check for Actual GI date from delivery.
IF is_likp-wadat_ist NE space.
budat = is_likp-wadat_ist.
ELSE.
* If Actual GI date is initial then populate today's date
budat = sy-datum.
ENDIF.
* Populate today's date for document date
bldat = sy-datum.
usnam = sy-uname.
* Start new screen
DEFINE bdc_dynpro.
clear wa_bdcdata.
wa_bdcdata-program = &1.
wa_bdcdata-dynpro = &2.
wa_bdcdata-dynbegin = 'X'.
append wa_bdcdata to t_bdcdata.
END-OF-DEFINITION.
* Insert field
DEFINE bdc_field.
if fval <> nodata.
clear wa_bdcdata.
wa_bdcdata-fnam = &1.
wa_bdcdata-fval = &2.
append wa_bdcdata to t_bdcdata.
endif.
END-OF-DEFINITION.
* loops through the internal table and validates *
* the data in the internal table *
LOOP AT it_xlips INTO wa_lipsheader.
IF wa_lipsheader-uepos IS INITIAL AND wa_lipsheader-pstyv = 'TAQ'
AND wa_lipsheader-oic_mot = 'PK'.
MOVE wa_lipsheader-matnr TO wa_lips-ummat.
MOVE wa_lipsheader-werks TO wa_lips-umwrk.
MOVE wa_lipsheader-werks TO wa_lips-werks.
MOVE wa_lipsheader-lgort TO wa_lips-lgort .
MOVE wa_lipsheader-lgort TO wa_lips-umlgo.
MOVE c_bwart TO wa_lips-bwart.
MOVE wa_lipsheader-posnr TO wa_lips-posnr.
MOVE wa_lipsheader-lfimg TO wa_lips-lfimg.
MOVE wa_lipsheader-meins TO wa_lips-meins.
MOVE wa_lipsheader-volum TO wa_lips-volum.
MOVE wa_lipsheader-vbeln TO wa_lips-vbeln.
MOVE wa_lipsheader-bwtar TO wa_lips-bwtar.
APPEND wa_lips TO t_lipsheader.
ELSE.
IF wa_lipsheader-uepos IS NOT INITIAL AND wa_lipsheader-pstyv
= 'TAE'.
MOVE wa_lipsheader-matnr TO wa_lips-matnr.
MOVE wa_lipsheader-werks TO wa_lips-werks.
MOVE wa_lipsheader-lgort TO wa_lips-lgort.
MOVE wa_lipsheader-lfimg TO wa_lips-lfimg.
MOVE wa_lipsheader-posnr TO wa_lips-posnr.
MOVE wa_lipsheader-voleh TO wa_lips-voleh.
MOVE wa_lipsheader-meins TO wa_lips-meins.
MOVE wa_lipsheader-volum TO wa_lips-volum.
MOVE wa_lipsheader-vbeln TO wa_lips-vbeln.
MOVE wa_lipsheader-bwtar TO wa_lips-bwtar.
IF wa_lips-lgort IS INITIAL.
wa_lips-lgort = wa_lips-umlgo.
ENDIF.
APPEND wa_lips TO t_lipsitem.
ENDIF.
ENDIF.
ENDLOOP.
* BDC TABLE CONTROL
LOOP AT t_lipsheader INTO wa_lipsheader.
WRITE : bldat TO date1 MM/DD/YYYY,
budat TO date2 MM/DD/YYYY.
bdc_dynpro 'SAPMM07M' '0400'.
bdc_field 'BDC_CURSOR' 'RM07M-LGORT'.
bdc_field 'BDC_OKCODE' '/00' .
bdc_field 'MKPF-BLDAT' date1.
bdc_field 'MKPF-BUDAT' date2.
bdc_field 'MKPF-OIB_BLTIME' hhmm.
bdc_field 'RM07M-BWARTWA' c_bwart.
bdc_field 'RM07M-WERKS' wa_lips-werks.
bdc_field 'RM07M-LGORT' wa_lips-lgort.
bdc_field 'XFULL' 'X'.
bdc_field 'RM07M-WVERS2' 'X'.
bdc_dynpro 'SAPMM07M' '0421'.
bdc_field 'BDC_CURSOR' 'MSEG-WERKS(02)'.
bdc_field 'BDC_OKCODE' '/00'.
bdc_field 'MSEGK-UMWRK' wa_lips-umwrk.
bdc_field 'MSEGK-UMLGO' wa_lips-umlgo.
bdc_field 'MSEGK-UMMAT' wa_lips-ummat.
* Data declaration
DATA:quan(17) TYPE c.
DATA:ftable(20) TYPE c.
DATA:k TYPE n.
MOVE 1 TO k.
LOOP AT t_lipsitem INTO wa_lips WHERE vbeln
= wa_lipsheader-vbeln.
IF sy-subrc = 0.
CONCATENATE 'MSEG-MATNR(' k ')' INTO ftable.
bdc_field ftable wa_lips-matnr.
MOVE wa_lips-lfimg TO quan.
CONCATENATE 'MSEG-ERFMG(' k ')' INTO ftable.
bdc_field ftable quan .
CONCATENATE 'MSEG-LGORT(' k ')' INTO ftable.
bdc_field ftable wa_lips-lgort.
CONCATENATE 'MSEG-CHARG(' k ')' INTO ftable.
bdc_field ftable wa_lips-bwtar.
CONCATENATE 'MSEG-WERKS(' k ')' INTO ftable.
bdc_field ftable wa_lips-werks.
ENDIF.
k = k + 1.
ENDLOOP.
bdc_field 'DKACB-FMORE' 'X'.
bdc_dynpro 'SAPLKACB' '0002'.
bdc_field 'BDC_OKCODE' '=ENTE' .
bdc_dynpro 'SAPLOIB_QCI' '0500'.
bdc_field 'BDC_CURSOR' 'OIB_A08-TDICH'.
bdc_field 'BDC_OKCODE' '=CALC'.
bdc_dynpro 'SAPLOIB_QCI' '0500'.
bdc_field 'BDC_CURSOR' 'OIB_A08-TDICH'.
bdc_field 'BDC_OKCODE' '=CONT'.
bdc_dynpro 'SAPLKACB' '0002'.
bdc_field 'BDC_OKCODE' '=ENTE'.
bdc_dynpro 'SAPLOIB_QCI' '0500'.
bdc_field 'BDC_CURSOR' 'OIB_A08-TDICH'.
bdc_field 'BDC_OKCODE' '=CALC'.
bdc_dynpro 'SAPLOIB_QCI' '0500'.
bdc_field 'BDC_CURSOR' 'OIB_A08-TDICH'.
bdc_field 'BDC_OKCODE' '=CONT'.
bdc_dynpro 'SAPLKACB' '0002'.
bdc_field 'BDC_OKCODE' '=ENTE'.
bdc_dynpro 'SAPMM07M' '0421'.
bdc_field 'BDC_CURSOR' 'MSEG-ERFMG(01)'.
bdc_field 'BDC_OKCODE' '=BU'.
bdc_field 'DKACB-FMORE' 'X'.
bdc_dynpro 'SAPLKACB' '0002'.
bdc_field 'BDC_OKCODE' '=ENTE'.
*set the parametrs for call transaction.
ctumode = 'N'.
cupdate = 'L'.
CALL TRANSACTION 'MB11' USING t_bdcdata MODE ctumode
UPDATE cupdate MESSAGES INTO t_messtab.
LOOP AT t_messtab INTO wa_messtab .
CALL FUNCTION 'MESSAGE_TEXT_BUILD'
EXPORTING
msgid = wa_messtab-msgid
msgnr = wa_messtab-msgnr
msgv1 = wa_messtab-msgv1
msgv2 = wa_messtab-msgv2
msgv3 = wa_messtab-msgv3
msgv4 = wa_messtab-msgv4
IMPORTING
message_text_output = wa_textout.
MESSAGE wa_textout TYPE wa_messtab-msgtyp.
ENDLOOP.
ENDLOOP.
ENDMETHOD.
Check this link on INSERT statements in SAP: http://help.sap.com/saphelp_nw2004s/helpdata/en/d1/801aaf454211d189710000e8322d00/content.htm
Regards
Pavan
Message was edited by:
Pavan praveen -
Update Query is Performing Full Table Scan of 1 Million Records
Hello everybody, I have one update query:
UPDATE tablea SET
task_status = 12
WHERE tablea.link_id >0
AND tablea.task_status <> 0
AND tablea.event_class='eventexception'
AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
AND ltask.task_status = 0)
When I do an explain plan, it shows the following result:
Execution Plan
0 UPDATE STATEMENT Optimizer=CHOOSE
1 0 UPDATE OF 'tablea'
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'tablea'
4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
Now tablea may have more than 10 million records... This would take a hell of a lot of time even if it only has to update 2 records... Please suggest some optimal solutions.
Regards
Mahesh
I see your point, but my question, or logic, is that I have an index on all columns used in the WHERE clause, so I find no reason for Oracle to do a full table scan.
UPDATE tablea SET
task_status = 12
WHERE tablea.link_id >0
AND tablea.task_status <> 0
AND tablea.event_class='eventexception'
AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
AND ltask.task_status = 0)
I am clearly stating WHERE task_status <> 0 AND event_class = something AND tablea.link_id > 0,
so the ideal plan for the optimizer should be:
Step 1) Select all the rowids satisfying this condition.
Step 2) For each rowid, get all the rows where task_status = 0 and where task_id = the link_id of the row selected above.
Step 3) While looping over each rowid, if the condition matches the row obtained from ltask in step 2, update that record.
I want this kind of plan... does anyone know how to make Oracle produce it?
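The update's semantics (though not Oracle's plan choice) can be checked on a toy dataset. This sketch uses SQLite from Python, with invented data; table and column names follow the post:

```python
import sqlite3

# Toy reproduction of the statement: mark a row when it links to another
# row whose task_status is 0. Only semantics are shown, not the access path.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tablea (
    task_id INTEGER PRIMARY KEY,
    link_id INTEGER,
    task_status INTEGER,
    event_class TEXT)""")
con.executemany("INSERT INTO tablea VALUES (?,?,?,?)", [
    (1, 0, 0, "normal"),            # completed task, linked to nothing
    (2, 1, 5, "eventexception"),    # links to task 1 (status 0) -> updated
    (3, 1, 0, "eventexception"),    # status already 0 -> skipped
    (4, 2, 7, "eventexception"),    # links to task 2 (status 5) -> skipped
])
cur = con.execute("""
    UPDATE tablea SET task_status = 12
    WHERE link_id > 0
      AND task_status <> 0
      AND event_class = 'eventexception'
      AND EXISTS (SELECT 1 FROM tablea ltask
                  WHERE ltask.task_id = tablea.link_id
                    AND ltask.task_status = 0)""")
con.commit()
print(cur.rowcount)  # 1: only task 2 qualifies
```

On the Oracle side, whether the EXISTS probe uses the primary-key index depends on the optimizer's estimate of how many outer rows survive the other predicates, which is why the indexes alone do not guarantee an index-driven plan.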
It is true a FULL TABLE SCAN is harmful, or at least no better than an index scan... -
SQL performance - multiple tables or one bigger one?
One product will have 5 different types of media that need to be associated with it.
Some products will have perhaps many items in one column while none in other columns.
For example:
Product ID | ###
images | photoID1, PhotoID2
multimedia |
testimonials |
Pdfs | pdfid1
INserts | insertid1, insertid2 insertid3
Anyway, I am wondering if creating one table with multiple columns is better than many tables with just one of each of these categories.
In other words, would it be better to have a table for multimedia, a table for PDFs and so on, or one table for all of them? Some rows will be empty.
Does the server take a bigger hit accessing multiple tables with individual items, or one table with more columns that are perhaps empty?
.oO(Lee)
>One product will have 5 different types of media that
need to be
>associated with it.
>
>Some products will have, perhaps many items in one column
while none in
>other columns
>
>For Example
>
>Product ID | ###
>images | photoID1, PhotoID2
>multimedia |
>testimonials |
>Pdfs | pdfid1
>INserts | insertid1, insertid2 insertid3
>
>Anyways, I am wondering if creating one table with
multiple columns is
>better than many tables with just one of each of these
categories?
>
>In other words, would it be better to have a table for
multi-media, a
>table for pdfs and so on
That would be one possible way.
>or one table for all of them. Some rows will be
>empty.
Don't do that, at least not in the way you described above. But you could do it with a single table if you add a column that describes the media type:
productId
mediaType
mediaId
There would be a record for every single associated media item. A product with 3 PDFs and 5 images would have 8 records in the media table.
Micha
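The single-table design described above can be sketched concretely. This example uses SQLite from Python; the table and column names are illustrative, and the media IDs come from the question's example:

```python
import sqlite3

# One row per associated media item, discriminated by a media_type column,
# instead of one sparse column per media category.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE product_media (
    product_id INTEGER NOT NULL,
    media_type TEXT NOT NULL,   -- 'image', 'pdf', 'insert', ...
    media_id   TEXT NOT NULL)""")
con.executemany("INSERT INTO product_media VALUES (?,?,?)", [
    (1, "image", "photoID1"), (1, "image", "PhotoID2"),
    (1, "pdf", "pdfid1"),
    (1, "insert", "insertid1"), (1, "insert", "insertid2"), (1, "insert", "insertid3"),
])
# All media for product 1, grouped by type: no empty columns anywhere.
counts = dict(con.execute(
    "SELECT media_type, COUNT(*) FROM product_media WHERE product_id = 1 "
    "GROUP BY media_type"))
print(counts)  # {'image': 2, 'insert': 3, 'pdf': 1} (order may vary)
```

A product with no PDFs simply has no 'pdf' rows, which is the point of the design: absence costs nothing, and adding a new media type needs no schema change.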