Performance issue in the table
Hi All,
I have a table based on a VO, and this VO has 150 attributes. I am facing a performance issue.
My first question: I need to display only 15 attributes in the table, but the other attributes are also required, so I have two options:
1. make the others form values, or
2. make them messageTextInputs and set rendered to false.
I just wanted to know which will perform better. Please help.
These attributes have some default values, and I need to pass these values to an API on some action. If I don't keep them on my page, will row.getAttribute("XYZ") still carry its value?
Similar Messages
Performance issue with the table VRKPA
Hi.
Here is the selection criteria I am using. The table VRKPA is only used to map KNA1 and VBRK, since VBRK and KNA1 do not have a direct primary key relationship.
Please check and let me know why VRKPA is taking so much time and how I can improve the performance. I can fetch data from KNA1 very easily, while I fetch nothing from VRKPA (it is only the mapping) and fetch FKDAT from VBRK.
The idea behind using these tables is, for one KUNNR (from KNA1), to get the relevant entries based on FKDAT (a selection screen input field). Please suggest.
SELECT kunnr
name1
land1
regio
ktokd
FROM kna1
INTO TABLE it_kna1
FOR ALL ENTRIES IN it_knb1
WHERE kunnr = it_knb1-kunnr
AND ktokd = '0003'.
IF sy-subrc = 0.
SORT it_kna1 BY kunnr.
DELETE ADJACENT DUPLICATES FROM it_kna1 COMPARING kunnr.
ENDIF.
ENDIF.
IF NOT it_kna1[] IS INITIAL.
SELECT kunnr
vbeln
FROM vrkpa
INTO TABLE it_vrkpa
FOR ALL ENTRIES IN it_kna1
WHERE kunnr = it_kna1-kunnr.
IF sy-subrc = 0.
SORT it_vrkpa BY kunnr vbeln.
ENDIF.
ENDIF.
IF NOT it_vrkpa[] IS INITIAL.
SELECT vbeln
kunrg
fkdat
kkber
bukrs
FROM vbrk
INTO TABLE it_vbrk
FOR ALL ENTRIES IN it_vrkpa
WHERE vbeln = it_vrkpa-vbeln.
IF sy-subrc = 0.
DELETE it_vbrk WHERE fkdat NOT IN s_indate.
DELETE it_vbrk WHERE fkdat NOT IN s_chdate.
DELETE it_vbrk WHERE bukrs NOT IN s_ccode.
SORT it_vbrk DESCENDING BY vbeln fkdat.
ENDIF.
ENDIF.
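For reference, the same read can be written as one join along the KNA1 -> VRKPA -> VBRK path described above. A minimal sketch, assuming the it_knb1 driver table and selection options from this report, and a hypothetical result type ty_bill:
TYPES: BEGIN OF ty_bill,
         kunnr TYPE kna1-kunnr,
         vbeln TYPE vbrk-vbeln,
         fkdat TYPE vbrk-fkdat,
         bukrs TYPE vbrk-bukrs,
       END OF ty_bill.
DATA it_bill TYPE STANDARD TABLE OF ty_bill WITH DEFAULT KEY.

* Guard against an empty it_knb1 first, as with any FOR ALL ENTRIES.
SELECT k~kunnr v~vbeln b~fkdat b~bukrs
  FROM kna1 AS k
  INNER JOIN vrkpa AS v ON v~kunnr = k~kunnr
  INNER JOIN vbrk  AS b ON b~vbeln = v~vbeln
  INTO TABLE it_bill
  FOR ALL ENTRIES IN it_knb1
  WHERE k~kunnr = it_knb1-kunnr
    AND k~ktokd = '0003'
    AND b~fkdat IN s_indate
    AND b~fkdat IN s_chdate
    AND b~bukrs IN s_ccode.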
Hi,
Transaction SE11
Table VRKPA => Display (not Change)
Click on "Indexes"
Click on "Create" (if your system is Basis 7.00, then click on the "Create" drop-down icon and choose "Create extension index")
Choose a name (up to 3 characters, starting with Z)
Enter a description for the index
Enter the field names of the index
Choose "Save" (prompts for transport request)
Choose "Activate"
If after "Activate" the status shows "Index exists in database system <...>", then you have nothing more to do. If the table is very large, the activation will not create the index in the database and the status remains "Index does not exist". In that case:
- Transaction SE14
- Table VRKPA -> Edit
- Choose "Indexes" and select your new index
- Choose "Create database index"; mark the option "Background"
- Wait until the job is finished and check in SE11 that the index now exists in the DB
You don't have to do anything to your program because Oracle should choose the new index automatically. Run an SQL trace to make sure.
Rgds,
Mark
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB), 100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
Edited by: Earthlink on Nov 14, 2010 11:31 AM
Edited by: Earthlink on Nov 14, 2010 11:32 AM
Edited by: Earthlink on Nov 20, 2010 12:04 PM
Edited by: Earthlink on Nov 20, 2010 12:54 PM
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
Performance issue in the Incremental ETL
Hi Experts,
We have a performance issue: the task "Custom_SIL_OvertimeFact" is taking more than 55 minutes.
This task reads a .csv file coming from the business every 2 weeks, with a few rows of data updated and appended.
Every day this task runs and scans all 70,000 records, when in reality we have just 4-5 records updated/appended.
Is there a way to shrink the time taken by this task?
When the records in the .csv file change only once a month, why does the mapping have to scan this many records on the daily incremental load?
Please provide me some suggestions.
best regards,
Kam
The reason you have to scan all the records is that you do not have an identifier to find what has been updated.
Because it is an insert/update job, it will be using normal load.
Truncating the table and reloading it would help you in terms of performance, since then you would be able to use bulk load.
If there is any reason you can't do that, then you can also look at the update strategy: all the fields used to determine whether a particular transaction is insert/update/existing should have an index defined on them. Apart from this, you should try to minimize the number of indexes on this table, or make sure you drop them while the load is running and recreate them once it has finished.
Also check your commit point and see if you can bump it up.
Edited by: Sid_BI on Jan 28, 2010 4:03 PM
Performance issue with COEP table in ECC 6
Hi,
Any idea how to resolve a performance issue on the COEP table in ECC 6.0?
We are not using the COEP table right now. This table occupies 100 GB of the 900 GB in the PRD system.
Can I directly archive/delete the table?
Regards
Siva
Hi Siva,
You cannot archive the COEP table alone. It should be archived along with the respective archive object. Just deleting the table is not at all a good idea.
For finding out the appropriate archive objects contributing to the entries in COEP, you need to perform a CO table analysis using programs RARCCOA1 and RARCCOA2. For further information refer to SAP Note 138688.
Hope this helps,
Naveen
Performance issue with the ABAP statements
Hello,
Can someone please help me with the statements below, where I am seeing a performance problem?
SELECT * FROM /BIC/ASALHDR0100 INTO TABLE CHDATE.
SORT CHDATE BY DOC_NUMBER.
SORT SOURCE_PACKAGE BY DOC_NUMBER.
LOOP AT CHDATE INTO WA_CHDATE.
  READ TABLE SOURCE_PACKAGE INTO WA_CIDATE
       WITH KEY DOC_NUMBER = WA_CHDATE-DOC_NUMBER BINARY SEARCH.
  IF SY-SUBRC = 0. " only copy when a matching row was found
    MOVE WA_CHDATE-CREATEDON TO WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE TO CIDATE.
  ENDIF.
ENDLOOP.
I wrote the above code for the following requirement:
1. I have 2 tables from which I am getting the data.
2. I have a common field in both tables, the CREATEDON date. In both tables I have values.
3. While accessing the 2 tables and copying to a third table, I have to modify the field.
I am getting performance issues with the above statements.
Thanks
Edited by: Rob Burbank on Jul 29, 2010 10:06 AM
Hello,
Try a select like the following one instead of your code:
SELECT field1 field2 ...
  INTO TABLE it_table
  FROM table1 AS t1 INNER JOIN table2 AS t2
  ON t1~doc_number = t2~doc_number.
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As a part of the upgrade, the client wants Oracle SES to be deployed for some modules, including Purchasing and Payables (Vouchers).
We are facing severe performance issues with the Vouchers index build (volume of data = approx. 8.5 million rows).
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.
Performance issues with the Tuxedo MQ Adapter
We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms in its execution, so there is a considerable amount of time spent in the MQ adapter that we cannot explain.
Also, we have seen a lot of rollback transactions on the MQ adapter; for example, we got 980 rollback transactions for 15,736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we got is:
135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
I have been looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?
Hi Todd,
We have 6 MQI adapters reading from 5 different queues, but in this case we are writing in only one queue.
Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, it can happen that another MQ adapter gets it first. In this case, the MQ adapter rolls back the transaction. Even when we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
However, I am more worried about the performance. We are putting a request message in an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases it takes much longer (more than 400 ms); the average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
This is our configuration:
"MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
SYSTEM_ACCESS=FASTPATH
MAXGEN=1 GRACE=86400 RESTART=N
MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
SICACHEENTRIESMAX="500"
/tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
*SERVER
MINMSGLEVEL=0
MAXMSGLEVEL=0
DEFMAXMSGLEN=4096
TPESVCFAILDATA=Y
*QUEUE_MANAGER
LQMID=QMTESX01
NAME=QMTESX01
*SERVICE
NAME=txgralms0
FORMAT=MQSTR
TRAN=N
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
Thanks in advance,
Marling
Performance issue in the CRM WebGUI
Dear Experts
We are facing a performance issue in the CRM WebGUI. All the standard and custom screens are slow; they take a long time to open.
However, from the ABAP side everything is OK; the response time is less than one second.
Is there any specific operation I have to perform in terms of CRM?
Regards
Haroon
Hello,
I guess you started the WebUI for the first time, right?
The first time it is very slow - SGEN does not really help here.
It should be much better if you call the same screens a second and third time.
See the link:
CRM Webclient UI - Generation of views after system restart / upgrade
Kind regards
Manfred -
Performance issue on the sys.dba_audit_session
I have the following query, which is taking a long time and has a performance issue.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS
failed_count
FROM sys.dba_audit_session
WHERE returncode != 0
AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
call     count    cpu   elapsed     disk    query  current     rows
Parse        1   0.01      0.04        0        0        0        0
Execute      1   0.00      0.00        0        0        0        0
Fetch        2  68.42    216.08  3943789  3960058        0        1
total        4  68.43    216.13  3943789  3960058        0        1
The view dba_audit_session is a select from the view dba_audit_trail. If you look at the definition of dba_audit_trail, it does a CAST on the ntimestamp# column, thereby disabling index access, because there is no function-based index on ntimestamp#. I am not even sure a function-based index would work to match what the view does:
cast ( /* TIMESTAMP */
(from_tz(ntimestamp#,'00:00') at local) as date),
To get index access, the metric would have to avoid the use of the view. I have changed the query like this:
SELECT /*+ INDEX(a I_AUD3) */ TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
HH24:MI:SS TZD') AS curr_timestamp, COUNT(userid) AS failed_count
FROM sys.aud$ a
WHERE returncode != 0
and action# between 100 and 102
AND ntimestamp# >= systimestamp at time zone 'GMT' - 30/1440
Is this a correct way to do it? Could you comment on this?
The query is run by Grid Control (or DB Console) to count the metric related to audit sessions, which is ON by default in 11g. To decrease the impact of this query you should purge the AUD$ table regularly.
The best way is to use DBMS_AUDIT_MGMT to periodically purge the data older than "whatever date". If you don't need the audit information, you can simply truncate AUD$.
Insert performance issue with Partitioned Table.....
Hi All,
I have a performance issue with an insert into a table which is partitioned. Without the table being partitioned
it ran in less time, but after partitioning it took more than double.
1) The table was created initially without any partition, and the insert below took only 27 minutes.
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:27:35.20
2) Now I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
Is there any way I can achieve better performance during insert on this partitioned table?
[Similarly, I have another table with 50 million records; the insert took 10 hrs without partitioning,
and with the table partitioned it took 18 hours...]
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4195045590
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
|* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
| 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
|* 3 | HASH JOIN | | | | | | |
| 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
| 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
Predicate Information (identified by operation id):
1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
"A"."VENDOR_CD"="B"."COMPANY_NO")
3 - access(ROWID=ROWID)
OPEN c1;
LOOP
  FETCH c1 BULK COLLECT INTO c_rectype LIMIT 10000;
  FORALL i IN 1 .. c_rectype.COUNT
    INSERT INTO test
      (col1, col2, col3)
    VALUES
      (c_rectype(i).val1, c_rectype(i).val2, c_rectype(i).val3);
  v_rec := v_rec + NVL(c_rectype.COUNT, 0);
  COMMIT;
  EXIT WHEN c_rectype.COUNT = 0;
  c_rectype.DELETE;
END LOOP;
END;
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:51:01.22
Edited by: user520824 on Jul 16, 2010 9:16 AM
I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
If you know which partition the data is going into beforehand, you can save a little bit of processing by specifying the partition in the insert (which may not be a scalable long-term solution) - I'm not 100% sure you can do this on inserts, but I know you can on selects.
The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective, and where it should help you, is if you can do the insert in one query: insert into / select from. If you are using the loop to avoid filling up undo/rollback, you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to, because more frequent commits slow transactions down.
I don't think there is a nologging hint :)
So, try something like
insert /*+ hints */ into ...
Select
A.Ing_Acct_Nbr, currency_Symbol,
Balance_Date, Company_No,
Substr(Account_No,1,8) Account_No,
Substr(Account_No,9,1) Typ_Cd ,
Substr(Account_No,10,1) Chk_Cd,
Td_Balance, Sd_Balance,
Sysdate, 'Sisadmin'
From Ideaal_Cons.Tb_Account_Master_Base A,
Ideaal_Staging.Tb_Sisadmin_Balance B
Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
And A.Vendor_Cd = b.company_no
;
Edited by: riedelme on Jul 16, 2010 7:42 AM
Performance issue with JEST table
Moved to correct forum by moderator
Hi all,
I have a report which is giving a performance issue.
It hits the function module STATUS_READ, which in turn hits the table JEST.
The select query is:
SELECT SINGLE * FROM JSTO CLIENT SPECIFIED
WHERE MANDT = MANDT
AND OBJNR = OBJNR.
I know we should not use CLIENT SPECIFIED, but this is SAP standard code.
Since this query is hit many times, it results in a TIME_OUT error.
I observed that the table JEST has 133,523,962 entries in production, and in the technical settings the size category is mentioned as 3 (data records expected: 280,000 to 1,100,000).
Since the data size is exceeded here, would changing the size category to 4 improve the performance?
Or should I ask the client to archive this table? If yes, please guide me on how to go about it. I have heard there are archiving objects; please specify which objects should be considered for archiving.
I can only think of the above two solutions; please let me know if there is any other workaround.
Thanks!
Edited by: Matt on Jan 27, 2009 11:12 AM
Hi,
I'm not sure of the exact archiving object for this; here are some archiving objects related to table JEST:
MM_EBAN
MM_EKKO
MM_MATNR
PP_ORDER
PR_ORDER
PM_NET
Please go through them using transaction SARA.
Thanks,
Mahesh
Performance issue with MSEG table
Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
Regards,
Amit
Hi,
There could be various reasons for a performance issue with MSEG:
1) Database statistics of the tables and indexes are not up to date;
because of this, the wrong index is chosen during execution.
2) Improper indexes, because there is no index with the fields mentioned in the WHERE clause of the statement. For this reason the CBO would have chosen a wrong index and done a range scan.
3) An optimizer bug in Oracle.
4) The size of the table is very large; archive.
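For illustration, this is the kind of access the question describes - a minimal sketch, assuming a hypothetical selection-screen parameter p_aufnr (MSEG-AUFNR and MSEG-MATNR are standard fields, but whether an index supports this access path depends on the system):
PARAMETERS p_aufnr TYPE mseg-aufnr. " hypothetical selection-screen input
DATA lt_matnr TYPE STANDARD TABLE OF matnr.

* Without an index containing AUFNR, this select ends up scanning
* the (typically huge) MSEG table.
SELECT matnr
  FROM mseg
  INTO TABLE lt_matnr
  WHERE aufnr = p_aufnr.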
Better switch on an ST05 trace before you run these statements; it will give more detailed information about where exactly the time is being spent during execution.
Hope this helps,
Dileep
Performance issue with the Select query
Hi,
I have an issue with the performance of a select query.
In table AFRU, AUFNR is not a key field.
So I selected the low and high values into s_rueck and used it in the WHERE condition.
Still I have an issue with the performance.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO table t_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
I have also checked by creating an index for AUFNR in the table AFRU... it does not help.
Is there any way we can declare a key field while declaring the internal table?
Any suggestions to fix the performance issue are appreciated!
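On the internal table question, ABAP does support keyed table types. A minimal sketch, assuming the ty_afru row type defined later in this thread:
* A sorted internal table keyed on AUFNR; READ TABLE ... WITH TABLE KEY
* aufnr = ... then uses a binary search automatically, with no
* explicit SORT needed.
DATA t_afru TYPE SORTED TABLE OF ty_afru
     WITH NON-UNIQUE KEY aufnr.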
Regards,
Kittu
Hi,
Thank you for your quick response!
Rui dantas, I have a little confusion... this is my code below:
data : t_zscprt type standard table of ty_zscprt,
wa_zscprt type ty_zscprt.
types : BEGIN OF ty_zscprt100,
aufnr type zscprt100-aufnr,
posnr type zscprt100-posnr,
ezclose type zscprt100-ezclose,
serialnr type zscprt100-serialnr,
lgort type zscprt100-lgort,
END OF ty_zscprt100.
data : t_zscprt100 type standard table of ty_zscprt100,
wa_zscprt100 type ty_zscprt100.
Types: begin of ty_afru,
rueck type CO_RUECK,
rmzhl type CO_RMZHL,
iedd type RU_IEDD,
aufnr type AUFNR,
stokz type CO_STOKZ,
stzhl type CO_STZHL,
end of ty_afru.
data : t_afru type STANDARD TABLE OF ty_afru,
WA_AFRU TYPE TY_AFRU.
SELECT AUFNR
POSNR
EZCLOSE
SERIALNR
LGORT
FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
FOR ALL ENTRIES IN T_ZSCPRT
WHERE AUFNR = T_ZSCPRT-PRTNUM
AND SERIALNR IN S_SERIAL
AND LGORT IN S_LGORT.
IF sy-subrc <> 0.
MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
stop."BDCG87
ENDIF.
ENDIF.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO TABLE T_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
Using AUFNR, get AUFPL from AFKO.
Using AUFPL, get RUECK from AFVC.
Using RUECK, read AFRU.
In other words, one select joining AFKO <-> AFVC <-> AFRU should get what you want.
This is my select query; do you want me to write another select query to meet this criteria?
From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU... but I need to select a few fields from AFRU based on AUFNR...
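For what it's worth, a minimal sketch of the single join suggested above, following the key path AUFNR -> AUFPL (AFKO) -> RUECK (AFVC) -> AFRU. The target table t_afru and the selection criteria are taken from the earlier select, so verify the field list against your requirement:
* One select joining AFKO <-> AFVC <-> AFRU instead of reading AFRU
* directly by the non-key field AUFNR.
SELECT f~rueck f~rmzhl f~iedd f~aufnr f~stokz f~stzhl
  FROM afko AS k
  INNER JOIN afvc AS v ON v~aufpl = k~aufpl
  INNER JOIN afru AS f ON f~rueck = v~rueck
  INTO TABLE t_afru
  FOR ALL ENTRIES IN t_zscprt100
  WHERE k~aufnr = t_zscprt100-aufnr
    AND f~rueck IN s_rueck
    AND f~stokz = space
    AND f~stzhl = 0.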
Any suggestions will be appreciated!
Regards
Kittu