IF statement is too long in SAP scripts.
Hi all,
My requirement is to check the vendor and print accordingly, but I am unable to write the complete statement on a single line. I used the page left and right option, but even then it does not fit on that line.
please help.
Regards,
Ramprasad
<<Moved by moderator to correct forum>>
Edited by: Matt on Nov 6, 2008 11:48 AM
You can create one local variable, assign your vendor variable to this local variable, and check that instead.
Ex:
lv_lifnr = your_long_variable.
IF NOT lv_lifnr IS INITIAL.
  " condition
ELSE.
  " condition
ENDIF.
Similar Messages
-
Update statement taking too long to execute
Hi All,
I'm trying to run this update statement, but it's taking too long to execute.
UPDATE ops_forecast_extract b SET position_id = (SELECT a.row_id
FROM s_postn a
WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME)))
WHERE position_level = 7
AND b.am_id IS NULL;
SELECT COUNT(*) FROM S_POSTN;
214665
SELECT COUNT(*) FROM ops_forecast_extract;
49366
SELECT count(*)
FROM s_postn a, ops_forecast_extract b
WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME));
575
What could be the reason for the update statement to execute so long?
Thanks
polasa wrote:
Hi All,
I'm trying to run this update statement, but it's taking too long to execute.
What could be the reason for the update statement to execute so long?
You haven't said what "too long" means, but a simple reason could be that the scalar subquery on "s_postn" is using a full table scan for each execution. Potentially this subquery gets executed for each row of the "ops_forecast_extract" table that satisfies your filter predicates. "Potentially" because the cunning "filter/subquery optimization" of the Oracle runtime engine attempts to cache the results of already executed instances of the subquery. Since the in-memory hash table that holds these cached results is of limited size, the optimization depends on the sort order of the data and can suffer from hash collisions, so it's unpredictable how well it works in your particular case.
You might want to check the execution plan, it should tell you at least how Oracle is going to execute the scalar subquery (it doesn't tell you anything about this "filter/subquery optimization" feature).
Generic instructions on how to generate a useful explain plan output and how to post it here follow:
Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the [code] tag before and [/code] tag after, or the {code} tag before and after, to enhance readability of the output provided:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that the DBMS_XPLAN package is only available from 9i on.
In 9i and above, if the "Predicate Information" section is missing from the DBMS_XPLAN.DISPLAY output but you get instead the message "Plan table is old version" then you need to re-create your plan table using the server side script "$ORACLE_HOME/rdbms/admin/utlxplan.sql".
In previous versions you could run the following in SQL*Plus (on the server) instead:
@?/rdbms/admin/utlxpls
A different approach in SQL*Plus:
SET AUTOTRACE ON EXPLAIN
<run your statement>;
will also show the execution plan.
In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
When your query takes too long ...
and post the "tkprof" output here, too.
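As a side note, when the scalar subquery is confirmed as the bottleneck, two common reworkings of this particular pattern are a function-based index and a MERGE rewrite. The following is only a sketch against the table and column names quoted from the original post, untested here:

```sql
-- Option 1: a function-based index, so each execution of the scalar
-- subquery can probe s_postn instead of scanning it fully:
CREATE INDEX s_postn_desc_fbi ON s_postn (UPPER(desc_text));

-- Option 2: rewrite the correlated UPDATE as a MERGE, so the join is
-- done once (typically as a hash join) instead of once per row:
MERGE INTO ops_forecast_extract b
USING (SELECT row_id, UPPER(desc_text) AS desc_up FROM s_postn) a
ON (    a.desc_up = UPPER(TRIM(b.position_name))
    AND b.position_level = 7
    AND b.am_id IS NULL)
WHEN MATCHED THEN
  UPDATE SET b.position_id = a.row_id;
```

Note that the MERGE is not strictly equivalent: the original UPDATE sets position_id to NULL for filtered rows with no match, while MERGE leaves unmatched rows untouched; and duplicate UPPER(desc_text) values in s_postn would raise ORA-30926.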
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
SQL Update statement taking too long..
Hi All,
I have a simple update statement that goes through a table of 95000 rows that is taking too long to update; here are the details:
Oracle Version: 11.2.0.1 64bit
OS: Windows 2008 64bit
desc temp_person;
Name Null? Type
PERSON_ID NOT NULL NUMBER(10)
DISTRICT_ID NOT NULL NUMBER(10)
FIRST_NAME VARCHAR2(60)
MIDDLE_NAME VARCHAR2(60)
LAST_NAME VARCHAR2(60)
BIRTH_DATE DATE
SIN VARCHAR2(11)
PARTY_ID NUMBER(10)
ACTIVE_STATUS NOT NULL VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
CPP_EXEMPT VARCHAR2(1)
EVENT_ID NOT NULL NUMBER(10)
USER_INFO_ID NUMBER(10)
TIMESTAMP NOT NULL DATE
CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
Index created.
ANALYZE INDEX tmp_PERSON_ED COMPUTE STATISTICS;
Index analyzed.
explain plan for update temp_person
2 set first_name = (select trim(f_name)
3 from ext_names_csv
4 where temp_person.PERSON_ID=ext_names_csv.p_id
5 and temp_person.DISTRICT_ID=ext_names_csv.ed_id);
Explained.
@?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 3786226716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 82095 | 4649K| 2052K (4)| 06:50:31 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 82095 | 4649K| 191 (1)| 00:00:03 |
|* 3 | EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV | 1 | 178 | 24 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
19 rows selected.
By the looks of it the update is going to take 6 hrs!!!
ext_names_csv is an external table that has the same number of rows as the PERSON table.
ROHO@rohof> desc ext_names_csv
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
F_NAME VARCHAR2(300)
L_NAME VARCHAR2(300)
Can anyone help diagnose this, please?
Thanks
Edited by: rsar001 on Feb 11, 2011 9:10 PM
Thank you all for the great ideas, you have been extremely helpful. Here is what we did and were able to resolve the query.
We started with Etbin's idea to create a table from the ext table so that we can index and reference easier than an external table, so we did the following:
SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
Table created.
SQL> desc ext_person
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
FST_NAME VARCHAR2(300)
LST_NAME VARCHAR2(300)
SQL> select count(*) from ext_person;
COUNT(*)
93383
SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
Index created.
SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
PL/SQL procedure successfully completed.
We had a look at the plan with the original SQL query that we had:
SQL> explain plan for update temp_person
2 set first_name = (select fst_name
3 from ext_person
4 where temp_person.PERSON_ID=ext_person.p_id
5 and temp_person.DISTRICT_ID=ext_person.ed_id);
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 1236196514
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 93383 | 1550K| 186K (50)| 00:37:24 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 93383 | 1550K| 191 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| EXTT_PERSON | 9 | 1602 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EXT_PERSON_ED | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
20 rows selected.
As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
SQL> explain plan for MERGE INTO temp_person t
2 USING (SELECT fst_name ,p_id,ed_id
3 FROM ext_person) ext
4 ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
5 WHEN MATCHED THEN
6 UPDATE set t.first_name=ext.fst_name;
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2192307910
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 92307 | 14M| | 1417 (1)| 00:00:17 |
| 1 | MERGE | TEMP_PERSON | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN | | 92307 | 20M| 6384K| 1417 (1)| 00:00:17 |
| 4 | TABLE ACCESS FULL| TEMP_PERSON | 93383 | 5289K| | 192 (2)| 00:00:03 |
| 5 | TABLE ACCESS FULL| EXT_PERSON | 92307 | 15M| | 85 (2)| 00:00:02 |
Predicate Information (identified by operation id):
3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
Note
- dynamic sampling used for this statement (level=2)
21 rows selected.
As you can see, the update now takes 00:00:17 to run (need to say more?) :)
Thank you all for your ideas that helped us get to the solution.
Much appreciated.
Thanks -
Replacing customer statement form (T-code F.27) SAP script with Smartform.
I have a requirement to automate sending the customer statement form, which is triggered through T-code F.27. For this we are trying to replace the SAP script triggered from program RFKORD11 with a Smartform. We have tried to copy program RFKORD11 to a custom one, assign it to the correspondence type in T-code OB78, and make the modifications, but we could not do so as we are not authorized to copy the program. Can anyone help us in achieving this functionality?
BR, Karthik G.
Re your question A:
F140_CUS_STAT_01 and F140_CUS_STAT_02 are SAP standard forms for customer statement. You can use either one but usually you will create your own (by copying) to add company logo and name, etc.
If you use program RFKORD11 for customer statement, you specify the form to be used in the "Correspondence" field. -
Update statement takes too long to run
Hello,
I am running this simple update statement, but it takes too long to run. It was running for 16 hours and then I cancelled it; it had not even finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K records. If I add ROWNUM < 20, the update statement works just fine and updates the right column with the right information. Do you have any ideas what could be wrong in my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different DB from the destination table. We are running Oracle 11g.
UPDATE DEV_OCS.DOCMETA IPM
SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
FROM [email protected] LKP
WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND
IPM.XIPMSYS_APP_ID = 2)
WHERE
IPM.XIPMSYS_APP_ID = 2;
Thanks,
Ilya
matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated". For example:
{code}
SQL> set linesize 132
SQL> explain plan for
2 update emp e
3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL>
{code}
As you can see, WITH clause by itself guaranties nothing. We must force optimizer to materialize it:
{code}
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3568118945
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
| 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
|* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
6 - filter("T"."DEPTNO"=:B1)
Remote SQL Information (identified by operation id):
PLAN_TABLE_OUTPUT
5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
25 rows selected.
SQL>
{code}
I know the materialize hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
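For the original DOCMETA update, an alternative to the undocumented hint is to rewrite the correlated lookup as a MERGE, so the remote table is shipped across the database link once and joined locally. This is a sketch only, reusing the names exactly as they appeared in the post (the link name was garbled there), and untested:

```sql
MERGE INTO DEV_OCS.DOCMETA ipm
USING (SELECT DISTINCT lkp.doc_num, lkp.doc_status
       FROM [email protected] lkp) src   -- remote table name as posted
ON (    src.doc_num = ipm.xipm_app_2_1
    AND ipm.xipmsys_app_id = 2)
WHEN MATCHED THEN
  UPDATE SET ipm.xipm_app_2_17 = src.doc_status;
```

Caveat: if one DOC_NUM maps to more than one DOC_STATUS, the MERGE fails with ORA-30926 (and the original scalar subquery would likewise fail with ORA-01427).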
SY. -
SQL Statement taking too long to get the data
Hi,
There are over 2500 records in a table, and when I retrieve all of them using 'SELECT * FROM Table' it takes too long to get the data, i.e. 4.3 secs.
Is there any possible way to shorten the processing time?
Thanks
Hi Patrick,
Here is the sql statement and table desc.
ID Number
SN Varchar2(12)
FN Varchar2(30)
LN Varchar2(30)
By Varchar(255)
Dt Date(7)
Add Varchar2(50)
Add1 Varchar2(30)
Cty Varchar2(30)
Stt Varchar2(2)
Zip Varchar2(12)
Ph Varchar2(15)
Email Varchar2(30)
ORgId Number
Act Varchar2(3)
select A."FN" || '' '' || A."LN" || '' ('' || A."SN" || '')'' "Name",
A."By", A."Dt",
A."Add" || ''
'' || A."Cty" || '', '' || A."Stt" || '' '' || A."Zip" "Location",
A."Ph", A."Email", A."ORgId", A."ID",
A."SN" "OSN", A."Act"
from "TBL_OPTRS" A where A."ID" <> 0 ';
I'm displaying all rows in a report.
if I use 'select * from TBL_OPTRS' , this also takes 4.3 to 4.6 secs.
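If most of the 4.3 seconds is fetch and rendering time rather than query time, it can help to measure the two separately. A quick way in SQL*Plus (standard settings, nothing specific to this schema):

```sql
SET TIMING ON
SET ARRAYSIZE 500                  -- fetch more rows per round trip (default is 15)
SET AUTOTRACE TRACEONLY STATISTICS -- execute the query but suppress row display
SELECT * FROM TBL_OPTRS;
```

If the elapsed time drops sharply once the display is suppressed, the time is going into client-side formatting and network transfer, not into the SQL itself.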
Thanks. -
ABAP select statements takes too long
Hi,
I have a select statement as shown below.
SELECT * FROM BSEG INTO TABLE ITAB_BSEG
WHERE BUKRS = CO_CODE
AND BELNR IN W_DOCNO
AND GJAHR = THISYEAR
AND AUGBL NE SPACE.
This select statement runs fine in all of our R/3 systems except for one. The problem shown with this particular system is that the query takes very long (up to an hour if W_DOCNO consists of 5 entries). Sometimes, before the query can complete, an ABAP runtime error is encountered, as shown below:
<b>Database error text........: "ORA-01555: snapshot too old: rollback segment
number 7 with name "PRS_5" too small?"
Internal call code.........: "[RSQL/FTCH/BSEG ]"
Please check the entries in the system log (Transaction SM21). </b>
Please help me on this issue. However, do not give me suggestions about selecting from a smaller table (bsik, bsak) as my situation does not permit it.
I will reward points.
Don't use SELECT * ....
Instead, declare your itab with only the required fields and then refer to those fields in the SELECT:
DATA: BEGIN OF itab OCCURS 0,
        f1 ...,
        f2 ...,
        f3 ...,
        f4 ...,
      END OF itab.
SELECT f1 f2 f3 f4
  INTO TABLE itab
  FROM bseg
  WHERE ...
This improves the performance.
SELECT * is not advised.
regards,
vijay -
Update statement time too long
HI
I'm doing this update statement using TOAD and it's been an hour now and it has not finished (it's only 900 records),
and when I try to update only 20 records it takes about 3 min:
update IC_ITEM_MST set WHSE_ITEM_ID ='503' where ITEM_NO like 'PP%'
thnx
Edited by: george samaan on Dec 21, 2008 10:35 PM
select * from v$locked_object
gave me this
XIDUSN XIDSLOT XIDSQN OBJECT_ID SESSION_ID ORACLE_USERNAME
OS_USER_NAME PROCESS LOCKED_MODE
11 15 10999 36834 97 APPS
appltst2 897256 3
10 47 347465 63200 14 APPS
Administrator 3124:2324 2
10 47 347465 63569 14 APPS
Administrator 3124:2324 2
10 47 347465 63867 14 APPS
Administrator 3124:2324 3
10 47 347465 64380 14 APPS
Administrator 3124:2324 2
10 47 347465 64447 14 APPS
Administrator 3124:2324 2
10 47 347465 64934 14 APPS
XIDUSN XIDSLOT XIDSQN OBJECT_ID SESSION_ID ORACLE_USERNAME
OS_USER_NAME PROCESS LOCKED_MODE
Administrator 3124:2324 2
10 47 347465 78678 14 APPS
Administrator 3124:2324 3
10 47 347465 79069 14 APPS
Administrator 3124:2324 3
10 47 347465 64026 14 APPS
Administrator 3124:2324 3
10 47 347465 93468 14 APPS
Administrator 3124:2324 3
10 47 347465 209903 14 APPS
Administrator 3124:2324 3
10 47 347465 80084 14 APPS
Administrator 3124:2324 3
XIDUSN XIDSLOT XIDSQN OBJECT_ID SESSION_ID ORACLE_USERNAME
OS_USER_NAME PROCESS LOCKED_MODE
0 0 0 36944 60 APPS
appltst2 1572894 3
14 rows selected. -
Insert statement taking too long
Hi,
I am inserting in a TAB_A from TAB_B using following statement
INSERT INTO TAB_A
SELECT * FROM TAB_B;
In TAB_A more than 98000000 rows exits. While in TAB_B rows may be vary from 1000 to 1000000.
TAB_A is a partition table.
This insert is taking more than 3 hrs to insert 800000 rows from TAB_B.
I need to improve performance of this insert statement.
Can you please help me ???Hi,
Try this:
INSERT /*+ APPEND */ INTO tab_a SELECT * FROM tab_b; -
Long lines corrupt script output
When SQL is run as a script (F5) and produces too-long lines, the Script Output goes blank. The text is there in the Script Output tab and I can copy and paste it, but I can't see it.
What setting could affect this?
--This SQL is working fine
select rpad('Test1',4000), rpad(' ',996) from dual;
--and this sql output is white on white:
select rpad('Test2',4000), rpad(' ',997) from dual;
Adam Dziurda
PS. Affected version: Oracle SQL Developer Version 4.0.0.12.27 (aka 4.0 EA1)
Yes, I see that. I've logged bug 17218146 for it.
Thanks
Barry -
Long Text printing in SAP SCRIPT
Hi Experts,
I have a requirement of printing long text in sapscript.
There are 15 condition types for each item in sales order and one long text for each condition record.
Each long text has multiple lines, i.e. one long text may have 2 lines and another may have 1 or 3 lines, etc.
My trials:
I used the READ_TEXT function module in a routine, which is called from the SAP script, to get the whole long text (which has 5 lines) into an internal table.
Now, is there a way to transfer the whole internal table data as a whole into the script, i.e. is there a way to pass the table from the routine to the SAPscript?
Thanks in advance.
kalikonda.
Hi
In addition to my include solution, you can of course use a PERFORM statement if you have a maximum possible number of lines,
like this (if you have a maximum of 5 lines):
/: DEFINE &LINE_1& := ' '
/: DEFINE &LINE_2& := ' '
/: DEFINE &LINE_3& := ' '
/: DEFINE &LINE_4& := ' '
/: DEFINE &LINE_5& := ' '
/: PERFORM GETSOMEDATA IN PROGRAM ABCXYZ
/: USING &ORDERNO&
/: USING &ITEMNO&
/: CHANGING &LINE_1&
/: CHANGING &LINE_2&
/: CHANGING &LINE_3&
/: CHANGING &LINE_4&
/: CHANGING &LINE_5&
/: ENDPERFORM
When printing the data:
/: if &line_1& NE ' '
IL &line_1&
/: endif
/: if &line_2& NE ' '
IL &line_2&
/: endif
/: if &line_3& NE ' '
IL &line_3&
/: endif
/: if &line_4& NE ' '
IL &line_4&
/: endif
/: if &line_5& NE ' '
IL &line_5&
/: endif
Gr., Frank -
Data Archive Script is taking too long to delete a large table
Hi All,
We have data archive scripts; these scripts move data for a date range to a different table. So the script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, i.e. around 2-3 hours. The customer analysed the delete query and says the script is not using the index and is doing a full table scan, but the predicate itself is the primary key. Please help... More info below.
CREATE TABLE "APP"."MON_TXNS"
( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
"BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_PAYER" NUMBER(12,0),
"ID_PAYER_PI" NUMBER(12,0),
"ID_PAYEE" NUMBER(12,0),
"ID_PAYEE_PI" NUMBER(12,0),
"ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
"STR_TEXT" VARCHAR2(60 CHAR),
"DAT_MERCHANT_TIMESTAMP" DATE,
"STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
"DAT_EXPIRATION" DATE,
"DAT_CREATION" DATE,
"STR_USER_CREATION" VARCHAR2(30 CHAR),
"DAT_LAST_UPDATE" DATE,
"STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
"STR_OTP" CHAR(6 BYTE),
"ID_AUTH_METHOD_PAYER" NUMBER(1,0),
"AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
"BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
"ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ENABLE,
CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
Data is first moved to a table in schema3.OTW, and then we delete all the rows that are in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
2 delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2798378986
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | 2520 | 233K| 87 (2)| 00:00:02 |
| 1 | DELETE | MON_TXNS | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 2520 | 233K| 87 (2)| 00:00:02 |
| 3 | INDEX FAST FULL SCAN| OTW_ID_TXN | 2520 | 15120 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | MON_TXNS | 14260 | 1239K| 83 (0)| 00:00:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
Please help,
thanks,
Banka Ravi
'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
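A minimal sketch of that partitioning idea, keyed on the DAT_CREATION column from the posted DDL (assumes 11g interval partitioning; constraints, indexes, and the swap-in of the new table are omitted, and the table name is hypothetical):

```sql
-- Hypothetical partitioned copy: one range partition per month.
CREATE TABLE app.mon_txns_part
PARTITION BY RANGE (dat_creation)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_old VALUES LESS THAN (DATE '2012-01-01'))
AS SELECT * FROM app.mon_txns;

-- Archiving a month then becomes a metadata operation, not a big DELETE:
ALTER TABLE app.mon_txns_part DROP PARTITION
  FOR (DATE '2012-03-15') UPDATE GLOBAL INDEXES;
```

Foreign keys referencing the table would still have to be dealt with before a partition can be dropped.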
The other solution is to quit waiting so long to delete data, so that you don't have to delete large amounts at the same time. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index. -
Hi All,
I have written the below IF statement in SAP scripts, but when I execute it, the control doesn't check the second line's entries. If the first line is not satisfied, it goes to the ELSE part. Kindly suggest what is wrong in this..
/: IF &T156T-BWART& = '321' OR &T156T-BWART& = '322' OR
/: &T156T-BWART& = '349' OR &T156T-BWART& = '350' OR
/: &T156T-BWART& = '312' OR &T156T-BWART& = '326' OR
/: &T156T-BWART& = '343' OR &T156T-BWART& = '344'.
/: ELSE
/: ENDIF.
Hi Neha,
Try to use '/E' for the next line.
/E -> Extended line
Here is the code:
/: IF &T156T-BWART& = '321' OR &T156T-BWART& = '322' OR
/E &T156T-BWART& = '349' OR &T156T-BWART& = '350' OR
/E &T156T-BWART& = '312' OR &T156T-BWART& = '326' OR
/E &T156T-BWART& = '343' OR &T156T-BWART& = '344'.
/: ELSE
/: ENDIF.
Hope this helps you.
Regards,
Rajani -
Use of IF statement in SAP Scripts
Can you tell me how to use the IF statement in SAP Scripts?
The problem is
if &sy-tabix& eq '1'
total
else
total1.
endif.
this sy-tabix is not working
I think sy-tabix will not work here....
Do it like this:
DATA: vtabix TYPE i.
LOOP AT itab.
  vtabix = sy-tabix.
  "... WRITE_FORM for the text element ...
ENDLOOP.
In the form layout:
/: IF &VTABIX(C)& EQ 1
/: ENDIF
regards
shiba dutta -
Report script taking too long to export data
Hello guys,
I have a report script to export data out of a BSO cube. The cube is 200GB in size. But the exported text file is only 10MB. It takes around 40 mins to export this file.
I have exported data of this size in less than a minute from other DBs. But this one is taking way too long for me.
I also have a calc script for the same export but that too is taking 20 mins which is not reasonable for a 10MB export.
Any idea why a report script could take this long? Is it due to huge size of database? Or is there a way to optimize the report script?
Any help would be appreciated.
Thanks
Thanks for the input, guys.
My DATAEXPORT is taking half the time of my report script export. So yeah, it is much faster, but still not reasonable (20 mins for one month of data) compared to other DBs that export very quickly.
In my calc I am just FIXING on level 0 members for most of the dimensions against the specific period, year and scenario. I have checked the conditions for an optimal report script, I think mine is just fine.
The outline has like 15 dimensions in it and only two of them are dense. Do you think the reason might be the huge size of DB along with too many sparse Dims?
I appreciate your help on this.
Thanks