SQL Statement taking different path on 2 servers
I ran the same SQL statement on two servers (both 8.1.7.3).
One takes the index path and returns results quickly, but the second server, which is our production server, takes a very different path and times out on the query. I ran the explain plan on both and can see the differences.
How can I identify what causes them to behave differently? Can you suggest some ideas and steps that I can investigate further?
Thanks, Moling
1. The statistics are run nightly on all tables.
exec sys.dbms_stats.gather_schema_stats ...
2. What causes the BITMAP CONVERSION (TO ROWIDS) and BITMAP CONVERSION (FROM ROWIDS) steps that I see for table U_SUPPLIER_PART_PLANT_MAP? This table has a DOMAIN index.
FAST QUERY
==============
i p PLAN_PLUS_EXP
0 SELECT STATEMENT optimizer=FIRST_ROWS (cost=123 card=1 bytes=884)
1 0 NESTED LOOPS (cost=123 card=1 bytes=884)
2 1 NESTED LOOPS (OUTER) (cost=122 card=1 bytes=877)
3 2 NESTED LOOPS (OUTER) (cost=121 card=1 bytes=861)
4 3 NESTED LOOPS (cost=120 card=1 bytes=856)
5 4 NESTED LOOPS (cost=119 card=2 bytes=1658)
6 5 NESTED LOOPS (OUTER) (cost=118 card=3 bytes=2472)
7 6 NESTED LOOPS (cost=117 card=3 bytes=2448)
8 7 NESTED LOOPS (cost=116 card=3 bytes=1506)
9 8 NESTED LOOPS (OUTER) (cost=115 card=3 bytes=1284)
10 9 NESTED LOOPS (OUTER) (cost=114 card=3 bytes=1260)
11 10 NESTED LOOPS (cost=113 card=3 bytes=1239)
12 11 NESTED LOOPS (OUTER) (cost=112 card=3 bytes=1161)
13 12 NESTED LOOPS (OUTER) (cost=111 card=3 bytes=1137)
14 13 NESTED LOOPS (OUTER) (cost=110 card=3 bytes=1113)
15 14 NESTED LOOPS (cost=109 card=3 bytes=1089)
16 15 NESTED LOOPS (cost=101 card=87 bytes=30798)
17 16 TABLE ACCESS (BY INDEX ROWID) of 'U_SUPPLIER_PART_PLANT_MAP' (cost=99 card=87 bytes=30363)
18 17 DOMAIN INDEX of 'CTX_10026328' (cost=88)
19 16 INDEX (UNIQUE SCAN) of 'UK_U_SITE1' (UNIQUE)
20 15 TABLE ACCESS (BY INDEX ROWID) of 'U_ORGANIZATION_DIMENSION' (cost=1 card=1 bytes=9)
21 20 INDEX (UNIQUE SCAN) of 'UK_U_ORGANIZATION_DIMENSION' (UNIQUE)
22 14 TABLE ACCESS (BY INDEX ROWID) of 'SPM_MATERIAL_GROUP' (cost=1 card=120 bytes=960)
23 22 INDEX (UNIQUE SCAN) of 'UK_SPM_MATERIAL_GROUP' (UNIQUE)
24 13 TABLE ACCESS (BY INDEX ROWID) of 'U_PURCHASING_GROUP' (cost=1 card=269 bytes=2152)
25 24 INDEX (UNIQUE SCAN) of 'UK_U_PURCHASING_GROUP' (UNIQUE)
26 12 TABLE ACCESS (BY INDEX ROWID) of 'SPM_UOM' (cost=1 card=428 bytes=3424)
27 26 INDEX (UNIQUE SCAN) of 'UK_SPM_UOM' (UNIQUE)
28 11 TABLE ACCESS (BY INDEX ROWID) of 'SPM_SUPP_PART' (cost=1 card=423258 bytes=11004708)
29 28 INDEX (UNIQUE SCAN) of 'SYS_C0019266' (UNIQUE)
30 10 TABLE ACCESS (BY INDEX ROWID) of 'SPM_CURRENCY' (cost=1 card=1 bytes=7)
31 30 INDEX (UNIQUE SCAN) of 'UK_SPM_CURRENCY' (UNIQUE)
32 9 TABLE ACCESS (BY INDEX ROWID) of 'SPM_UOM' (cost=1 card=428 bytes=3424)
33 32 INDEX (UNIQUE SCAN) of 'UK_SPM_UOM' (UNIQUE)
34 8 TABLE ACCESS (BY INDEX ROWID) of 'SPM_REFERENCE_ITEM1' (cost=1 card=385224 bytes=28506576)
35 34 INDEX (UNIQUE SCAN) of 'UK_SPM_REFERENCE_ITEM11' (UNIQUE)
36 7 TABLE ACCESS (BY INDEX ROWID) of 'SPD_MANUFACTURER_PART' (cost=1 card=385175 bytes=120944950)
37 36 INDEX (UNIQUE SCAN) of 'UK_SPD_MANUFACTURER_PART' (UNIQUE)
38 6 TABLE ACCESS (BY INDEX ROWID) of 'SPM_UOM' (cost=1 card=428 bytes=3424)
39 38 INDEX (UNIQUE SCAN) of 'UK_SPM_UOM' (UNIQUE)
40 5 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER' (UNIQUE)
41 4 TABLE ACCESS (BY INDEX ROWID) of 'U_SUPPLIER_DIMENSION' (cost=1 card=2 bytes=54)
42 41 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER_DIMENSION' (UNIQUE)
43 3 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER' (UNIQUE)
44 2 TABLE ACCESS (BY INDEX ROWID) of 'U_SUPPLIER_DIMENSION' (cost=1 card=8276 bytes=132416)
45 44 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER_DIMENSION' (UNIQUE)
46 1 TABLE ACCESS (BY INDEX ROWID) of 'S_ROT_CLASS' (cost=1 card=12943407 bytes=90603849)
47 46 INDEX (UNIQUE SCAN) of 'UK_S_ROT_CLASS' (UNIQUE)
48 rows selected.
SLOW QUERY
===========
i p PLAN_PLUS_EXP
0 SELECT STATEMENT optimizer=FIRST_ROWS (cost=28278 card=52 bytes=49452)
1 0 NESTED LOOPS (cost=28278 card=52 bytes=49452)
2 1 NESTED LOOPS (OUTER) (cost=28273 card=52 bytes=48932)
3 2 NESTED LOOPS (OUTER) (cost=28268 card=52 bytes=48412)
4 3 NESTED LOOPS (OUTER) (cost=28263 card=52 bytes=47892)
5 4 NESTED LOOPS (cost=28257 card=52 bytes=47372)
6 5 NESTED LOOPS (cost=28252 card=52 bytes=46800)
7 6 NESTED LOOPS (cost=28251 card=52 bytes=46488)
8 7 NESTED LOOPS (OUTER) (cost=305 card=70 bytes=37240)
9 8 NESTED LOOPS (OUTER) (cost=298 card=70 bytes=36540)
10 9 NESTED LOOPS (cost=291 card=70 bytes=35910)
11 10 NESTED LOOPS (cost=285 card=64 bytes=30784)
12 11 NESTED LOOPS (OUTER) (cost=279 card=64 bytes=25792)
13 12 NESTED LOOPS (OUTER) (cost=272 card=64 bytes=24640)
14 13 NESTED LOOPS (OUTER) (cost=271 card=64 bytes=24256)
15 14 NESTED LOOPS (cost=265 card=64 bytes=23616)
16 15 NESTED LOOPS (cost=7 card=2 bytes=72)
17 16 TABLE ACCESS (FULL) of 'U_SUPPLIER_DIMENSION' (cost=6 card=2 bytes=60)
18 16 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER' (UNIQUE)
19 15 TABLE ACCESS (BY INDEX ROWID) of 'SPD_MANUFACTURER_PART' (cost=129 card=375810 bytes=125144730)
20 19 INDEX (RANGE SCAN) of 'NIS_P649987' (NON-UNIQUE)
21 14 TABLE ACCESS (BY INDEX ROWID) of 'SPM_UOM' (cost=1 card=428 bytes=4280)
22 21 INDEX (UNIQUE SCAN) of 'UK_SPM_UOM' (UNIQUE)
23 13 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER' (UNIQUE)
24 12 TABLE ACCESS (BY INDEX ROWID) of 'U_SUPPLIER_DIMENSION' (cost=1 card=8652 bytes=155736)
25 24 INDEX (UNIQUE SCAN) of 'UK_U_SUPPLIER_DIMENSION' (UNIQUE)
26 11 TABLE ACCESS (BY INDEX ROWID) of 'SPM_REFERENCE_ITEM1' (cost=1 card=375842 bytes=29315676)
27 26 INDEX (UNIQUE SCAN) of 'UK_SPM_REFERENCE_ITEM11' (UNIQUE)
28 10 TABLE ACCESS (BY INDEX ROWID) of 'SPM_SUPP_PART' (cost=1 card=409148 bytes=13092736)
29 28 INDEX (RANGE SCAN) of 'NIS_P651218' (NON-UNIQUE)
30 9 TABLE ACCESS (BY INDEX ROWID) of 'SPM_CURRENCY' (cost=1 card=1 bytes=9)
31 30 INDEX (UNIQUE SCAN) of 'UK_SPM_CURRENCY' (UNIQUE)
32 8 TABLE ACCESS (BY INDEX ROWID) of 'SPM_UOM' (cost=1 card=428 bytes=4280)
33 32 INDEX (UNIQUE SCAN) of 'UK_SPM_UOM' (UNIQUE)
34 7 TABLE ACCESS (BY INDEX ROWID) of 'U_SUPPLIER_PART_PLANT_MAP' (cost=28251 card=304053 bytes=110067186)
35 34 BITMAP CONVERSION (TO ROWIDS)
36 35 BITMAP AND
37 36 BITMAP CONVERSION (FROM ROWIDS)
38 37 INDEX (RANGE SCAN) of 'NIS_P651535' (NON-UNIQUE) (cost=765)
39 36 BITMAP CONVERSION (FROM ROWIDS)
40 39 SORT (ORDER BY)
41 40 DOMAIN INDEX of 'CTX_10026328' (cost=25098 card=304053)
42 6 INDEX (UNIQUE SCAN) of 'UK_U_SITE1' (UNIQUE)
43 5 TABLE ACCESS (BY INDEX ROWID) of 'U_ORGANIZATION_DIMENSION' (cost=1 card=1 bytes=11)
44 43 INDEX (UNIQUE SCAN) of 'UK_U_ORGANIZATION_DIMENSION' (UNIQUE)
45 4 TABLE ACCESS (BY INDEX ROWID) of 'SPM_MATERIAL_GROUP' (cost=1 card=119 bytes=1190)
46 45 INDEX (UNIQUE SCAN) of 'UK_SPM_MATERIAL_GROUP' (UNIQUE)
47 3 TABLE ACCESS (BY INDEX ROWID) of 'U_PURCHASING_GROUP' (cost=1 card=269 bytes=2690)
48 47 INDEX (UNIQUE SCAN) of 'UK_U_PURCHASING_GROUP' (UNIQUE)
49 2 TABLE ACCESS (BY INDEX ROWID) of 'SPM_UOM' (cost=1 card=428 bytes=4280)
50 49 INDEX (UNIQUE SCAN) of 'UK_SPM_UOM' (UNIQUE)
51 1 TABLE ACCESS (BY INDEX ROWID) of 'S_ROT_CLASS' (cost=1 card=13126048 bytes=131260480)
52 51 INDEX (UNIQUE SCAN) of 'UK_S_ROT_CLASS' (UNIQUE)
53 rows selected.
Similar Messages
-
SQL Statement taking too long to get the data
Hi,
There are over 2,500 records in a table, and retrieving them all using 'SELECT * FROM table' takes too long: 4.3 secs.
Is there any possible way to shorten the process time?
Thanks

Hi Patrick,
Here is the sql statement and table desc.
ID Number
SN Varchar2(12)
FN Varchar2(30)
LN Varchar2(30)
By Varchar(255)
Dt Date(7)
Add Varchar2(50)
Add1 Varchar2(30)
Cty Varchar2(30)
Stt Varchar2(2)
Zip Varchar2(12)
Ph Varchar2(15)
Email Varchar2(30)
ORgId Number
Act Varchar2(3)
select A."FN" || ' ' || A."LN" || ' (' || A."SN" || ')' "Name",
A."By", A."Dt",
A."Add" || ' ' || A."Cty" || ', ' || A."Stt" || ' ' || A."Zip" "Location",
A."Ph", A."Email", A."ORgId", A."ID",
A."SN" "OSN", A."Act"
from "TBL_OPTRS" A where A."ID" <> 0;
I'm displaying all rows in a report.
If I use 'select * from TBL_OPTRS', this also takes 4.3 to 4.6 secs.
Thanks. -
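For what it's worth, fetching a few thousand rows should itself be nearly instant; the 4.3 seconds is more likely spent rendering the report than in the database. A small Python/sqlite3 sketch (the table and column names here are made up, not the poster's actual schema) showing how to time just the fetch, separately from any display work:

```python
import sqlite3
import time

# Hypothetical stand-in for the TBL_OPTRS table above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_optrs (id INTEGER, fn TEXT, ln TEXT)")
conn.executemany(
    "INSERT INTO tbl_optrs VALUES (?, ?, ?)",
    [(i, f"f{i}", f"l{i}") for i in range(1, 2501)],
)

# Time only the database fetch, not report rendering.
t0 = time.perf_counter()
rows = conn.execute("SELECT * FROM tbl_optrs WHERE id <> 0").fetchall()
elapsed = time.perf_counter() - t0

print(len(rows))      # 2500
print(elapsed < 1.0)  # a fetch of this size is normally sub-second
```

If the fetch alone is fast but the page is slow, the time is going into formatting/transport, and no amount of SQL tuning will change it.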
SQL statement taking a long time
Hi friends,
The below query is taking a very long time to execute. Please give some advise to optimise it.
INSERT INTO AFMS_ATT_DETL (
ATT_NUM, ATT_REF_CD, ATT_REF_TYP_CD, ATT_DOC_CD, ATT_FILE_NAM,
ATT_FILE_VER_NUM_TXT, ATT_DOC_LIB_NAM, ATT_DESC_TXT, ATT_TYP_CD,
ACTIVE_IND, CRT_BY_USR_NUM, CRT_DTTM, UPD_BY_USR_NUM, UPD_DTTM,
APP_ACC_CD, ARCH_CRT_BTCH_NUM, ARCH_CRT_DTTM, ARCH_UPD_BTCH_NUM, ARCH_UPD_DTTM)
(SELECT
K.ATT_NUM,
K.ATT_REF_CD,
K.ATT_REF_TYP_CD,
K.ATT_DOC_CD,
K.ATT_FILE_NAM,
K.ATT_FILE_VER_NUM_TXT,
K.ATT_DOC_LIB_NAM,
K.ATT_DESC_TXT,
K.ATT_TYP_CD,
K.ACTIVE_IND,
K.CRT_BY_USR_NUM,
K.CRT_DTTM,
K.UPD_BY_USR_NUM,
K.UPD_DTTM ,
L_APP_ACC_CD1,
L_ARCH_BTCH_NUM,
SYSDATE ,
L_ARCH_BTCH_NUM ,
SYSDATE
FROM
FMS_ATT_DETL K
WHERE
( K.ATT_REF_CD IN (SELECT CSE_CD FROM T_AFMS_CSE_DETL)
  AND K.ATT_REF_TYP_CD = 'ATTREF03' )
OR
( ATT_REF_CD IN (SELECT TO_CHAR(CMNT_PROC_NUM) FROM AFMS_CMNT_PROC)
  AND ATT_REF_TYP_CD = 'ATTREF02' )
OR
( ATT_REF_CD IN (SELECT TO_CHAR(CSE_RPLY_NUM) FROM AFMS_CSE_RPLY)
  AND ATT_REF_TYP_CD = 'ATTREF01'
  AND NOT EXISTS (SELECT ATT_NUM FROM (
        SELECT B.CSE_RPLY_NUM CSE_RPLY_NUM, B.ATT_NUM ATT_NUM
        FROM FMS_CSE_RPLY_ATT_MAP B
        WHERE NOT EXISTS
          (SELECT A.CSE_RPLY_NUM FROM AFMS_CSE_RPLY A
           WHERE A.CSE_RPLY_NUM = B.CSE_RPLY_NUM)
      ) X WHERE X.ATT_NUM = K.ATT_NUM)
));

The explain plan for the above query is as follows:
PLAN_TABLE_OUTPUT
Plan hash value: 871385851
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 9 | 1188 | 6 (17)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | AFMS_ATT_DETL | | | | |
|* 2 | FILTER | | | | | |
|* 3 | HASH JOIN RIGHT ANTI | | 167 | 22044 | 6 (17)| 00:00:01 |
| 4 | VIEW | VW_SQ_1 | 1 | 13 | 1 (0)| 00:00:01 |
| 5 | NESTED LOOPS ANTI | | 1 | 12 | 1 (0)| 00:00:01 |
| 6 | INDEX FULL SCAN | FMS_CSE_RPLY_ATT_MAP_PK | 25 | 200 | 1 (0)| 00:00:01 |
|* 7 | INDEX UNIQUE SCAN | AFMS_CSE_RPLY_PK | 162 | 648 | 0 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | FMS_ATT_DETL | 167 | 19873 | 4 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | T_AFMS_CSE_DETL_PK | 1 | 9 | 0 (0)| 00:00:01 |
|* 10 | INDEX FULL SCAN | AFMS_CSE_RPLY_PK | 1 | 4 | 1 (0)| 00:00:01 |
|* 11 | INDEX FULL SCAN | AFMS_CMNT_PROC_PK | 1 | 5 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("K"."ATT_REF_TYP_CD"='ATTREF03' AND EXISTS (SELECT 0 FROM "T_AFMS_CSE_DETL" "T_AFMS_CSE_DETL" WHERE "CSE_CD"=:B1) OR "ATT_REF_TYP_CD"='ATTREF01' AND EXISTS (SELECT 0 FROM "AFMS_CSE_RPLY" "AFMS_CSE_RPLY" WHERE TO_CHAR("CSE_RPLY_NUM")=:B2) OR "ATT_REF_TYP_CD"='ATTREF02' AND EXISTS (SELECT 0 FROM "AFMS_CMNT_PROC" "AFMS_CMNT_PROC" WHERE TO_CHAR("CMNT_PROC_NUM")=:B3))
3 - access("ITEM_1"="K"."ATT_NUM")
7 - access("A"."CSE_RPLY_NUM"="B"."CSE_RPLY_NUM")
9 - access("CSE_CD"=:B1)
10 - filter(TO_CHAR("CSE_RPLY_NUM")=:B1)
11 - filter(TO_CHAR("CMNT_PROC_NUM")=:B1) -
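One common restructuring for an INSERT ... SELECT like the one above, whose WHERE clause ORs several independent (IN-subquery AND type-code) predicate pairs, is to split it into one UNION ALL branch per pair; each branch can then be optimized on its own instead of forcing a FILTER over the whole table. A hedged sketch with made-up miniature tables (not the AFMS schema), in Python/sqlite3 so it can actually run:

```python
import sqlite3

# Miniature hypothetical schema standing in for the AFMS tables above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE src  (ref_cd TEXT, ref_typ TEXT, payload TEXT);
CREATE TABLE cse  (cse_cd TEXT);
CREATE TABLE cmnt (num INTEGER);
CREATE TABLE dst  (ref_cd TEXT, ref_typ TEXT, payload TEXT);
INSERT INTO src VALUES ('C1', 'ATTREF03', 'a'), ('5', 'ATTREF02', 'b'), ('X', 'ATTREF03', 'c');
INSERT INTO cse VALUES ('C1');
INSERT INTO cmnt VALUES (5);
""")

# One branch per (IN-subquery AND type-code) pair. The branches are mutually
# exclusive on ref_typ, so UNION ALL cannot double-count any row, and each
# branch is a simple, independently optimizable semi-join.
cur.execute("""
INSERT INTO dst
SELECT ref_cd, ref_typ, payload FROM src
 WHERE ref_typ = 'ATTREF03' AND ref_cd IN (SELECT cse_cd FROM cse)
UNION ALL
SELECT ref_cd, ref_typ, payload FROM src
 WHERE ref_typ = 'ATTREF02' AND ref_cd IN (SELECT CAST(num AS TEXT) FROM cmnt)
""")
conn.commit()
print(cur.execute("SELECT COUNT(*) FROM dst").fetchone()[0])  # 2
```

The rewrite is only equivalent when the ORed branches cannot match the same row twice, which the distinct ATT_REF_TYP_CD values guarantee here.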
Same sql statement two different results?
Hi,
I was wondering if anyone knows why I am getting different results from the same query.
Basically, when I run this query in a "view" in SQL Server, I get the results I need; however, when I run it in ColdFusion, I get a totally different result...
the query:
SELECT DISTINCT
tbl_employees.indexid, tbl_employees.[Employee ID] as
employeeid, tbl_employees.[First Name] as firstname,
tbl_employees.[Last Name] as lastname,
tbl_employees.[Supervisor ID] as supervisorid,
tbl_workaddress_userdata.firstname,
tbl_workaddress_userdata.lastname,
tbl_workaddress_userdata.supervisorid,
tbl_workaddress_userdata.location,
tbl_workaddress_userdata.employeeid,
tbl_workaddress_userdata.locationdescription
FROM tbl_employees FULL OUTER JOIN
tbl_workaddress_userdata ON tbl_employees.[Employee ID] =
tbl_workaddress_userdata.employeeid
WHERE (tbl_employees.[Supervisor ID] = 7) AND
(tbl_workaddress_userdata.location IS NULL)

I suspect you and your CF DSN are looking at two different DBs...
Adam -
PL/SQL block taking different duration
Dear Oracle Gurus
I executed a PL/SQL block which took 10 min to complete. I executed the same thing again; this time it took 8 min, and after that the time taken was between 7 and 8 min. It was never 10 min again.
I was told that this happens due to the REDO log: parsing doesn't happen when we execute the process again, and hence the time taken is less than the first time.
Kindly clarify this for me.
Since I am testing the script, I would like to get the true time taken whenever I execute it. How can we do this, e.g. by clearing the redo log for this particular script?
Kindly guide me in this regard.
with warm regards
ssr

"I was told that this happens due to REDO log that parsing doesn't happen when we execute the process again and hence the time taken is less than the first time."
Who mentioned to you that the REDO logs are the reason parsing didn't happen and your block ran faster? Please, for heaven's sake, ask him/her to read the Oracle Concepts Guide. Believe me, he/she really needs it.
Your block's data got cached, so you saw a decrease in the elapsed time. This holds true for the block itself, but if it was an anonymous block you won't actually enjoy the benefit of statement reuse. The Oracle cache is kicking in here.
"Since I am testing the script I would like to have the correct time taken whenever I execute it. How can we do this, like clearing the redo log for this particular script?"
I am not able to follow this. What do you mean by correct timing? What exactly is the benchmark that you are trying to meet? Redo log clearance has nothing to do with this.
Post the script you are trying to run, along with your Oracle version, and explain how you are measuring the time taken.
Aman.... -
Possible to do "grant" sql statement in Native SQL?
We have a need to grant access from one of our systems out to various applications. In order for this to work we need to run a grant-access command on the table, and we are trying to put a wrapper around this so we can use an ABAP. Below is the code I am unit testing. Two questions. First, can a grant be done via native SQL in ABAP? Second, if it can be done, what is the error in the logic where I am trying to pass in the table name via a parameter?
REPORT ZLJTEST2.
tables dd02l.
DATA scarr_carrid TYPE dd02l-tabname.
SELECT-OPTIONS s_carrid for dd02l-tabname no intervals.
DATA s_carrid_wa LIKE LINE OF s_carrid.
DATA name TYPE c LENGTH 20.
TRY.
EXEC SQL.
CREATE FUNCTION selfunc( input CHAR(20) )
RETURNING char(20);
DEFINE output char(20);
set schema sapr3;
grant select on table input to group infouser;
RETURN output;
END FUNCTION;
ENDEXEC.
LOOP AT s_carrid INTO s_carrid_wa
WHERE sign = 'I' AND option = 'EQ'.
TRY.
EXEC SQL.
EXECUTE PROCEDURE selfunc( IN :s_carrid_wa-low,
OUT :name )
ENDEXEC.
WRITE: / s_carrid_wa-low, name.
CATCH cx_sy_native_sql_error.
MESSAGE `Error in procedure execution` TYPE 'I'.
ENDTRY.
ENDLOOP.
EXEC SQL.
DROP FUNCTION selfunc;
ENDEXEC.
CATCH cx_sy_native_sql_error.
MESSAGE `Error in procedure handling` TYPE 'I'.
ENDTRY.

Hi,
Yes, it is possible.
I made a program like the one you want, but it needs quite a lot of code.
Here I explain the idea:
1. Create Screen with input TEXT EDIT CONTROL.
This is for input SQL Statement.
2. Get SQL Statement from Text Edit Control using method <b>get_text_as_r3table</b>.
3. Now we need to separate SQL Statement into different table.
We Separate SELECT, FROM, WHERE, GROUP, HAVING, ORDER, etc.
4. We need dynamic internal table to store the data.
5. Select the data according SQL statement.
SELECT (IT_SELECT)
into corresponding fields of table <dyn_table>
FROM (IT_FROM)
WHERE (IT_WHERE)
GROUP BY (IT_GROUP)
HAVING (IT_HAVING)
ORDER BY (IT_ORDER).
6. Display our data using ALV GRID
Hopefully it will help you.
Regards, -
SQL statements in Orchestration take too long
Hello all,
I have an orchestration that inside has a Expression Widget.
In this Expression widget I call several methods from an assembly that I wrote in VstudioC#.
This assembly get some values from the files beeing processed and does some checking (empty values, accepted values, etc..) and does some select/insert queries into a separate database that I use for tracking/analysis.
For some time now these SQL statements have been taking just too long: each file spends around 5 minutes inside the Expression, and some of the SQL statements take as long as 1 minute.
I've cleaned up the tables in my separate database, as they had more than 2 million records.
The queries are simple, without joins, etc.
Any ideas on how to make this work faster?

Why are you doing SQL operations in an Expression Shape?
The standard BizTalk pattern would be to use the WCF SQL Adapter. -
Different result from same SQL statement
The following SQL statement brings back records using query
analyzer on the SQL server. However when I run it in a cold fusion
page it comes back with no results. Any idea why????????
SELECT COUNT(h.userID) AS hits, u.OCD
FROM dbo.tbl_hits h INNER JOIN
dbo.tlkp_users u ON h.userID = u.PIN
WHERE (h.appName LIKE 'OPwiz%') AND (h.lu_date BETWEEN
'05/01/07' AND '06/01/07')
GROUP BY u.OCD
ORDER BY u.OCD

Anthony Spears wrote:
> That didn't work either.
>
> But here is something interesting. If we use the dates 05/01/2007 and 06/01/2007 we get results in SQL Server Query Analyzer but not using a ColdFusion page. But if we use the dates 05/01/2007 and 09/01/2007 both get back the same results.
>
Are you absolutely, 100% sure that you are connecting to the same database instance with both CF and Query Analyzer? That kind of symptom is 9 times out of 10 because the user is looking at different data: one is looking at production and the other at development, or a backup, or a recent copy, or something different. -
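Beyond checking the DSN, eliminating ambiguous date literals removes another variable: strings like '05/01/07' can be parsed differently depending on locale and driver settings. In ColdFusion that is what cfqueryparam is for; the same idea in any client API looks like this hedged Python/sqlite3 sketch (the table shape here is invented for illustration):

```python
import sqlite3
from datetime import date

# Hypothetical miniature version of the tbl_hits table from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_hits (user_id INTEGER, lu_date TEXT)")
conn.executemany(
    "INSERT INTO tbl_hits VALUES (?, ?)",
    [(1, "2007-05-15"), (2, "2007-06-15"), (3, "2007-05-20")],
)

# Bind the boundary dates as typed parameters instead of embedding ambiguous
# literals like '05/01/07' in the SQL text; the view and the application then
# agree on exactly what the boundaries mean.
n = conn.execute(
    "SELECT COUNT(*) FROM tbl_hits WHERE lu_date BETWEEN ? AND ?",
    (date(2007, 5, 1).isoformat(), date(2007, 6, 1).isoformat()),
).fetchone()[0]
print(n)  # 2
```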
Execute Different SQL Statements Based On Day of Week
I need to execute different SQL statements based on the day of the week in Oracle 8i. In SQL Server, it's pretty simple:
SELECT
CASE DATEPART(dw,GETDATE())
WHEN 4 THEN (SELECT COUNT(*) FROM ADR_VLDN )
END
In Oracle it should be something like this
IF to_char(SYSDATE, 'D') = 2 THEN
SELECT * FROM RSVP_FILE_PRCS WHERE ROWNUM = 1;
ELSEIF to_char(SYSDATE, 'D') = 3 THEN
SELECT * FROM RSVP_FILE_PRCS WHERE ROWNUM = 2;
END IF;
But this doesn't work. Does anyone have any ideas about how to do this in Oracle?

805771 wrote:
Yes, but I don't want to create 7 different jobs, one for each day of the week. Isn't there a way to do this in PL/SQL? It took me 10 seconds in SQL Server's TSQL.

You keep showing some TSQL syntax that obviously does not do what you are asking for.
>
SELECT
CASE DATEPART(dw,GETDATE())
WHEN 4 THEN (SELECT COUNT(*) FROM ADR_VLDN )
END

So the equivalent in Oracle would be:
SQL> var n number
SQL> begin
2 if to_char(sysdate,'D') = '4' then
3 select count(*) into :n from dual;
4 end if;
5 end;
6 /
PL/SQL procedure successfully completed.
SQL> print n

         N
----------
         1

Also takes 10 seconds. -
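The same dispatch can also live on the client side, keeping one job and picking the statement by weekday. A sketch in Python/sqlite3 (the query strings are placeholders, not real workload):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# SQLite's strftime('%w') (0 = Sunday .. 6 = Saturday) plays the role of
# Oracle's TO_CHAR(SYSDATE, 'D') here.
day = int(conn.execute("SELECT strftime('%w', 'now')").fetchone()[0])

# Map weekday -> statement; anything unmapped falls through to a default.
queries = {
    1: "SELECT 'monday-count'",
    3: "SELECT 'wednesday-count'",
}
sql = queries.get(day, "SELECT 'default-count'")
print(conn.execute(sql).fetchone()[0])
```

One caveat on the Oracle side: the 'D' format element is NLS_TERRITORY-dependent (day 1 is Sunday in the US territory but Monday in many others), so comparing against TO_CHAR(SYSDATE, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') is less surprising across environments.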
Same sql statement gives output in different lines in 12.1.3 vs 11i
Hi all,
DB:11.2.0.3.0
EBS:11i and 12.1.3
O/S: Solaris SPARC 64 bits 5.10
The below query gives the output in one line in 11i as expected but it gives the output in two separate lines in 12.1.3. Are there any server level settings for linesize and pagesize to be performed?
set term off;
set serveroutput on size 1000000;
set feedback off;
set pagesize 0;
set head off;
set linesize 72;
set pause off;
set colsep '';
select
lpad(code_combination_id,15,0)||
rpad(to_char(start_date_active,'YYYYMMDD'),8,' ')||
rpad(to_char(end_date_active,'YYYYMMDD'),8,' '),
substr(SEGMENT1,1,3)|| --entity
rpad(substr(SEGMENT2,1,6),6,' ')|| --account
rpad(substr(SEGMENT3,1,5),5,' ')|| --costcenter
rpad(substr(SEGMENT4,1,6),6,' ')|| --activity
substr(SEGMENT6,1,3)|| --product
substr(SEGMENT7,1,3)|| --service
substr(SEGMENT5,1,3)|| --country
substr(SEGMENT8,1,3)|| --intercompany
rpad(substr(SEGMENT9,1,8),8,' ')|| --regional
substr(enabled_flag,1,1) -- active flag
from gl_code_combinations
where last_update_date >=
(select nvl(max(actual_start_date),'01-JAN-1951')
from fnd_concurrent_requests
where concurrent_program_id = (select concurrent_program_id
from fnd_concurrent_programs
where
concurrent_program_name = 'XYZACCT')
and status_code = 'C'
and actual_completion_date is not null)
order by 1;
OUTPUT in 11i
============
00000000000100020120930 7014912000000000000000000000000000000000Y
00000000000100120120930 5014912000000000000000000000000000000000Y
OUTPUT in 12.1.3
==============
00000000000116020120930
4881124010000000000000000000000000000000Y
000000000001161
6103229990000000000000000000000000000000Y
Both 11i and 12.1.3 should produce the output on one line for the above SQL statement.
Could anyone please share the fix for the above issue?
Thanks for your time
Regards,

Hi,
Can you confirm in which session you are running this query?
Try this:
Column Code_Date_Range format a25
Column Segments format a50
set lines 300
set pages 200
set term off;
set serveroutput on size 1000000;
set feedback off;
set pagesize 0;
set head off;
set linesize 72;
set pause off;
set colsep '';
select
lpad(code_combination_id,15,0)||
rpad(to_char(start_date_active,'YYYYMMDD'),8,' ')||
rpad(to_char(end_date_active,'YYYYMMDD'),8,' ') Code_Date_Range,
substr(SEGMENT1,1,3)|| --entity
rpad(substr(SEGMENT2,1,6),6,' ')|| --account
rpad(substr(SEGMENT3,1,5),5,' ')|| --costcenter
rpad(substr(SEGMENT4,1,6),6,' ')|| --activity
substr(SEGMENT6,1,3)|| --product
substr(SEGMENT7,1,3)|| --service
substr(SEGMENT5,1,3)|| --country
substr(SEGMENT8,1,3)|| --intercompany
rpad(substr(SEGMENT9,1,8),8,' ')|| --regional
substr(enabled_flag,1,1) Segments -- active flag
from gl_code_combinations
where last_update_date >=
(select nvl(max(actual_start_date),'01-JAN-1951')
from fnd_concurrent_requests
where concurrent_program_id = (select concurrent_program_id
from fnd_concurrent_programs
where
concurrent_program_name = 'XYZACCT')
and status_code = 'C'
and actual_completion_date is not null)
For more details, please see:
Formatting SQL*Plus Reports
Thanks &
Best Regards -
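A further thing worth checking, offered as an assumption rather than a confirmed diagnosis: the SELECT above has two select-list items (there is a comma after the end_date_active expression), so SQL*Plus prints two columns, and once their combined width exceeds linesize 72 the second column wraps to the next line, which matches the two-line 12.1.3 output shown. Concatenating the two expressions into a single one, or simply widening the line, keeps each record on one line:

```sql
-- SQL*Plus session settings; widen the line rather than relying on 72
set linesize 200
set trimspool on
set wrap off
```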
Getting difference from values of two different SQL Statements
Hello,
I have two SQL Queries like the following:
1. Statement:
SELECT name, SUM(a1), SUM(b1),SUM(c1),SUM(d1)
FROM table 1, table 2
WHERE ...
GROUP BY name
2. Statement:
SELECT name, SUM(a2), SUM(b2),SUM(c2),SUM(d2)
FROM table 3, table 4
WHERE ...
GROUP BY name
I now need a combination of these SQL statements in one statement, where the result should be the following records:
name, a1-a2 as a, b1-b2 as b, c1-c2 as c, d1-d2 as d
Name is a VARCHAR, and in both queries the values of the field name are the same; all other fields are integers.
I hope someone can help me.
Regards

You can use this:
with t1 as (
SELECT name, SUM(a1) as a1, SUM(b1) as b1,SUM(c1) as c1,SUM(d1) as d1
FROM table 1, table 2
WHERE ...
GROUP BY name
), t2 as (
SELECT name, SUM(a2) as a2, SUM(b2) as b2,SUM(c2) as c2,SUM(d2) as d2
FROM table 3, table 4
WHERE ...
GROUP BY name
), tt as (
select name
from t1
union
select name
from t2
)
select *
from tt
natural left outer join t1
natural left outer join t2
where a1 <> a2
or b1 <> b2
or c1 <> c2
or d1 <> d2

Bye Alessandro -
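Alessandro's reply above finds rows whose values differ; to get the subtracted values the original poster asked for (a1-a2 as a, and so on), the same CTE shape can feed a join that computes the differences. A Python/sqlite3 sketch with made-up base tables, relying on the poster's statement that the names match in both queries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE s1 (name TEXT, a1 INT, b1 INT);
CREATE TABLE s2 (name TEXT, a2 INT, b2 INT);
INSERT INTO s1 VALUES ('x', 10, 20), ('y', 5, 5);
INSERT INTO s2 VALUES ('x', 3, 8),  ('y', 1, 2);
""")

# Aggregate each side in its own CTE, then join on name and subtract.
rows = conn.execute("""
WITH t1 AS (SELECT name, SUM(a1) AS a1, SUM(b1) AS b1 FROM s1 GROUP BY name),
     t2 AS (SELECT name, SUM(a2) AS a2, SUM(b2) AS b2 FROM s2 GROUP BY name)
SELECT t1.name, t1.a1 - t2.a2 AS a, t1.b1 - t2.b2 AS b
FROM t1 JOIN t2 ON t1.name = t2.name
ORDER BY t1.name
""").fetchall()
print(rows)  # [('x', 7, 12), ('y', 4, 3)]
```

If a name could be missing on one side, the join would need to become a full outer join with the missing sums defaulted to zero.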
Procedure creation for 2 different sql statement
Hi all,
When I run a procedure with the below line:
cursor c1 is select object_name from user_objects where object_type='VIEW' and status='VALID';
procedure gets created successfully
But for the below line , oracle throws error
cursor c1 is select object_name from dba_objects where object_type='VIEW' and status='VALID';
Error Message:
Warning: Procedure created with compilation errors.
3/14 PL/SQL: SQL Statement ignored
3/38 PL/SQL: ORA-00942: table or view does not exist
Thanks for all your help.

The below procedure gets created successfully:
CREATE OR REPLACE PROCEDURE P_TEST (schema_name in varchar2,new_usr in varchar2)
is
str varchar2(1000);
str_syn varchar2(1000);
v_obj_name varchar2(200);
begin
for z in ( select object_name from user_objects where object_type in ('VIEW') and status='VALID')
loop
v_obj_name:=schema_name||'.'|| z.object_name;
str :='grant select on '||v_obj_name||' to '|| new_usr ;
EXECUTE IMMEDIATE str;
end loop;
end;
whereas the one below throws an error, the difference being "user_objects" replaced with "dba_objects":
CREATE OR REPLACE PROCEDURE P_TEST (schema_name in varchar2,new_usr in varchar2)
is
str varchar2(1000);
str_syn varchar2(1000);
v_obj_name varchar2(200);
begin
for z in ( select object_name from dba_objects where object_type in ('VIEW') and status='VALID')
loop
v_obj_name:=schema_name||'.'|| z.object_name;
str :='grant select on '||v_obj_name||' to '|| new_usr ;
EXECUTE IMMEDIATE str;
end loop;
end; -
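No answer is recorded above, but the usual cause is worth noting as a probable explanation: inside a definer's-rights stored procedure, privileges acquired through roles (such as the DBA role's access to the DBA_* views) are disabled, so the reference to dba_objects raises ORA-00942 at compile time even though the same query works interactively in SQL*Plus. A direct grant to the owning user fixes it (user name here is a placeholder):

```sql
-- as SYS (or another user able to grant it): a direct grant, not via a role
grant select on dba_objects to your_user;
-- or, more broadly:
grant select any dictionary to your_user;
```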
Sql statement hanging in prod. fast in dev.
Hi,
The SQL statement is hanging in production.
In development it executes in 2 secs.
From the explain plans, I noticed that it is using different indexes.
I have posted the statement and explain plans (prod and dev) below.
Statement:
SELECT
REP_V_SERVICE_REQUEST.SERVICE_REQ_ID,
REP_V_ACTIVITY.EXTERNAL_REF,
REP_V_ACTIVITY.VENUS_PROBLEM_START,
REP_V_ACTIVITY.VENUS_ALERT_ISSUED,
REP_V_ACTIVITY.VENUS_NOTIFIED,
REP_V_ACTIVITY.ACTIVITY_ID,
REP_V_ACTIVITY.CREATED_BY_WORKGROUP,
REP_V_ACTIVITY.ABSTRACT,
REP_V_ACTIVITY.TIME_TO_VENUS_ALERT,
REP_V_ACTIVITY.TIME_TO_VENUS_ISSUE,
REP_V_ACTIVITY.TIME_TO_VENUS_NOTIFIED,
REP_V_SERVICE_REQUEST.TYPE,
REP_V_SERVICE_REQUEST.SUB_TYPE,
REP_V_SERVICE_REQUEST.CLASSIFICATION_TYPE,
REP_V_SERVICE_REQUEST.CLASSIFICATION_SUB_TYPE,
( REP_V_SERVICE_REQUEST.TIME_OPENED ),
REP_V_SERVICE_REQUEST.CREATED_BY_WORKGROUP,
REP_V_ACTIVITY.VENUS_SENT_TO_SDN,
SR_RESOLVER_WG.WORKGROUP
FROM
REP_V_SERVICE_REQUEST,
REP_V_ACTIVITY,
REP_V_WORKGROUP SR_RESOLVER_WG
WHERE
( SR_RESOLVER_WG.WORKGROUP_ID=REP_V_SERVICE_REQUEST.RESOLUTION_GROUP_ID )
AND ( REP_V_ACTIVITY.SERVICE_REQUEST_ROW_ID=REP_V_SERVICE_REQUEST.SERVICE_REQUEST_ROW_ID )
AND (
REP_V_ACTIVITY.PLANNED_START_TIME BETWEEN '01-Jan-2006' AND '19-Jun-2006'
AND REP_V_ACTIVITY.VENUS_PROBLEM_STATUS NOT IN ('Information', 'Planned')
AND REP_V_ACTIVITY.VENUS_ALERT_STATUS != 'Withdrawn'
AND REP_V_ACTIVITY.CREATED_BY_WORKGROUP LIKE '%SSHD%'
AND REP_V_ACTIVITY.STATUS != 'Cancelled'
AND REP_V_ACTIVITY.CREATED_BY_WORKGROUP NOT LIKE 'GLO_SSHD_MTC'
)

Exp. plan (prod):
24 SELECT STATEMENT
23 NESTED LOOPS (OUTER)
21 NESTED LOOPS (OUTER)
19 NESTED LOOPS
16 NESTED LOOPS
13 NESTED LOOPS
10 NESTED LOOPS (OUTER)
8 NESTED LOOPS
5 NESTED LOOPS
2 TABLE ACCESS (BY INDEX ROWID), S_ORG_EXT (SIEBEL)
1 INDEX (FULL SCAN), S_ORG_EXT_F13 (SIEBEL)
4 TABLE ACCESS (BY INDEX ROWID), S_BU (SIEBEL)
3 INDEX (UNIQUE SCAN), S_BU_P1 (SIEBEL)
7 TABLE ACCESS (BY INDEX ROWID), S_SRV_REQ (SIEBEL)
6 INDEX (RANGE SCAN), S_SRV_REQ_U2 (SIEBEL)
9 INDEX (UNIQUE SCAN), S_SRV_REGN_P1 (SIEBEL)
12 TABLE ACCESS (BY INDEX ROWID), SERVICE_REQUEST (CRMREP_REP)
11 INDEX (UNIQUE SCAN), PK_SR (CRMREP_REP)
15 TABLE ACCESS (BY INDEX ROWID), S_EVT_ACT (SIEBEL)
14 INDEX (RANGE SCAN), S_EVT_ACT_F14 (SIEBEL)
18 TABLE ACCESS (BY INDEX ROWID), ACTIVITY (CRMREP_REP)
17 INDEX (UNIQUE SCAN), PK_A (CRMREP_REP)
20 INDEX (RANGE SCAN), S_EVT_MAIL_U1 (SIEBEL)
22 INDEX (RANGE SCAN), S_EVT_ACT_X_U1 (SIEBEL
Exp plan(Dev):
24 SELECT STATEMENT
23 NESTED LOOPS (OUTER)
21 NESTED LOOPS (OUTER)
19 NESTED LOOPS
16 NESTED LOOPS
13 NESTED LOOPS (OUTER)
11 NESTED LOOPS
8 NESTED LOOPS
5 NESTED LOOPS
2 TABLE ACCESS (BY INDEX ROWID), S_EVT_ACT (SIEBEL)
1 INDEX (RANGE SCAN), S_EVT_ACT_M8 (SIEBEL)
4 TABLE ACCESS (BY INDEX ROWID), S_SRV_REQ (SIEBEL)
3 INDEX (UNIQUE SCAN), S_SRV_REQ_P1 (SIEBEL)
7 TABLE ACCESS (BY INDEX ROWID), S_ORG_EXT (SIEBEL)
6 INDEX (UNIQUE SCAN), S_ORG_EXT_U3 (SIEBEL)
10 TABLE ACCESS (BY INDEX ROWID), S_BU (SIEBEL)
9 INDEX (UNIQUE SCAN), S_BU_P1 (SIEBEL)
12 INDEX (UNIQUE SCAN), S_SRV_REGN_P1 (SIEBEL)
15 TABLE ACCESS (BY INDEX ROWID), SERVICE_REQUEST (REPORT)
14 INDEX (UNIQUE SCAN), PK_SR (REPORT)
18 TABLE ACCESS (BY INDEX ROWID), ACTIVITY (REPORT)
17 INDEX (UNIQUE SCAN), PK_A (REPORT)
20 INDEX (RANGE SCAN), S_EVT_MAIL_U1 (SIEBEL)
22 INDEX (RANGE SCAN), S_EVT_ACT_X_U1 (SIEBEL)
I checked v$session_wait while it is hanging.
It is waiting on table s_evt_act (prod: 1.6 crore rows, dev: 1 crore rows).
In development it is accessing the S_EVT_ACT_M8 index (a single column); in production it is accessing S_EVT_ACT_F14 (a combination of two columns, different from development).
Thanks,
kumar.

This query has not been executing for the last 5 to 6 months; I am new to this issue.
Please find the output of v$session_event for this session.
It is waiting on db file sequential read, and the wait keeps increasing.
SID EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT MAX_WAIT TIME_WAITED_MICRO
43 db file sequential read 141459 0 130565 1 66 1305647401
43 db file scattered read 77437 0 54259 1 466 542587342
43 direct path write 1222 0 867 1 26 8671937
43 buffer busy waits 570 0 318 1 4 3175286
43 SQL*Net message to client 339 0 0 0 0 866
43 SQL*Net message from client 338 0 84716 251 32623 847156015
43 latch free 14 12 6 0 2 59905
43 direct path read 6 0 1 0 0 12290
43 log file sync 1 0 0 0 0 2268 -
[OCI] Parameter Binding, repeated Statements with differing values
Hello
I am working on an application written in C which sends about 50 different SQL Statements to an Oracle Database using OCI. These Statements are executed repeatedly with different values.
In order to improve performance I would like to know what possibilities I have.
What's the benefit of the following "techniques"?
- Parameter Binding
- Statement Caching
What else could I look into?
with friendly greetings.It doesn't take zero-time of course, and it does level-off after a while, but array-bind/define or pre-fetching can make a significant impact on performance:
truncated table: 0.907 / 0.918
insert_point_stru_1stmt_1commit: x100,000: 0.141 / 0.144Above I truncate the table to get repeatable numbers; deleting all rows from a previous run leaves a large free list (I guess...), and performance of this little contrived benchmark degrades non-negligeably otherwise. This is a single array-bind insert statement.
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@0: 7.594 / 7.608
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@1: 4.000 / 4.004
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@10: 0.906 / 0.910
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@100: 0.297 / 0.288
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@1,000: 0.204 / 0.204
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@10,000: 0.265 / 0.268
fetched 100,000 rows. (0 errors)

Above I do a regular "scalar" define, but turn pre-fetching on (the default is one row, but I tested with pre-fetching completely off too). @N means pre-fetch N rows.
select_points_array: x100,000@10: 0.969 / 0.967
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@100: 0.250 / 0.251
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@1,000: 0.156 / 0.167
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@10,000: 0.156 / 0.157
fetched 100,000 rows. (0 errors)

Above I use array defines instead of pre-fetch.
select_points_struct: x100,000@10: 0.938 / 0.935
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@100: 0.219 / 0.217
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@1,000: 0.140 / 0.140
fetched 100,000 rows. (0 errors)
Above I use array-of-struct defines instead of pre-fetch or array-bind. Performance is just a little better, probably because of better memory "locality" with structures.
The table is simple:
create table point_tab(
id number,
x binary_float,
y binary_float,
z binary_float
);
So each row is 22 + 4 + 4 + 4 = 34 bytes. There are no constraints or indexes, of course, to make it as fast as possible (this is 11g on XP win32, server and client on the same machine, single user, IPC protocol, so it doesn't get much better than this, and it is not realistic of true client-server multi-user conditions).
There aren't enough data points to confirm or refute your prediction that the advantage of array binds levels off at the packet-size threshold, but what you write makes sense to me.
So I went back and tried more sizes. 8K divided by 34 bytes is about 240 rows, so I selected 250 rows as the "middle" and bracketed it with two upper and two lower values, each roughly doubling or halving the number of rows:
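The row-size arithmetic above is easy to double-check (the 22 bytes is the maximum storage for an Oracle NUMBER; a small id actually takes less, so 34 bytes is an upper bound):

```python
# Per-row size from the post: NUMBER (up to ~22 bytes) + three 4-byte BINARY_FLOATs.
row_bytes = 22 + 4 + 4 + 4
rows_per_8k = 8 * 1024 // row_bytes  # rows that fit in one 8K buffer
print(row_bytes)    # 34
print(rows_per_8k)  # 240
```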
truncated table: 0.953 / 0.960
insert_point_stru_1stmt_1commit: x100,000: 0.219 / 0.220
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@67: 0.329 / 0.320
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@125: 0.297 / 0.296
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@250: 0.250 / 0.237
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@500: 0.218 / 0.210
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@1,000: 0.187 / 0.195
fetched 99,964 rows. (0 errors)
select_points_array: x99,964@67: 0.297 / 0.294
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@125: 0.235 / 0.236
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@250: 0.203 / 0.206
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@500: 0.188 / 0.179
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@1,000: 0.156 / 0.165
fetched 99,964 rows. (0 errors)
select_points_struct: x99,964@67: 0.250 / 0.254
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@125: 0.203 / 0.207
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@250: 0.172 / 0.168
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@500: 0.157 / 0.152
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@1,000: 0.125 / 0.129
As you can see, it still gets faster at 1,000 rows, which is about 32K. I don't know the packet size of course, but 32K sounds big for a packet.
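The "1,000 rows is about 32K" estimate follows directly from the 34-byte row size computed earlier; a quick check of the bracketed batch sizes:

```python
row_bytes = 34  # upper-bound row size from the post
for batch in (250, 500, 1000):
    print(batch, batch * row_bytes)  # 250 -> 8500, 500 -> 17000, 1000 -> 34000
```

So a 1,000-row batch carries roughly 34,000 bytes of row data, i.e. about 33 KB on the wire before any protocol overhead.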
truncated table: 2.937 / 2.945
insert_point_stru_1stmt_1commit: x100,000: 0.328 / 0.324
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@1,000: 0.250 / 0.250
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@2,000: 0.266 / 0.262
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@3,000: 0.250 / 0.254
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@4,000: 0.266 / 0.273
fetched 100,000 rows. (0 errors)
select_first_n_points: x100,000@5,000: 0.281 / 0.278
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@1,000: 0.172 / 0.165
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@2,000: 0.157 / 0.159
fetched 99,000 rows. (0 errors)
select_points_array: x99,000@3,000: 0.156 / 0.157
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@4,000: 0.141 / 0.155
fetched 100,000 rows. (0 errors)
select_points_array: x100,000@5,000: 0.157 / 0.164
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@1,000: 0.125 / 0.129
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@2,000: 0.125 / 0.123
fetched 99,000 rows. (0 errors)
select_points_struct: x99,000@3,000: 0.125 / 0.120
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@4,000: 0.125 / 0.121
fetched 100,000 rows. (0 errors)
select_points_struct: x100,000@5,000: 0.125 / 0.122
Above 32K there doesn't seem to be much benefit (at least in this config; my colleague on linux64 is consistently faster in benchmarks, even connecting to the same servers, when we have the exact same machine). So 32K may indeed be a threshold of sorts.
In all, I hope I've shown there is value in array binds (or defines), or even in simple pre-fetching (though the latter helps only for selects). I don't think one can take advantage of it very often, and I have no clue how it compares to direct-path calls, but there is value nonetheless, IMHO. --DD
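The insert side of the argument can likewise be sketched generically. Again sqlite3 stands in for Oracle, and `executemany` plays the role of the single array-bind insert statement used in the benchmark; the point is one statement execution covering many rows instead of one call per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table point_tab(id integer, x real, y real, z real)")
data = [(i, 1.0, 2.0, 3.0) for i in range(100_000)]

# Row-at-a-time style: one execute call (and, over a network, one
# round trip) per row -- shown here for just the first 10 rows.
cur = conn.cursor()
for row in data[:10]:
    cur.execute("insert into point_tab values (?, ?, ?, ?)", row)

# Array-bind style: one executemany call covers the whole remaining batch.
cur.executemany("insert into point_tab values (?, ?, ?, ?)", data[10:])
conn.commit()

n = conn.execute("select count(*) from point_tab").fetchone()[0]
print(n)  # 100000
```

On a real client-server link the batched form wins mainly by amortizing the per-call round-trip cost, which is the effect the timings above measure.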
Short dump in SAP R/3: SQL statement buffer overflow?
Hello,
I hope someone can help us with the following problem:
A short dump in SAP R/3 (DBIF_RSQL_INVALID_RSQL, CX_SY_OPEN_SQL_DB) occurred during a delta load which had worked fine for several months.
The custom code crashes at a FETCH NEXT CURSOR statement.
I assume it might be a SQL statement buffer overflow in SAP R/3?
The problem can be reproduced with RSA3 and is therefore not time-dependent.
The problem did not occur before, nor does it occur on the quality assurance system.
Cursor code:
* Read all entries since last transfer (delta mechanism)
OPEN CURSOR WITH HOLD s_cursor FOR
SELECT * FROM ekko
WHERE ebeln IN t_selopt_ekko.
t_selopt_ekko has up to 60,000 entries, which worked fine in the past.
It is very likely that the amount of data at the time of the first crash did not exceed this.
SQL-Trace of RSA3 call:
It seems that 25,150 data sets can be processed via FETCH before the short dump occurs.
After that, object SNAP is written:
"...DBIF_RSQL_INVALID_RSQL...dynpen00 + 0xab0 at dymain.c:1645 dw.sapP82_D82
Thdyn...Report für den Extraktoraufruf...I_T_FIELDS...Table IT_43[16x60]TH058FUNCTION=
RSA3_GET_DATA_SIMPLEDATA=S_S_IF_SIMPLE-T_FIELDSTH100...shmRefCount = 1...
...> 1st level extension part <...isUsed = 1...isCtfyAble = 1...> Shareable Table Header Data
<...tabi = Not allo......S_CURSORL...SAPLRSA3...LRSA3U06...SAPLRSA3...
During dump creation the following occurs:
"...SAPLSPIAGENTCW...CX_DYNAMIC_CHECK=...CRSFH...BALMSGHNDL...
DBIF_RSQL_INVALID_RSQL...DBIF_RSQL_INVALID_RSQL...DB_ERR_RSQL_00013...
INCL_ABAP_ERROR...DBIF_INCL_INTERNAL_ERROR...INCL_INTERNAL_ERROR...
GENERAL_EXC_WITHOUT_RAISING...INCL_SEND_TO_ABAP...INCL_SEARCH_HINTS...
INCL_SEND_TO_SAP...GENERAL_EXC_WITHOUT_RAISING...GENERAL_ENVIRONMENT...
GENERAL_TRANSACTION...GENERAL_INFO...GENERAL_INFO_INTERNAL...
DBIF_INCL_INTERNAL_CALL_CODE..."
Basis says that the Oracle database works fine. The problem seems to be an SAP R/3 buffer.
Has anyone had a similar problem, or does anyone know where such a buffer might be and how it can be enlarged?
Best regards
Thomas
P.S.
Found a thread that contains part of the dump message "dynpen00 + 0xab0 at dymain.c:1645":
Thread: dump giving by std prg contains -> seems not to be helpful
Found a similar thread:
Thread: Short dump in RSA3 --Z Data Source -> table space or somting else?
Edited by: Thomas Köpp on Apr 1, 2009 11:39 AM
Hi Thomas,
It's due to a different field length.
Check the code after FETCH NEXT CURSOR and see which internal table you mention there.
That internal table should be defined by reference to ekko, because your code is
OPEN CURSOR WITH HOLD s_cursor FOR
SELECT * FROM ekko
WHERE ebeln IN t_selopt_ekko.
Hope this helps you find the solution.
Regards,