How to tune this SQL statement
Hi my friend:
SELECT ie.ImportEntityAutoID,
       ie.ImportRawDataAutoID,
       ie.Amount,
       s.NAME AS StatusName,
       ir.SourceLocation AS SourceLocation,
       ie.ErrorMessage AS ErrorMessage,
       famis.PROJECT_CODE,
       famis.PROJECT_TITLE
FROM dbo.ImportEntity ie
     INNER JOIN tblBatch B ON ie.BatchGuid = B.BatchGuid
     INNER JOIN dbo.ImportRawData ir
             ON ie.ImportRawDataAutoID = ir.ImportRawDataAutoID
     INNER JOIN dbo.ImportRawData_Famis famis
             ON ie.ImportRawDataAutoID = famis.ImportRawDataAutoID
     LEFT OUTER JOIN dbo.Status s
             ON ( ie.StatusID = s.StatusID
                  AND s.StatusTypeID = 35 ) -- Import Entity Status
WHERE ie.EntityType <> 'Unknown'
  AND ie.[StatusID] IN ( 3, 5, 6 ) -- 3: Pending; 5: Failed; 6: Cancelled
  AND ( 0 = 0 OR ie.StatusID = 0 )
  AND ie.Amount >= -99999999999999.9999
  AND ie.Amount <= 99999999999999.9999;
The ImportEntity table has 5 million records; more than 3 million of them have EntityType = 'Unknown', and roughly 2 million have StatusID in (3, 5, 6). I have the following indexes:
table ImportEntity:
clustered index on ImportRawDataAutoID
non-clustered index IX_batchguid on BatchGuid
non-clustered index IX_ImportRawDataAutoID on ImportRawDataAutoID
non-clustered index IX_StatusID on StatusID
non-clustered index IX_EntityType on EntityType
tblBatch has only two records.
ImportRawData_Famis has 5 million records, and ImportRawDataAutoID is its clustered index column.
ImportRawData has 5 million records, and ImportRawDataAutoID is its clustered index column.
The query takes 180 seconds. How can I tune this statement?
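One option worth testing (a sketch only; the index name and the INCLUDE column list are my assumptions, not from the post): since StatusID IN (3, 5, 6) is the most selective predicate here, a covering non-clustered index keyed on StatusID could replace the single-column IX_StatusID and avoid millions of key lookups back into the 5-million-row table:

```sql
-- Hypothetical covering index: keyed on the selective StatusID filter,
-- with EntityType available for the <> 'Unknown' residual predicate and
-- the remaining referenced columns carried as included columns.
-- (ImportRawDataAutoID is the clustered key, so it is included implicitly.)
CREATE NONCLUSTERED INDEX IX_ImportEntity_Status_Cover
    ON dbo.ImportEntity (StatusID, EntityType)
    INCLUDE (BatchGuid, Amount, ErrorMessage, ImportEntityAutoID);
```

Comparing the actual execution plans before and after (with SET STATISTICS IO ON) would show whether the key lookups disappear and the logical reads drop.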
hi Visakh16,
I did just as you said, but I don't understand the execution plan. Can you give me some advice about procedure tuning, how to read an execution plan, how to clear the plan cache, and so on?
Thank you very much!
ming
Similar Messages
-
Need help on how to code this SQL statement! (one key has leading zeros)
Good day, everyone!
First of all, I apologize if this isn't the best forum. I thought of putting it in the SAP Oracle database forum, but the messages there seemed to be geared outside of ABAP SELECTs and programming. Here's my question:
I would like to join the tables FMIFIIT and AUFK. The INNER JOIN will be done between FMIFIIT's MEASURE (Funded Program) field, which is char(24), and AUFK's AUFNR (Order Number) field, which is char(12).
The problem I'm having is this: All of the values in AUFNR are preceded by two zeros. For example, if I have a MEASURE value of '5200000017', the corresponding value in AUFNR is '005200000017'. Because I have my SQL statement coded to just match the two fields, I obviously get no records returned because, I assume, of those leading zeros.
Unfortunately, I don't have a lot of experience coding SQL, so I'm not sure how to resolve this.
Please help! As always, I will award points to ALL helpful responses!
Thanks!!
Dave
Dave Packard wrote:
> Good day, everyone!
> I would like to join the tables FMIFIIT and AUFK. The INNER JOIN will be done between FMIFIIT's MEASURE (Funded Program) field, which is char(24), and AUFK's AUFNR (Order Number) field, which is char(12).
>
> The problem I'm having is this: All of the values in AUFNR are preceded by two zeros. For example, if I have a MEASURE value of '5200000017', the corresponding value in AUFNR is '005200000017'. Because I have my SQL statement coded to just match the two fields, I obviously get no records returned because, I assume, of those leading zeros.
> Dave
You can't do a join like this in SAP's open SQL. You could do it in native SQL, i.e. EXEC ... ENDEXEC, by using SUBSTR to strip the leading zeros from AUFNR, but this would not be a good idea because a) modifying a column in the WHERE clause will stop any index on that column being used, and b) using native SQL rather than open SQL is really not something that should be encouraged, for database portability reasons etc.
Forget about a database join and do it in two stages: get your AUFK data into an itab, strip off the leading zeros, and then use FOR ALL ENTRIES to get the FMIFIIT data (or do it the other way round).
I do hope you've got an index on your FMIFIIT MEASURE field (we don't have one here); otherwise your SELECT could be slow if the table holds a lot of data. -
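For comparison, in plain database SQL (outside ABAP open SQL) the usual trick is to pad the shorter key rather than strip the longer one, so that any index on AUFK.AUFNR stays usable. A sketch only; the column usage is inferred from the post:

```sql
-- Pad MEASURE ('5200000017') to 12 characters ('005200000017') so it
-- compares equal to AUFNR. Only the FMIFIIT side is modified, so an
-- index on AUFK.AUFNR can still be used; RTRIM handles CHAR padding.
SELECT f.measure, a.aufnr
FROM   fmifiit f
       JOIN aufk a
         ON a.aufnr = LPAD(RTRIM(f.measure), 12, '0');
```

The same padding can be done in ABAP before the FOR ALL ENTRIES lookup, which is effectively what the two-stage itab approach above amounts to.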
This is the SQL to tune. The driving table all_dist_gl has about 2 billion records. This SQL runs for more than 40 minutes and returns about 5k rows.
Thanks for any advice; a worked example would be appreciated.
-- region_code_list is a package that determines which region an organization belongs to.
select
rowidtochar(adlg.rowid) row_id,
adlg.cust_trx_line_gl_dist_id,
adlg.posting_control_id,
adlg.set_of_books_id,
adlg.org_id,
adlg.amount,
adlg.acctd_amount,
adlg.last_update_date,
adlg.customer_trx_line_id,
adlg.customer_trx_id,
adlg.cust_trx_line_salesrep_id,
adlg.gl_date,
adlg.gl_posted_date
from all_dist_gl adlg
where
adlg.account_class = 'REV'
and adlg.account_set_flag = 'N'
and (adlg.posting_control_id <> -3 or (adlg.posting_control_id = -3 and adlg.gl_date <= trunc(sysdate)))
and last_update_date between to_date('2004-12-31 11:59:59','YYYY-MM-DD HH24:MI:SS') and to_date('2009-07-07 12:47:24','YYYY-MM-DD HH24:MI:SS')
and region_code_list.by_organization('UK',adlg.org_id) = 1
The database version is:
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 2786 | 236K| 141K (2)|
|* 1 | TABLE ACCESS BY INDEX ROWID| RA_CUST_TRX_LINE_GL_DIST_ALL | 2786 | 236K| 141K (2)|
|* 2 | INDEX RANGE SCAN | BCIRA_CUST_TRX_LINE_GL_DIST_N1 | 2785K| | 11830 (3)|
Predicate Information (identified by operation id):
1 - filter("CTLGD"."ACCOUNT_CLASS"='REV' AND "CTLGD"."ACCOUNT_SET_FLAG"='N' AND
"BCIEDW_REGION_CODE"."BY_ORGANIZATION"('FE',"CTLGD"."ORG_ID")=1 AND
("CTLGD"."POSTING_CONTROL_ID"<>(-3) OR "CTLGD"."POSTING_CONTROL_ID"=(-3) AND
"CTLGD"."GL_DATE"<=TRUNC(SYSDATE@!)))
2 - access("LAST_UPDATE_DATE">=TO_DATE(' 2009-04-25 12:20:13', 'syyyy-mm-dd
hh24:mi:ss') AND "LAST_UPDATE_DATE"<=TO_DATE(' 2009-06-26 12:47:24', 'syyyy-mm-dd
hh24:mi:ss'))
Note
- 'PLAN_TABLE' is old version
Statistics
59429 recursive calls
0 db block gets
169819 consistent gets
41652 physical reads
0 redo size
30476 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
38 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
555 rows processed
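The plan shows region_code_list.by_organization being applied as a filter to every one of the ~2.7M rows returned by the index range scan. One hedged rewrite (a sketch; org_list is a hypothetical table of candidate org_ids, since the real source isn't shown in the post) is to resolve the function to a set of org_ids once, so it becomes an ordinary join predicate instead of a per-row PL/SQL call:

```sql
-- Hypothetical rewrite: evaluate the region function once per org_id
-- instead of once per all_dist_gl row.
SELECT adlg.*
FROM   all_dist_gl adlg
WHERE  adlg.account_class    = 'REV'
AND    adlg.account_set_flag = 'N'
AND    (adlg.posting_control_id <> -3
        OR (adlg.posting_control_id = -3 AND adlg.gl_date <= TRUNC(SYSDATE)))
AND    adlg.last_update_date BETWEEN TO_DATE('2004-12-31 11:59:59','YYYY-MM-DD HH24:MI:SS')
                                 AND TO_DATE('2009-07-07 12:47:24','YYYY-MM-DD HH24:MI:SS')
AND    adlg.org_id IN (SELECT o.org_id
                       FROM   org_list o   -- hypothetical list of distinct org_ids
                       WHERE  region_code_list.by_organization('UK', o.org_id) = 1);
```

Whether this helps depends on how many distinct org_ids there are; with a short org_id list, the optimizer can also consider a more selective access path on all_dist_gl.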
SQL> -
How to rewrite this SQL statement
I have tableA and tableB as below. The following query gets MAX(process_date) during the month of January from the two tables tableA and tableB, with different criteria for each.
I want to expand the query to return MAX(process_date) for BOTH the current month (txn_date in all of Jan 2013) and the prior month (txn_date in all of Dec 2012).
Is it possible? If so, how can I modify this query to return MAX(process_date) for the current and prior months in a single query?
SELECT MAX(process_date) AS curr_month_amount FROM
( SELECT process_date AS process_date,
         amount1 AS amount
  FROM tableA
  WHERE id = 1
    AND process_date = (SELECT MAX(process_date)
                        FROM tableA
                        WHERE id = 1
                          AND txn_date BETWEEN TO_DATE('01-JAN-2013','dd-mon-yyyy')
                                           AND TO_DATE('31-JAN-2013','dd-mon-yyyy')
                          AND amount1 = 0)
  UNION
  SELECT MAX(process_date) AS process_date,
         0 AS amount
  FROM tableB
  WHERE id = 1
    AND txn_code = 'B'
    AND txn_date BETWEEN TO_DATE('01-JAN-2013','dd-mon-yyyy')
                     AND TO_DATE('31-JAN-2013','dd-mon-yyyy') )
Future state of the SQL:
a) a single SQL statement should look at txn_date between 1/1/2013 and 1/31/2013 to return max(process_date) for the current month;
b) it should look at txn_date between 12/1/2012 and 12/31/2012 to return max(process_date) for the prior month.
NOTE: I want to pass the current-month end date (1/31/2013) to this SQL so it will calculate for both the current and prior months.
Expected output:
For id=1 in the modified query,
prior month max(process_date) should be 12/19/2012
current month max(process_date) should be 1/15/2013
For id=5 in the modified query,
prior month max(process_date) should be NULL
current month max(process_date) should be 1/16/2013
SQL to create tableA and tableB, with insert statements:
CREATE TABLE tableA
( id           NUMBER,
  process_date DATE,
  amount1      NUMBER,
  txn_code     VARCHAR2(1),
  txn_date     DATE );
CREATE TABLE tableB
( id           NUMBER,
  process_date DATE,
  amount2      NUMBER,
  txn_code     VARCHAR2(1),
  txn_date     DATE );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('01/12/2013','mm/dd/yyyy'), 500, 'A', to_date('01/15/2013','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('01/13/2013','mm/dd/yyyy'), 100, 'A', to_date('01/14/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code , txn_date)
values
( 1, to_date('01/14/2013','mm/dd/yyyy'), 0, 'A', to_date('01/15/2013','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 1, to_date('01/15/2013','mm/dd/yyyy'), 0, 'B', to_date('01/31/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('12/01/2012','mm/dd/yyyy'), 500, 'A', to_date('12/31/2012','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code , txn_date)
values
( 1, to_date('12/23/2012','mm/dd/yyyy'), 100, 'A', to_date('12/14/2012','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('12/19/2012','mm/dd/yyyy'), 0, 'A', to_date('12/15/2012','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 1, to_date('12/15/2012','mm/dd/yyyy'), 0, 'C', to_date('12/31/2012','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('01/11/2013','mm/dd/yyyy'), 500, 'A', to_date('01/09/2013','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('01/12/2013','mm/dd/yyyy'), 0, 'A', to_date('01/19/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('01/15/2013','mm/dd/yyyy'), 10 , 'A', to_date('01/09/2013','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 5, to_date('01/16/2013','mm/dd/yyyy'), 1, 'B', to_date('01/09/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('12/11/2012','mm/dd/yyyy'), 500, 'A', to_date('12/09/2012','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('12/12/2012','mm/dd/yyyy'), 0, 'A', to_date('12/19/2012','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('12/15/2012','mm/dd/yyyy'), 10 , 'A', to_date('12/09/2012','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 5, to_date('12/16/2012','mm/dd/yyyy'), 1, 'C', to_date('12/09/2012','mm/dd/yyyy'));
commit;
Maybe
SELECT id,
       month,
       MAX(process_date) AS curr_month_amount
FROM (SELECT id,
             process_date AS process_date,
             amount1 AS amount,
             TO_CHAR(txn_date,'mm') AS month
      FROM tableA
      WHERE (process_date,TO_CHAR(txn_date,'mm')) IN
            (SELECT MAX(process_date),TO_CHAR(txn_date,'mm')
             FROM tableA
             WHERE txn_date BETWEEN ADD_MONTHS(TO_DATE('01-JAN-2013','dd-mon-yyyy'),-1)
                                AND TO_DATE('31-JAN-2013','dd-mon-yyyy')
               AND amount1 = 0
             GROUP BY TO_CHAR(txn_date,'mm'))
        AND amount1 = 0
      UNION
      SELECT id,
             MAX(process_date) AS process_date,
             0 AS amount,
             TO_CHAR(txn_date,'mm') AS month
      FROM tableB
      WHERE txn_code = 'B'
        AND txn_date BETWEEN ADD_MONTHS(TO_DATE('01-JAN-2013','dd-mon-yyyy'),-1)
                         AND TO_DATE('31-JAN-2013','dd-mon-yyyy')
      GROUP BY id,TO_CHAR(txn_date,'mm'))
GROUP BY id,month
ID  MONTH  CURR_MONTH_AMOUNT
 5  01     01/16/2013
 1  12     12/19/2012
 1  01     01/15/2013
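On the NOTE about driving everything from a single current-month-end date: both month windows can be derived from one bind variable (a sketch; :month_end is a hypothetical bind holding 31-JAN-2013):

```sql
-- Derive both windows from one date: the current month is the month
-- containing :month_end, the prior month is the month before it.
SELECT ADD_MONTHS(TRUNC(:month_end, 'MM'), -1) AS prior_month_start,  -- 01-DEC-2012
       TRUNC(:month_end, 'MM') - 1             AS prior_month_end,    -- 31-DEC-2012
       TRUNC(:month_end, 'MM')                 AS curr_month_start,   -- 01-JAN-2013
       :month_end                              AS curr_month_end      -- 31-JAN-2013
FROM   dual;
```

These expressions can replace the hard-coded TO_DATE literals in the query above, and TRUNC(txn_date, 'MM') would also keep months from different years apart, which TO_CHAR(txn_date, 'mm') alone does not.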
Regards
Etbin -
How to optimize this SQL. Help needed.
Hi All,
Can you please help with this SQL:
SELECT /*+ INDEX(zl1 zipcode_lat1) */
zl2.zipcode as zipcode,l.location_id as location_id,
sqrt(POWER((69.1 * ((zl2.latitude*57.295779513082320876798154814105) - (zl1.latitude*57.295779513082320876798154814105))),2) + POWER((69.1 * ((zl2.longitude*57.295779513082320876798154814105) - (zl1.longitude*57.295779513082320876798154814105)) * cos((zl1.latitude*57.295779513082320876798154814105)/57.3)),2)) as distance
FROM location_atao l, zipcode_atao zl1, client c, zipcode_atao zl2
WHERE zl1.zipcode = l.zipcode
AND l.client_id = c.client_id
AND c.client_id = 306363
And l.appType = 'HOURLY'
and c.milessearchzipcode >= sqrt(POWER((69.1 * ((zl2.latitude*57.295779513082320876798154814105) - (zl1.latitude*57.295779513082320876798154814105))),2) + POWER((69.1 * ((zl2.longitude*57.295779513082320876798154814105) - (zl1.longitude*57.295779513082320876798154814105)) * cos((zl1.latitude*57.295779513082320876798154814105)/57.3)),2))
I tried to optimize it by adding a country column to the zipcode_atao table, so that we can limit the search in zipcode_atao by country.
Any other suggestions?
Thanks
Welcome to the forum.
Please follow the instructions given in this thread:
How to post a SQL statement tuning request
HOW TO: Post a SQL statement tuning request - template posting
and add the necessary details we need to your thread.
Depending on your database version (the result of: select * from v$version; ):
Have you tried running the query without the index hint?
Are your table (and index) statistics up to date? -
How do I use SQL statements to perform calculations with form fields?
Please help! I don't know how to use a SQL statement within my APEX form.
My form is below. The user will enter the values in the form and click Submit. Then we need to run a SQL SELECT statement with those values.
Our form looks like this:
Start_Date ____________
Per_Period ____________
Period ____________
[Submit Button]
The user will enter these 3 values in the form.
This is an example of an user providing the values:
Start_Date 03/14/08_______
Per_Period $200.00________
Period 4____________
[Submit Button]
Then they will click the Submit Button.
The SQL statement (BELOW) returns output based on the users selections:
START_DATE PER_PERIOD PERIOD
14-MAR-2008 00:00 200 Week 1 of 4
21-MAR-2008 00:00 200 Week 2 of 4
28-MAR-2008 00:00 200 Week 3 of 4
04-APR-2008 00:00 200 Week 4 of 4
Total 800
This is the full text of the SQL that makes the output above:
with criteria as (select to_date('03/14/08', 'mm/dd/rr') as start_date,
                         4 as periods,
                         'Week' as period,
                         200 per_period from dual),
periods as (select 'Week' period, 7 days, 0 months from dual
            union all select 'BiWeek', 14, 0 from dual
            union all select 'Month', 0, 1 from dual
            union all select 'ByMonth', 0, 2 from dual
            union all select 'Quarter', 0, 3 from dual
            union all select 'Year', 0, 12 from dual),
t1 as (
  select add_months(start_date, months*(level-1)) + days*(level-1) start_date,
         per_period,
         c.period||' '||level||' of '||c.periods period
  from criteria c join periods p on c.period = p.period
  connect by level <= periods)
select case grouping(start_date)
         when 1 then 'Total'
         else to_char(start_date)
       end start_date,
       sum(per_period) per_period,
       period
from t1
group by rollup ((start_date, period))
THANKS VERY MUCH!
You're just doing a parameterized report, where the input fields are your parameters.
Check out the Advanced Tutorial titled Parameterized Report here:
http://download.oracle.com/docs/cd/E10513_01/doc/appdev.310/e10497/rprt_query.htm#BGBEEBJA
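Concretely, the hard-coded values in the criteria subquery become bind references to the page items (a sketch; the :P1_* item names are assumptions about how your page is set up):

```sql
-- Hypothetical page items :P1_START_DATE, :P1_PERIODS, :P1_PER_PERIOD
-- replace the literals; the rest of the original query stays unchanged.
with criteria as (select to_date(:P1_START_DATE, 'mm/dd/rr') as start_date,
                         to_number(:P1_PERIODS)              as periods,
                         'Week'                              as period,
                         to_number(:P1_PER_PERIOD)           as per_period
                  from dual)
select * from criteria;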
Good luck,
Stew -
How to set Query SQL Statement parameter dynamically in Sender JDBCAdpter
Hi All,
I have one scenario in which we are using JDBC Sender Adapter.
Now in this case,we need to set Query SQL Statement with a SELECT statement based on some fields.
This SQL statement is not constant, it would need to be changed.
This means that sometimes the receiver will want to execute the SQL statement with these fields, and sometimes with different fields.
We can create separate channels for each SQL statement but again that is not an optimum solution.
So ,I am looking out for a way to set these parameters dynamically or set SQL statement at Runtime.
Can you all please help me to get this?
Shweta,
<i>Sometimes receiver will want to execute SQL statement dynamically</i>....
How will you get the query dynamically? OK, let me assume they are sending the query through a file; then it's definitely possible. But you need a BPM, and not a sender JDBC adapter but a receiver JDBC adapter.
SQL Query File --> BPM --> Synchronous send [Fetch data from DB] --> Response --> ...
Do you think the above design will suit your case?
Best regards,
raj. -
Hi
Please can you help me to create this SQL statement ?
I have two tables: table1 contains the data (AAA, BBB) and table2 contains (AAA).
There is a relation between table1 and table2, for example Invoice_id.
My question is: how can I create a SQL statement to display only the data in table1 (BBB) that is not available in table2?
Thanks and best regards.
Khaled
Try this:
SQL> ed
Wrote file afiedt.buf
1 with t as (select 'AAA' col1,1 inv_id from dual
2 UNION select 'BBB',1 from dual)
3 , s as (select 'AAA' col1,1 inv_id from dual)
4 SELECT col1,inv_id FROM t
5* WHERE NOT EXISTS (SELECT 1 FROM s WHERE inv_id = t.inv_id AND col1 = t.col1)
SQL> /
COL INV_ID
BBB 1
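An equivalent way to express the same anti-join on this sample data is MINUS, which compares entire rows:

```sql
-- Rows in t that do not appear in s; same sample data as above.
with t as (select 'AAA' col1, 1 inv_id from dual
           union select 'BBB', 1 from dual),
     s as (select 'AAA' col1, 1 inv_id from dual)
select col1, inv_id from t
minus
select col1, inv_id from s;
-- Returns the single row: BBB 1
```

This matches the NOT EXISTS version here; note that MINUS also removes duplicates, which NOT EXISTS does not, and NOT EXISTS handles NULL join keys more predictably.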
SQL> -
How to execute this SQL Query in ABAP Program.
Hi,
I have a string which is the SQL Query.
How can I execute this SQL query (SQL_STR) in an ABAP program?
Code:-
DATA: SQL_STR type string.
SQL_STR = 'select * from spfli.'.
Thanks in Advance,
Vinay
Hi Vinay,
Here is a sample to dynamically generate a subroutine-pool having your SQL and calling it.
REPORT dynamic_sql_example .
DATA: BEGIN OF gt_itab OCCURS 1 ,
line(80) TYPE c ,
END OF gt_itab .
DATA gt_restab TYPE .... .
DATA gv_name(30) TYPE c .
DATA gv_err(120) TYPE c .
START-OF-SELECTION .
gt_itab-line = 'REPORT generated_sql .' .
APPEND gt_itab .
gt_itab-line = 'FORM exec_sql CHANGING et_table . ' .
APPEND gt_itab .
gt_itab-line = SQL_STR .
APPEND gt_itab .
gt_itab-line = 'ENDFORM.' .
APPEND gt_itab .
GENERATE SUBROUTINE POOL gt_itab NAME gv_name MESSAGE gv_err .
PERFORM exec_sql IN PROGRAM (gv_name) CHANGING gt_restab
IF FOUND .
WRITE:/ gv_err .
LOOP AT gt_restab .
WRITE:/ .... .
ENDLOOP .
*--Serdar -
What's wrong with this SQL statement?
Hello all, I am trying to run the query below out of PerfSheet (Tanel Poder's performance visualization Excel tool), but I get the error below. The DB is on 9.2.
What is wrong with this SQL statement?
http://blog.tanelpoder.com/2008/12/28/performance-visualization-made-easy-perfsheet-20-beta/
select * from (
with fsq as (
select /*+ materialize */
i.dbid
, i.instance_name
, i.instance_number
-- , trunc(s.snap_time, 'DD') DAY
-- , to_number(to_char(s.snap_time, 'HH24')) HOUR
-- -- , to_char(s.snap_time, 'MI') MINUTE
-- , 0 MINUTE
, trunc(
lag(s.snap_time, 1)
over(
partition by
v.dbid
, i.instance_name
, v.instance_number
, v.event
order by
s.snap_time
, 'HH24'
) SNAP_TIME
, v.event_type EVENT_TYPE
, v.event EVENT_NAME
, nvl(
decode(
greatest(
time_waited_micro,
nvl(
lag(time_waited_micro,1,0)
over(
partition by
v.dbid
, i.instance_name
, v.instance_number
, v.event
order by v.snap_id
, time_waited_micro
time_waited_micro,
time_waited_micro - lag(time_waited_micro,1,0)
over (
partition by
v.dbid
, i.instance_name
, v.instance_number
, v.event
order by v.snap_id
time_waited_micro
, time_waited_micro
) / 1000000 SECONDS_SPENT
, total_waits WAIT_COUNT
from
(select distinct dbid, instance_name, instance_number from stats$database_instance) i
, stats$snapshot s
, ( select
snap_id, dbid, instance_number, 'WAIT' event_type, event, time_waited_micro, total_waits
from
stats$system_event
where
event not in (select event from stats$idle_event)
union all
select
snap_id, dbid, instance_number,
case
when name in ('CPU used by this session', 'parse time cpu', 'recursive cpu usage') then 'CPU'
when name like 'OS % time' then 'OS'
else 'STAT'
end,
name , value, 1
from
stats$sysstat
-- where name in ('CPU used by this session', 'parse time cpu', 'recursive cpu usage')
-- or name like('OS % time')
-- or 1 = 2 -- this will be a bind variable controlling whether all stats need to be returned
) v
where
i.dbid = s.dbid
and i.dbid = v.dbid
and s.dbid = v.dbid
and s.snap_id = v.snap_id
and s.snap_time between '%FROM_DATE%' and '%TO_DATE%'
and i.instance_name = '%INSTANCE%'
select * from (
select
instance_name
, instance_number
, snap_time
, trunc(snap_time, 'DD') DAY
, to_char(snap_time, 'HH24') HOUR
, to_char(snap_time, 'MI') MINUTE
, event_type
, event_name
, seconds_spent
, wait_count
, ratio_to_report(seconds_spent) over (
-- partition by (to_char(day, 'YYYYMMDD')||to_char(hour,'09')||to_char(minute, '09'))
partition by (snap_time)
) ratio
from fsq
where
snap_time is not null -- lag(s.snap_time, 1) function above will leave time NULL for first snapshot
-- to_char(day, 'YYYYMMDD')||to_char(hour,'09')||to_char(minute, '09')
-- > ( select min(to_char(day, 'YYYYMMDD')||to_char(hour,'09')||to_char(minute, '09')) from fsq)
where ratio > 0
order by
instance_name
, instance_number
, day
, hour
, minute
, event_type
, seconds_spent desc
, wait_count desc
Error at line 6
ORA-00604: error occurred at recursive SQL level 1
ORA-00972: identifier is too long
Hi Alex,
Subquery factoring a.k.a. the with-clause should be possible on 9.2:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_103a.htm#2075888
(used it myself as well on 9.2)
@OP
I recall having problems myself using PL/SQL Developer and trying to get the with clause to work on 9.2 some years ago.
A workaround might be to create a view based on the query.
Also, your error message is "ORA-00972: identifier is too long"...
http://download.oracle.com/docs/cd/B19306_01/server.102/b14219/e900.htm#sthref419
Can't test things currently, no 9.2 available at the moment, but perhaps tomorrow I'll have a chance. -
How to put the SQL-statement returned value into the field (as a default)
Hi,
I am using Developer/2000 (Forms Designer) under Windows 98.
Please tell me how to put the SQL statement's returned value (as a default) into the field before enter-query mode. Note that I have tried the following ways but still have problems:
1) Placing the SQL statement in a PRE-QUERY trigger at the form/block level. A message box asks 'Do you want to save the changes?'.
2) Placing the SQL statement before executing ENTER_QUERY. There is still a message box asking 'Do you want to save the changes?'.
Any hints? Thanks. Urgent.
Solved it!
1) Suppress DEFAULT save message
if form_failure then
raise form_trigger_failure;
end if;
2) Place the default value before enter-query.
Ref: Title='Default value in query field in ENTER_QUERY mode' in designer forum by CVZ
form level trigger
============
WHEN-NEW-ITEM-INSTANCE
=======================
if :system.mode = 'ENTER-QUERY' then
:block.item := 'default waarde';
end if;
3) Suppress the changes whenever leaving the default field.
if :block.item is null then
-- assign statement
end if; -
How to write the SQL statement of my finder function in CMP?
hi,
I created a CMP EJB from the table INFOCOLUMN, and I created my own finder function, whose SQL statement is:
select * from INFOCOLUMN WHERE employee_id = id
employee_id is a column of the table, and id is the finder function's parameter.
The error is: invalid column name.
So, how do I write the SQL statement?
Thanks.
Mole
Bind variables are of the form $1, $2, etc., so your query stmt should look like:
select * from INFOCOLUMN WHERE employee_id=$1
-Jon -
How to optimize this select statement its a simple select....
how to optimize this select statement as the records in earlier table is abt i million
and this simplet select statement is not executing and taking lot of time
SELECT guid
stcts
INTO table gt_corcts
FROM corcts
FOR all entries in gt_mege
WHERE /sapsll/corcts~stcts = gt_mege-ctsex
and /sapsll/corcts~guid_pobj = gt_Sagmeld-guid_pobj.
regards
AroraHi Arora,
Using Package size is very simple and you can avoid the time out and as well as the problem because of memory. Some time if you have too many records in the internal table, then you will get a short dump called TSV_TNEW_PAGE_ALLOC_FAILED.
Below is the sample code.
DATA p_size TYPE i VALUE 50000.
SELECT field1 field2 field3
  INTO TABLE itab1 PACKAGE SIZE p_size
  FROM dtab
  WHERE <condition>.
* Other logic or processing on the internal table itab1
  FREE itab1.
ENDSELECT.
Here the only problem is you have to put the ENDSELECT.
How it works
In the first select it will select 50000 records ( or the p_size you gave). That will be in the internal table itab1.
In the second select it will clear the 50000 records already there and append next 50000 records from the database table.
So take care to do all the logic or processing within the SELECT ... ENDSELECT block.
Some ABAP standards may not allow you to use select-endselect. But this is the best way to handle huge data without short dumps and memory related problems.
I am using this approach. My data is much larger than yours: on average, at least 5 million records per select.
Good luck and hope this help you.
Regards,
Kasthuri Rangan Srinivasan -
HOW TO: Post a SQL statement tuning request - template posting
This post is not a question; rather, like Rob van Wijk's "When your query takes too long ..." post, it should help to improve the quality of the requests for SQL statement tuning here on OTN.
Tuning requests about single SQL statements are posted on the OTN forum very often, but the information provided is usually rather limited, and therefore it's not that simple to provide meaningful advice. Instead of writing the same requests for additional information over and over again, I thought I'd put together a post that describes what a "useful" post for such a request should look like and what information it should cover.
I've also prepared very detailed step-by-step instructions how to obtain that information on my blog, which can be used to easily gather the required information. It also covers again the details how to post the information properly here, in particular how to use the \ tag to preserve formatting and get a fixed font output:
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
So again: This post here describes how a "useful" post should look like and what information it ideally covers. The blog post explains in detail how to obtain that information.
In the future, rather than requesting the same additional information and explaining how to obtain it, I'll simply refer to this HOW TO post and the corresponding blog post which describes in detail how to get that information.
*Very important:*
Use the \ tag to enclose any output that should have its formatting preserved as shown below.
So if you want to use fixed font formatting that preserves the spaces etc., do the following:
\ This preserves formatting
\And it will look like this:
This preserves formatting
. . .Your post should cover the following information:
1. The SQL and a short description of its purpose
2. The version of your database with 4-digits (e.g. 10.2.0.4)
3. Optimizer related parameters
4. The TIMING and AUTOTRACE output
5. The EXPLAIN PLAN output
6. The TKPROF output snippet that corresponds to your statement
7. If you're on 10g or later, the DBMS_XPLAN.DISPLAY_CURSOR output
The above mentioned blog post describes in detail how to obtain that information.
Your post should have a meaningful subject, e.g. "SQL statement tuning request", and the message body should look similar to the following:
*-- Start of template body --*
The following SQL statement has been identified to perform poorly. It currently takes up to 10 seconds to execute, but it's supposed to take a second at most.
This is the statement:
select
*
from
t_demo
where
type = 'VIEW'
order by
id;
It should return data from a table in a specific order.
The version of the database is 11.1.0.7.
These are the parameters relevant to the optimizer:
SQL>
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.1.0.7
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL>
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL>
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL>
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL>
SQL> select
2 sname
3 , pname
4 , pval1
5 , pval2
6 from
7 sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 01-30-2009 16:25
SYSSTATS_INFO DSTOP 01-30-2009 16:25
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 494,397
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Here is the output of EXPLAIN PLAN:
SQL> explain plan for
2 -- put your statement here
3 select
4 *
5 from
6 t_demo
7 where
8 type = 'VIEW'
9 order by
10 id;
Explained.
Elapsed: 00:00:00.01
SQL>
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1390505571
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 60 | 0 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_DEMO | 1 | 60 | 0 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_DEMO | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("TYPE"='VIEW')
14 rows selected.
Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
SQL> rem Set the ARRAYSIZE according to your application
SQL> set autotrace traceonly arraysize 100
SQL> select
2 *
3 from
4 t_demo
5 where
6 type = 'VIEW'
7 order by
8 id;
149938 rows selected.
Elapsed: 00:00:02.21
Execution Plan
Plan hash value: 1390505571
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 60 | 0 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_DEMO | 1 | 60 | 0 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_DEMO | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("TYPE"='VIEW')
Statistics
0 recursive calls
0 db block gets
149101 consistent gets
800 physical reads
196 redo size
1077830 bytes sent via SQL*Net to client
16905 bytes received via SQL*Net from client
1501 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
149938 rows processed
SQL>
SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
The TKPROF output for this statement looks like the following:
TKPROF: Release 11.1.0.7.0 - Production on Mo Feb 23 10:23:08 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: orcl11_ora_3376_mytrace1.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
select
*
from
t_demo
where
type = 'VIEW'
order by
id
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1501 0.53 1.36 800 149101 0 149938
total 1503 0.53 1.36 800 149101 0 149938
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 88
Rows Row Source Operation
149938 TABLE ACCESS BY INDEX ROWID T_DEMO (cr=149101 pr=800 pw=0 time=60042 us cost=0 size=60 card=1)
149938 INDEX RANGE SCAN IDX_DEMO (cr=1881 pr=1 pw=0 time=0 us cost=0 size=0 card=1)(object id 74895)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1501 0.00 0.00
db file sequential read 800 0.05 0.80
SQL*Net message from client 1501 0.00 0.69
********************************************************************************

The DBMS_XPLAN.DISPLAY_CURSOR output:
SQL> -- put your statement here
SQL> -- use the GATHER_PLAN_STATISTICS hint
SQL> -- if you're not using STATISTICS_LEVEL = ALL
SQL> select /*+ gather_plan_statistics */
2 *
3 from
4 t_demo
5 where
6 type = 'VIEW'
7 order by
8 id;
149938 rows selected.
Elapsed: 00:00:02.21
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID d4k5acu783vu8, child number 0
select /*+ gather_plan_statistics */ * from t_demo
where type = 'VIEW' order by id
Plan hash value: 1390505571
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
| 0 | SELECT STATEMENT | | 1 | | 149K|00:00:00.02 | 149K| 1183 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_DEMO | 1 | 1 | 149K|00:00:00.02 | 149K| 1183 |
|* 2 | INDEX RANGE SCAN | IDX_DEMO | 1 | 1 | 149K|00:00:00.02 | 1880 | 383 |
Predicate Information (identified by operation id):
2 - access("TYPE"='VIEW')
20 rows selected.

I'm looking forward to suggestions on how to improve the performance of this statement.
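A rough sanity check on the ALLSTATS figures above makes the problem concrete: nearly every one of the 149,938 rows costs its own buffer get in the TABLE ACCESS BY INDEX ROWID step, which suggests the rows matching TYPE = 'VIEW' are scattered across the table blocks (a poor clustering factor of IDX_DEMO). A minimal sketch; the constants are simply copied from the listings above, and the clustering interpretation is my own reading, not stated in the thread:

```java
public class BufferGetsCheck {
    // figures copied from the ALLSTATS LAST / autotrace output above
    static final long ROWS_RETURNED = 149_938; // A-Rows of the SELECT
    static final long TOTAL_GETS    = 149_101; // total consistent gets
    static final long INDEX_GETS    = 1_880;   // Buffers of the INDEX RANGE SCAN step

    // buffer gets spent per fetched row in the TABLE ACCESS BY INDEX ROWID step
    static double tableGetsPerRow() {
        return (double) (TOTAL_GETS - INDEX_GETS) / ROWS_RETURNED;
    }

    public static void main(String[] args) {
        // roughly 0.98: almost one table block visit per fetched row
        System.out.printf("table gets per row: %.2f%n", tableGetsPerRow());
    }
}
```

With close to one get per row, the index range scan itself is cheap (1,880 buffers) and nearly all the work is chasing rowids into the table; if the matching rows were physically clustered, the same result set could be read with far fewer gets.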
*-- End of template body --*
I'm sure that if you follow these instructions, obtain the information described, and post it using proper formatting (don't forget about the \ tag), you'll receive meaningful advice very soon.
So, just to make sure you didn't miss this point: Use proper formatting!
If you think I missed something important in this sample post let me know so that I can improve it.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/

Alex Nuijten wrote:
...you missed the proper formatting of the Autotrace section ;-)

Alex,
can't reproduce, does it still look unformatted? Or are you simply kidding? :-)
Randolf
PS: Just noticed that it actually sometimes doesn't show the proper formatting although the code tags are there. Changing to the \ tag helped in this case, but it seems to be odd.
Edited by: Randolf Geist on Feb 23, 2009 11:28 AM
Odd behaviour of forum software -
How to use Native SQL statement in JDBC receiver interface
Dear All,
Can anyone please help us with using a Native SQL statement in a JDBC receiver channel? The reason I need to use a Native SQL statement instead of the standard XML structure is that I need to execute a dynamic SQL query against a third-party database system, like:
SELECT Field1, Field2 FROM TABLE WHERE Field3 LIKE '%Name'
I expect the response in the form of an XML file, which I can pick up using a synchronous interface as described on help.sap.com:
http://help.sap.com/saphelp_nw04/helpdata/en/64/ce4e886334ec4ea7c2712e11cc567c/frameset.htm
The value for %Name can change dynamically according to the transaction and hence cannot be included as a KEY element in the standard XML structure.
Hence I need to know:
1. What message mapping should I use if I have to use a Native SQL statement?
2. What operation mapping should I use if I have to use a Native SQL statement?
If I guess correctly, I may have to use Java mapping to do the above activities. Hence I also want to know:
3. How to go about doing the Java mapping.
Thanks
Ameet
Ameet Deshpande wrote:
> Can any one please help us in using Native SQL statement in a JDBC receiver channel [...]
You can use a stored procedure and call it from the JDBC receiver adapter.
I also solved this issue with a DBLookup in message mapping. You can refer to my blog and this useful thread:
http://simonlesflex.wordpress.com/2010/12/07/pi-oracle-dblookup/
/people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
/people/siva.maranani/blog/2005/08/23/lookup146s-in-xi-made-simpler
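Whichever route is chosen (stored procedure or lookup), the dynamic LIKE value should go in as a bind variable rather than being concatenated into the statement text. A minimal JDBC sketch, assuming a plain connection to the third-party database; the table name MYTABLE and the escapeLike helper are illustrative additions, not from the thread:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class NativeSqlLookup {
    // Escape LIKE wildcards in the user-supplied value so a literal '%' or '_'
    // in the name cannot change the meaning of the pattern (escape char: '\').
    static String escapeLike(String s) {
        return s.replace("\\", "\\\\").replace("%", "\\%").replace("_", "\\_");
    }

    static void query(Connection conn, String name) throws SQLException {
        String sql = "SELECT Field1, Field2 FROM MYTABLE "
                   + "WHERE Field3 LIKE ? ESCAPE '\\'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "%" + escapeLike(name)); // leading-wildcard match, as in '%Name'
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " | " + rs.getString(2));
                }
            }
        }
    }
}
```

Binding the pattern keeps the SQL text stable (so the database caches one statement) and sidesteps SQL injection; the ESCAPE clause makes a literal % or _ in the search value safe.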