Tune Query with Millions of Records
Hi everyone,
I've got an Oracle 11g tuning task in front of me and I'm pretty much a novice when it comes to tuning.
The query itself is only about 10-15 lines of SQL; however, it hits four tables, one with 100 million records and one with 8 million. The other two are comparatively small (6,000 and 300 records). The problem I'm having is that the query actually needs to aggregate 3 million records.
I found an article about setting the star_transformation_enabled = true parameter, so on the fact table I created bitmap indexes on all the foreign key columns, and the dimensions each have a standard primary key defined on the surrogate key. This strategy works, but it still takes a long time for the query to crunch the 3 million records (about 30 minutes).
I know there's also the option of creating materialized views and using query rewrite to take advantage of the MVs, but my problem with that is that we're using OBIEE and we can't control how many different variations of these queries we see. So we would have to build a ton of MVs.
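As an illustration of the MV approach mentioned above, a single materialized view at a coarser grain can often serve many OBIEE query variations through rewrite. This is only a hedged sketch: the MV name, refresh policy, and choice of grouping columns are assumptions, not the poster's actual design; DB_CR_IND is kept in the GROUP BY so the conditional CREDIT measure can also rewrite.

```sql
-- Hypothetical summary MV: pre-aggregates the 100M-row fact to the
-- (account, company, day, debit/credit) grain so query rewrite reads
-- thousands of pre-aggregated rows instead of millions of fact rows.
CREATE MATERIALIZED VIEW mv_gl_balance_sum
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT f.gl_account_wid,
       f.company_org_wid,
       f.balance_dt_wid,
       f.db_cr_ind,
       SUM(f.activity_global1_amt) AS activity_amt,
       SUM(f.balance_global1_amt)  AS balance_amt
FROM   w_gl_balance_f f
GROUP  BY f.gl_account_wid, f.company_org_wid,
          f.balance_dt_wid, f.db_cr_ind;
```

Because the MV groups by the surrogate keys, Oracle can roll it up further and join it to the dimensions for many different report shapes, which addresses the "too many variations for one MV each" concern.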
What are the best ways to tackle high volume queries like this from a system wide perspective?
Are there any benchmarks for what I should be seeing in terms of a 3 million record query? Is expecting under a minute even reasonable?
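One system-wide option worth weighing for this class of query is partitioning the fact table, so that a one-quarter query prunes most of the 100 million rows before any index or join work happens. The sketch below is an assumption about the data (it presumes BALANCE_DT_WID is a date-like YYYYMMDD surrogate key), not the actual DDL:

```sql
-- Hypothetical: range-partition the fact on the date surrogate key.
-- A query constrained to one quarter can then be pruned to a single
-- partition (via static pruning or join/subquery pruning).
CREATE TABLE w_gl_balance_f_part
PARTITION BY RANGE (balance_dt_wid)
( PARTITION p2009 VALUES LESS THAN (20100101)
, PARTITION p2010 VALUES LESS THAN (20110101)
, PARTITION pmax  VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM w_gl_balance_f;
```

Local bitmap indexes on the partitioned fact would keep the star transformation working per partition.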
Any help would be appreciated!
Thanks!
-Joe
Here is the trace information:
SQL> set autotrace traceonly arraysize 1000
SQL> SELECT SUM(T91573.ACTIVITY_GLOBAL1_AMT) AS c2,
       SUM(CASE
             WHEN T91573.DB_CR_IND = 'CREDIT'
             THEN T91573.ACTIVITY_GLOBAL1_AMT
           END) AS c3,
       T91397.GL_ACCOUNT_NAME AS c4,
       T91397.GROUP_ACCOUNT_NUM AS c5,
       SUM(T91573.BALANCE_GLOBAL1_AMT) AS c6,
       T156337.ROW_WID AS c7
FROM   W_MCAL_DAY_D T156337    /* Dim_W_MCAL_DAY_D_Fiscal_Day */,
       W_INT_ORG_D T111515     /* Dim_W_INT_ORG_D_Company */,
       W_GL_ACCOUNT_D T91397   /* Dim_W_GL_ACCOUNT_D */,
       W_GL_BALANCE_F T91573   /* Fact_W_GL_BALANCE_F */
WHERE  ( T91397.ROW_WID = T91573.GL_ACCOUNT_WID
     AND T91573.COMPANY_ORG_WID = T111515.ROW_WID
     AND T91573.BALANCE_DT_WID = T156337.ROW_WID
     AND T111515.COMPANY_FLG = 'Y'
     AND T111515.ORG_NUM = '02000'
     AND T156337.MCAL_PER_NAME_QTR = '2010 Q 1' )
GROUP BY T91397.GL_ACCOUNT_NAME,
         T91397.GROUP_ACCOUNT_NUM,
         T156337.ROW_WID;
522 rows selected.
Execution Plan
Plan hash value: 2761996426
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 7882 | 700K| 7330 (1)| 00:01:28 |
| 1 | HASH GROUP BY | | 7882 | 700K| 7330 (1)| 00:01:28 |
|* 2 | HASH JOIN | | 7882 | 700K| 7329 (1)| 00:01:28 |
| 3 | VIEW | VW_GBC_13 | 7837 | 390K| 6534 (1)| 00:01:19 |
| 4 | TEMP TABLE TRANSFORMATION | | | | | |
| 5 | LOAD AS SELECT | SYS_TEMP_0FD9D7416_F97A325 | | | | |
|* 6 | VIEW | index$_join$_114 | 572 | 10296 | 191 (9)| 00:00:03 |
|* 7 | HASH JOIN | | | | | |
| 8 | BITMAP CONVERSION TO ROWIDS | | 572 | 10296 | 1 (0)| 00:00:01 |
|* 9 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F46 | | | | |
| 10 | INDEX FAST FULL SCAN | W_MCAL_DAY_D_P1 | 572 | 10296 | 217 (1)| 00:00:03 |
| 11 | HASH GROUP BY | | 7837 | 290K| 6343 (1)| 00:01:17 |
|* 12 | HASH JOIN | | 26186 | 971K| 6337 (1)| 00:01:17 |
| 13 | TABLE ACCESS FULL | SYS_TEMP_0FD9D7416_F97A325 | 572 | 5148 | 2 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | W_GL_BALANCE_F | 26186 | 741K| 6334 (1)| 00:01:17 |
| 15 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 16 | BITMAP AND | | | | | |
| 17 | BITMAP MERGE | | | | | |
| 18 | BITMAP KEY ITERATION | | | | | |
|* 19 | TABLE ACCESS BY INDEX ROWID| W_INT_ORG_D | 2 | 32 | 3 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | W_INT_ORG_ORG_NUM | 2 | | 1 (0)| 00:00:01 |
|* 21 | BITMAP INDEX RANGE SCAN | W_GL_BALANCE_F_F4 | | | | |
| 22 | BITMAP MERGE | | | | | |
| 23 | BITMAP KEY ITERATION | | | | | |
| 24 | TABLE ACCESS FULL | SYS_TEMP_0FD9D7416_F97A325 | 572 | 5148 | 2 (0)| 00:00:01 |
|* 25 | BITMAP INDEX RANGE SCAN | W_GL_BALANCE_F_F1 | | | | |
| 26 | VIEW | index$_join$_003 | 199K| 7775K| 794 (5)| 00:00:10 |
|* 27 | HASH JOIN | | | | | |
|* 28 | HASH JOIN | | | | | |
| 29 | BITMAP CONVERSION TO ROWIDS | | 199K| 7775K| 26 (0)| 00:00:01 |
| 30 | BITMAP INDEX FULL SCAN | W_GL_ACCOUNT_D_M1 | | | | |
| 31 | BITMAP CONVERSION TO ROWIDS | | 199K| 7775K| 118 (0)| 00:00:02 |
| 32 | BITMAP INDEX FULL SCAN | W_GL_ACCOUNT_D_M10 | | | | |
| 33 | INDEX FAST FULL SCAN | W_GL_ACCOUNT_D_M18 | 199K| 7775K| 733 (1)| 00:00:09 |
Predicate Information (identified by operation id):
2 - access("T91397"."ROW_WID"="ITEM_1")
6 - filter("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
7 - access(ROWID=ROWID)
9 - access("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
12 - access("T91573"."BALANCE_DT_WID"="C0")
19 - filter("T111515"."COMPANY_FLG"='Y')
20 - access("T111515"."ORG_NUM"='02000')
21 - access("T91573"."COMPANY_ORG_WID"="T111515"."ROW_WID")
25 - access("T91573"."BALANCE_DT_WID"="C0")
27 - access(ROWID=ROWID)
28 - access(ROWID=ROWID)
Note
- star transformation used for this statement
Statistics
1067 recursive calls
9 db block gets
417513 consistent gets
296603 physical reads
6708 redo size
25220 bytes sent via SQL*Net to client
520 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
522 rows processed
And here are the cursor details:
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID 6s625d3821nq3, child number 0
SELECT /*+ gather_plan_statistics */ SUM(T91573.ACTIVITY_GLOBAL1_AMT)
AS c2, SUM( CASE WHEN T91573.DB_CR_IND = 'CREDIT' THEN
T91573.ACTIVITY_GLOBAL1_AMT END ) AS c3,
T91397.GL_ACCOUNT_NAME AS c4, T91397.GROUP_ACCOUNT_NUM
AS c5, SUM(T91573.BALANCE_GLOBAL1_AMT) AS c6, T156337.ROW_WID
AS c7 FROM W_MCAL_DAY_D T156337 /*
Dim_W_MCAL_DAY_D_Fiscal_Day */ , W_INT_ORG_D T111515 /*
Dim_W_INT_ORG_D_Company */ , W_GL_ACCOUNT_D T91397 /*
Dim_W_GL_ACCOUNT_D */ , W_GL_BALANCE_F T91573 /*
PLAN_TABLE_OUTPUT
Fact_W_GL_BALANCE_F */ WHERE ( T91397.ROW_WID =
T91573.GL_ACCOUNT_WID AND T91573.COMPANY_ORG_WID = T111515.ROW_WID
AND T91573.BALANCE_DT_WID = T156337.ROW_WID AND T111515.COMPANY_FLG
= 'Y' AND T111515.ORG_NUM = '02000' AND
T156337.MCAL_PER_NAME_QTR = '2010 Q 1' ) GROUP BY
T91397.GL_ACCOUNT_NAME, T91397.GROUP_ACCOUNT_NUM, T156337.ROW_WID
Plan hash value: 3262111942
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem| Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 522 |00:51:34.16 | 424K| 111K| 2 | | | |
| 1 | HASH GROUP BY | | 1 | 7882 | 522 |00:51:34.16 | 424K| 111K| 2 | 748K| 748K| 1416K (0)|
|* 2 | HASH JOIN | | 1 | 7882 | 5127 |00:51:34.00 | 424K| 111K| 2 | 1035K| 1035K| 1561K (0)|
| 3 | VIEW | VW_GBC_13 | 1 | 7837 | 5127 |00:51:32.65 | 423K| 111K| 2 | | | |
| 4 | TEMP TABLE TRANSFORMATION | | 1 | | 5127 |00:51:32.64 | 423K| 111K| 2 | | | |
| 5 | LOAD AS SELECT | | 1 | | 0 |00:00:00.09 | 188 | 0 | 2 | 269K| 269K| 269K (0)|
|* 6 | VIEW | index$_join$_114 | 1 | 572 | 724 |00:00:00.01 | 183 | 0 | 0 | | | |
|* 7 | HASH JOIN | | 1 | | 724 |00:00:00.01 | 183 | 0 | 0 | 1011K| 1011K| 1573K (0)|
| 8 | BITMAP CONVERSION TO ROWIDS | | 1 | 572 | 724 |00:00:00.01 | 3 | 0 | 0 | | | |
|* 9 | BITMAP INDEX SINGLE VALUE | W_MCAL_DAY_D_F46 | 1 | | 1 |00:00:00.01 | 3 | 0 | 0 | | | |
| 10 | INDEX FAST FULL SCAN | W_MCAL_DAY_D_P1 | 1 | 572 | 64822 |00:00:00.06 | 180 | 0 | 0 | | | |
| 11 | HASH GROUP BY | | 1 | 7837 | 5127 |00:51:32.54 | 423K| 111K| 0 | 1168K| 1038K| 2598K (0)|
|* 12 | HASH JOIN | | 1 | 26186 | 3267K|03:18:27.02 | 423K| 111K| 0 | 1236K| 1236K| 1248K (0)|
| 13 | TABLE ACCESS FULL | SYS_TEMP_0FD9D73B3_F97A325 | 1 | 572 | 724 |00:00:00.02 | 7 | 2 | 0 | | | |
| 14 | TABLE ACCESS BY INDEX ROWID | W_GL_BALANCE_F | 1 | 26186 | 3267K|03:18:12.81 | 423K| 111K| 0 | | | |
| 15 | BITMAP CONVERSION TO ROWIDS | | 1 | | 3267K|00:00:06.29 | 16142 | 1421 | 0 | | | |
| 16 | BITMAP AND | | 1 | | 74 |00:00:03.06 | 16142 | 1421 | 0 | | | |
| 17 | BITMAP MERGE | | 1 | | 83 |00:00:00.08 | 393 | 0 | 0 | 1024K| 512K| 2754K (0)|
| 18 | BITMAP KEY ITERATION | | 1 | | 764 |00:00:00.01 | 393 | 0 | 0 | | | |
|* 19 | TABLE ACCESS BY INDEX ROWID| W_INT_ORG_D | 1 | 2 | 2 |00:00:00.01 | 3 | 0 | 0 | | | |
|* 20 | INDEX RANGE SCAN | W_INT_ORG_ORG_NUM | 1 | 2 | 2 |00:00:00.01 | 1 | 0 | 0 | | | |
|* 21 | BITMAP INDEX RANGE SCAN | W_GL_BALANCE_F_F4 | 2 | | 764 |00:00:00.01 | 390 | 0 | 0 | | | |
| 22 | BITMAP MERGE | | 1 | | 210 |00:00:03.12 | 15749 | 1421 | 0 | 57M| 7389K| 17M (3)|
| 23 | BITMAP KEY ITERATION | | 4 | | 16405 |00:00:15.36 | 15749 | 1421 | 0 | | | |
| 24 | TABLE ACCESS FULL | SYS_TEMP_0FD9D73B3_F97A325 | 4 | 572 | 2896 |00:00:00.05 | 16 | 6 | 0 | | | |
|* 25 | BITMAP INDEX RANGE SCAN | W_GL_BALANCE_F_F1 |2896 | | 16405 |00:00:24.99 | 15733 | 1415 | 0 | | | |
| 26 | VIEW | index$_join$_003 | 1 | 199K| 199K|00:00:02.50 | 737 | 1 | 0 | | | |
|* 27 | HASH JOIN | | 1 | | 199K|00:00:02.18 | 737 | 1 | 0 | 14M| 2306K| 17M (0)|
|* 28 | HASH JOIN | | 1 | | 199K|00:00:01.94 | 144 | 1 | 0 | 10M| 2639K| 13M (0)|
| 29 | BITMAP CONVERSION TO ROWIDS | | 1 | 199K| 199K|00:00:00.19 | 26 | 0 | 0 | | | |
| 30 | BITMAP INDEX FULL SCAN | W_GL_ACCOUNT_D_M1 | 1 | | 93 |00:00:00.01 | 26 | 0 | 0 | | | |
| 31 | BITMAP CONVERSION TO ROWIDS | | 1 | 199K| 199K|00:00:01.05 | 118 | 1 | 0 | | | |
| 32 | BITMAP INDEX FULL SCAN | W_GL_ACCOUNT_D_M10 | 1 | | 5791 |00:00:00.01 | 118 | 1 | 0 | | | |
| 33 | INDEX FAST FULL SCAN | W_GL_ACCOUNT_D_M18 | 1 | 199K| 199K|00:00:00.19 | 593 | 0 | 0 | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - access("T91397"."ROW_WID"="ITEM_1")
6 - filter("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
7 - access(ROWID=ROWID)
9 - access("T156337"."MCAL_PER_NAME_QTR"='2010 Q 1')
12 - access("T91573"."BALANCE_DT_WID"="C0")
19 - filter("T111515"."COMPANY_FLG"='Y')
20 - access("T111515"."ORG_NUM"='02000')
21 - access("T91573"."COMPANY_ORG_WID"="T111515"."ROW_WID")
25 - access("T91573"."BALANCE_DT_WID"="C0")
27 - access(ROWID=ROWID)
28 - access(ROWID=ROWID)
PLAN_TABLE_OUTPUT
Note
- star transformation used for this statement
78 rows selected.
Can anyone suggest a way to improve the performance? Or even hint at a good place for me to start looking?
Please let me know if there is any additional information I can give.
-Joe
Similar Messages
-
Tuning query with LIKE clause
Hi, is there any chance to improve execution plan for SQL query which is using LIKE clause?
Query:
SELECT * FROM [TABLE_NAME] WHERE ADDRESS LIKE :1 ESCAPE '\';
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 112K| 18M| 11361 (1)| 00:02:17 |
|* 1 | TABLE ACCESS FULL| [TABLE_NAME] | 112K| 18M| 11361 (1)| 00:02:17 |
Execution plan is far from ideal. Table has several millions of records.
This query is used by the application to search using patterns...
Any ideas?
Thx in advance!
This example isn't entirely realistic. Your table T has only one column, which is also indexed. Apparently, for small enough tables of one column, it will search the entire index for the wildcard value. But if I add a second column, or base the single-column version on a larger table, the optimizer uses a full table scan:
SQL> drop table t;
Table dropped.
SQL> CREATE TABLE t AS SELECT DBMS_RANDOM.STRING('a',100) a
2 ,DBMS_RANDOM.STRING('a',100) b
3 FROM user_objects;
Table created.
SQL>
SQL> CREATE INDEX t_idx ON t (a) COMPUTE STATISTICS;
Index created.
SQL>
SQL> SET autotrace traceonly explain
SQL>
SQL> SELECT *
2 FROM t
3 WHERE a LIKE '%acb%';
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=100)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=100)
SQL> drop table t;
Table dropped.
SQL> CREATE TABLE t AS SELECT DBMS_RANDOM.STRING('a',100) a
2 -- ,DBMS_RANDOM.STRING('a',100) b
3 FROM all_objects;
Table created.
SQL>
SQL> CREATE INDEX t_idx ON t (a) COMPUTE STATISTICS;
Index created.
SQL>
SQL> SET autotrace traceonly explain
SQL>
SQL> SELECT *
2 FROM t
3 WHERE a LIKE '%acb%';
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20 Card=399 Bytes=39900)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=20 Card=399 Bytes=39900) -
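Coming back to the original LIKE question: a B-tree index cannot help a leading-wildcard pattern, so the usual alternative is an Oracle Text index, which resolves mid-string searches from its token tables. A hedged sketch follows (the index name and `the_table` placeholder are made up, and CONTAINS semantics differ from LIKE in tokenization and case handling):

```sql
-- Hypothetical: a CONTEXT index lets substring-style searches avoid a
-- full table scan; it needs CTXSYS privileges and periodic sync after DML.
CREATE INDEX address_txt_ix ON the_table (address)
  INDEXTYPE IS CTXSYS.CONTEXT;

SELECT *
FROM   the_table
WHERE  CONTAINS(address, '%acb%') > 0;
```

Double-wildcard terms are still the most expensive case for Oracle Text, but they are served from the index rather than by scanning 18 MB of table data.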
Adding column in table having millions of records
Hi,
Oracle release :11.2.0.3
Table A exists with millions of records. - has many indexes on it.
Need to add one column which exists in table B with same no of records.
Rowid is common join condition between table A and B to update this new column in A.
Please advise on the fastest way to update this column.
Explain plan output for update query is :
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 181M| 3287M| 95T (1)|999:59:59 |
| 1 | UPDATE | A | | | | |
| 2 | PX COORDINATOR | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 181M| 3287M| 205K (2)| 00:47:58 |
| 4 | PX BLOCK ITERATOR | | 181M| 3287M| 205K (2)| 00:47:58 |
| 5 | TABLE ACCESS FULL | A | 181M| 3287M| 205K (2)| 00:47:58 |
PLAN_TABLE_OUTPUT
|* 6 | FILTER | | | | | |
|* 7 | TABLE ACCESS BY INDEX ROWID | B | 301K| 15M| 528K (1)| 02:03:24 |
|* 8 | INDEX RANGE SCAN | SYS_C0073404 | 30M| | 31081 (1)| 00:07:16 |
Thanks in advance
create table new_A as select A.*, column_in_B from A, B where A.row_id = B.row_id;
drop table A;
rename new_A to A;
No - that can't be right.
You need to access the ROWID of table A; not some column in table A that is named 'row_id'. And that assumes, as PaulHorth asked, that table B DOES have a column named 'row_id' that contains the ROWID values of table A.
Also, you only posted the plan for the update statement: post the actual query that plan is based on.
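If table B genuinely stores table A's ROWIDs, the CTAS-and-rename approach the first reply was aiming for would look roughly like this sketch (table and column names are assumptions, and B.ROW_ID is assumed to be of ROWID datatype):

```sql
-- Hypothetical: rebuild A with the extra column instead of updating
-- 181M rows in place; indexes, grants, and constraints must be
-- recreated on the new table afterwards.
CREATE TABLE new_a AS
SELECT a.*, b.new_col
FROM   a, b
WHERE  b.row_id = a.ROWID;

-- then swap:
-- DROP TABLE a;
-- RENAME new_a TO a;
```

A CTAS (optionally NOLOGGING and parallel) typically beats a row-by-row correlated UPDATE at this scale because it writes each block once.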
create table emp_rowid as
select 'new_' || e.ename new_ename, e.rowid row_id
from emp e
where deptno = 20;

update /*+ parallel(4) */ emp e
set ename = (select new_ename from emp_rowid er
             where er.row_id = e.rowid)
where rowid in (select row_id from emp_rowid);
select * from table(dbms_xplan.display_cursor()); -
WebInterfaces for Millions of records - Transactional InfoCube
Hi Gerd,
Could you please suggest which one I should use when I'm dealing with millions of records (large amounts of data)?
(Displaying data from planning folders or the Web Interface Builder)
Right now I'm using the Web Interface Builder when I'm doing planning where the user is allowed to enter values - for millions of records, such as revenue forecast planning on sales orders.
Thanks in advance,
Thanks for your time,
Saritha.
Hello Saritha,
Well - technically there is no big difference whether you are using Web interfaces or planning folders. All data has to be selected from the data base, processed by the BPS, the information has to be transmitted to the PC and displayed there. So both front ends should have roughly the same speed.
Sorry, but one question - is it really necessary to work with millions of data records online? The philosophy of the BPS is that you should limit the number of records you use online as much as possible - it should be an amount the user can also handle online, i.e. manually working with every record (which is probably not possible when handling 1 million records). If a large number of records has to be calculated/manipulated, this should be done in a batch job - i.e. a planning sequence that runs in the background. This prevents the system from terminating the operation due to a long run time (the usual time until a time-out occurs for an online transaction is about 20 min) and also gives you more opportunities to control memory use or parallelization of processes (see note 645454).
Best regards,
Gerd Schoeffl
NetWeaver RIG BI -
Tuning the Query with Distinct Clause
Hi All,
I have the below query that returns 28113657 records
select src_Wc_id, osp_id, src_osp_id
from osp_complements o1
where exists (select 1 from wc_crossref wc
where wc.src_wc_id = o1.SRC_WC_ID
and wc.state = 'CA')
This query executes within a second...
But when I include a DISTINCT clause in the select statement, it takes more time... (more than 20 mins)
I am trying to get it tuned. Please advise me with your knowledge on how to get it done.
Thanks for your time
Kannan.
Retrieving distinct rows requires a sort of all returned rows. 20 - 3 = ~17 mins for sorting 28 million rows looks like too much. You need to tune your instance in order to speed up the sort operation. The amount of memory dedicated to sorts is controlled by the PGA_AGGREGATE_TARGET parameter. If it's set to 0 (not recommended) then SORT_AREA_SIZE is used. The process of PGA tuning is quite complex and is described in the PGA Memory Management chapter of the Performance Tuning Guide.
There is a workaround which allows you to bypass the sort operation, but it requires a proper index and proper access via that index. The idea is that rows retrieved via an index are automatically ordered by the indexed columns. If those and only those columns (possibly in the same order as in the index, I don't know) are selected using DISTINCT, then the sort is not actually performed. Rows are already sorted due to access via the index.
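A hedged sketch of that workaround for the query in this thread (the index name is made up, and whether the optimizer actually chooses an ordered index walk over a sort depends on statistics and NOT NULL constraints):

```sql
-- Hypothetical: an index containing exactly the selected columns can
-- let DISTINCT be satisfied from pre-ordered index entries instead of
-- sorting 28 million fetched rows.
CREATE INDEX osp_comp_ix
  ON osp_complements (src_wc_id, osp_id, src_osp_id);
```

If the three selected columns are all covered by the index, the table itself never needs to be visited for the DISTINCT projection.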
Hope this will help you.
Regards,
Dima -
Database table with potentially millions of records
Hello,
We want to keep track of users' transaction history from the performance database. The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended for temporary purposes and data from these tables is deleted every few months, we lose all of the users' historical records.
We want to keep track of the following in a table that we can query later:
User ID - Length 12
Transaction - Length 20
Date - Length 8
With over 20,000 end users in production this can translate into thousands of records to be inserted into this table daily.
What is the best way to store this type of information? Is there a specific table type that is designed for storing massive data quantity? Also, over time (few years) this table can grow into millions or hundreds of millions of records. How can we manage that in terms of performance and storage space?
If anyone has worked with database tables with very large amounts of records, and would like to share your experiences, please let us know how we could/should structure this function in our environment.
Best Regards.
Hi SS,
Alternatively, you can use a <u>cluster table</u>. For more help refer to F1 help on <b>"IMPORT TO / EXPORT FROM DATABASE"</b> statements.
Or you can store data as a <u>file</u> on the application server using <b>"OPEN DATASET, TRANSFER, CLOSE DATASET"</b> statements.
You can also choose to archive data older than some cutoff date.
You can also mix these alternatives for recent and archived data.
--Serdar [ BC ] -
Query to get the records with same last name
I need to write a single sql query to get all records with duplicate last_name's. For example, if tab1 has 4 records:
10 Amit Kumar
20 Kishore Kumar
30 Sachin Gupta
40 Peter Gabriel
then the query should return
10 Amit Kumar
20 Kishore Kumar
(id, name, l_name being the 3 columns in the table)
Appreciate your help.
Thank you
Mary
SQL> create table mytable (id,name,l_name)
2 as
3 select 10, 'Amit', 'Kumar' from dual union all
4 select 20, 'Kishore', 'Kumar' from dual union all
5 select 30, 'Sachin', 'Gupta' from dual union all
6 select 40, 'Peter', 'Gabriel' from dual
7 /
Table created.
SQL> select id
2 , name
3 , l_name
4 from ( select t.*
5 , count(*) over (partition by l_name) cnt
6 from mytable t
7 )
8 where cnt > 1
9 /
ID NAME L_NAME
10 Amit Kumar
20 Kishore Kumar
2 rows selected.
Regards,
Rob. -
Update_Insert - 1 query with multiple records
Hi
I have a file coming with say 50 records ...each record having a key field ( order num and order Item )
I also know how to define a target JDBC structure...
My question is: is there a way I can fire a single UPDATE_INSERT query to the database with these 50 records?
I do not want to repeat the STATEMENT node and create 50 separate queries...
Is this possible?
As the docu says,
action=UPDATE_INSERT
The statement has the same format as for the UPDATE action. Initially, the same action is executed as for UPDATE. If no update to the database table can be made for this action (the condition does not apply to any table entry), values of the table described in the <access> element are inserted in accordance with the description of the action INSERT. <key> elements are ignored in this case.
The response document has the following format; one of the two values is always 0 because either an UPDATE or an INSERT action is always executed:
<update_count>count</update_count>
<insert_count>count</insert_count> -
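As an illustration of the documented format, a single statement node for an UPDATE_INSERT action might look like the sketch below; the table and field names are assumptions based on the order number / order item key mentioned in the question:

```xml
<!-- Hypothetical JDBC receiver payload: UPDATE by key, INSERT if no row matches -->
<StatementName>
  <dbTableName action="UPDATE_INSERT">
    <table>ORDER_ITEMS</table>
    <access>
      <ORDER_NUM>1001</ORDER_NUM>
      <ORDER_ITEM>10</ORDER_ITEM>
      <QUANTITY>5</QUANTITY>
    </access>
    <key>
      <ORDER_NUM>1001</ORDER_NUM>
      <ORDER_ITEM>10</ORDER_ITEM>
    </key>
  </dbTableName>
</StatementName>
```

Per the docu quoted above, either update_count or insert_count in the response is always 0, since exactly one of the two actions is executed for each statement.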
Sql Except query to Display mismatched records along with Table names
Hi
I am using below query to display mismatch records between two tables
SELECT * FROM table1
EXCEPT
SELECT * FROM table2
UNION
SELECT * FROM table2
EXCEPT
SELECT * FROM table1
This displays mismatched records like below
Sunil 1000 india
Sunil 1500 india
I would like to display even the table names in the result For ex;
Sunil 1000 india Table1
Sunil 1500 india Table2
Can you please help us in this regard.
cnk_gr's query should work for you.
One change that I would make is to use UNION ALL, not UNION. UNION eliminates duplicate rows, which means SQL has to do additional work (sort the result and then check for duplicates).
So if you can have duplicates and don't want them in your result, then you would use UNION. And if you can have duplicates and you want the duplicates in the result, you would use UNION ALL. But in cases like this, where you know you cannot have
duplicates (because column 1 contains 'TABLE1' for every row in the first half and column 1 contains 'TABLE2' for every row returned from the second half of the query), you should always use UNION ALL. It will be more efficient.
Tom -
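A hedged sketch of the pattern under discussion: wrap each EXCEPT in a derived table so the table-name literal does not take part in the row comparison, then combine the two halves with UNION ALL:

```sql
-- Column lists are illustrative; replace * with explicit columns in
-- production so the EXCEPT comparison is well defined.
SELECT d1.*, 'Table1' AS source_table
FROM (SELECT * FROM table1 EXCEPT SELECT * FROM table2) AS d1
UNION ALL
SELECT d2.*, 'Table2' AS source_table
FROM (SELECT * FROM table2 EXCEPT SELECT * FROM table1) AS d2;
```

As noted above, UNION ALL is safe here because the literal tag guarantees the two halves cannot produce duplicate rows across each other.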
Tuning SQL query with SDO and Contains?
I'trying to optimize a query
with a sdo_filter and an intermedia_contains
on a 3.000.000 records table,
the query look like this
SELECT COUNT(*)
FROM professionnel
WHERE mdsys.sdo_filter(professionnel.coor_cart,
        mdsys.sdo_geometry(2003, null, null,
          mdsys.sdo_elem_info_array(1,1003,4),
          mdsys.sdo_ordinate_array(809990,2087279,778784,2087279,794387,2102882)),
        'querytype=window') = 'TRUE'
  AND professionnel.code_rubr = '12 3 30'
  AND CONTAINS(professionnel.Ctx, 'PLOMBERIE within Nom and ( RUE within Adresse1 )', 1) > 0
and it takes 15 seconds on a dual 750 MHz Pentium III with 1.5 GB of memory running 8.1.6 on Linux.
What can i do to improve this query time?
Hi Vincent,
We have patches for Oracle 8.1.6 Spatial
on NT and Solaris.
These patches include bug fixes and
performance enhancements.
We are in the process of making these patches available in a permanent place, but until then, I will temporarily put the patches on:
ftp://oracle-ftp.oracle.com/
Log in as anonymous and use your email for
password.
The patches are in /tmp/outgoing in:
NT816-000706.zip - NT patch
libordsdo.tar - Solaris patch
I recommend doing some analysis on
individual pieces of the query.
i.e. time the following:
1)
SELECT COUNT(*)
FROM professionnel
WHERE mdsys.sdo_filter(
professionnel.coor_cart,
mdsys.sdo_geometry(
2003, null, null,
mdsys.sdo_elem_info_array(1,1003,4),
mdsys.sdo_ordinate_array(
809990,2087279,
778784,2087279,
794387,2102882)),
'querytype=window') = 'TRUE';
2)
SELECT COUNT(*)
FROM professionnel
WHERE CONTAINS(professionnel.Ctx,
'PLOMBERIE within Nom and ( RUE within Adresse1)',1) >0;
You might want to try reorganizing the entire
query as follows (no promises).
If you contact me directly, I can try to
help to further tune the SQL.
Hope this helps. Thanks.
Dan
select count(*)
FROM
  (SELECT /*+ no_merge */ rowid rid
   FROM professionnel
   WHERE mdsys.sdo_filter(
           professionnel.coor_cart,
           mdsys.sdo_geometry(
             2003, null, null,
             mdsys.sdo_elem_info_array(1,1003,4),
             mdsys.sdo_ordinate_array(809990,2087279,
                                      778784,2087279,
                                      794387,2102882)),
           'querytype=window') = 'TRUE'
     AND professionnel.code_rubr = '12 3 30'
  ) a,
  (SELECT /*+ no_merge */ rowid rid
   FROM professionnel
   WHERE CONTAINS(professionnel.Ctx,
           'PLOMBERIE within Nom and ( RUE within Adresse1)',1) > 0
  ) b
where a.rid = b.rid;
**NOTE** Try this with no index on code_rubr
BDC select query with addition based on all If conditions
Hi, can anyone send me the select query with conditions like
If itab is not initial.
Endif. And, if possible, with validation messages also.
If CHECK_NUMBER of CHECK_ADVICE in the flat file = PAYR-CHECT, then update SAP field BSEG-XREF2.
9. When the flat file check number = PAYR-CHECT, then insert or update the flat file RECEIPT_DATE into SAP field BSEG-XREF1.
Please send me immediately.
> SELECT rsnum
> rspos
> matnr
> werks
> lgort
> shkzg
> aufnr
> bdmng
> enmng
> FROM resb INTO TABLE it_rsnum FOR ALL ENTRIES IN it_aufnr
> WHERE rsnum EQ it_aufnr-rsnum
> AND xloek NE 'X'
> AND postp NE 'X'
> AND resb~bdmng GE resb~enmng.
> ENDIF.
>
> Database Table RESB: 40,000,000 Records (40 Million)
> Internal Table it_aufnr: 20,000 Entries
> Entries selected from RESB: 150,000.
>
Hi,
the problem is the big for all entries table.
Your 20.000 records FAE will be split into SEVERAL sql statements depending on the size of the table.
Where do you get the it_auftrn from?
If it's another transparent table try to JOIN this table to your big table.
SELECT rsnum
rspos
matnr
werks
lgort
shkzg
aufnr
bdmng
enmng
FROM resb JOIN tab_auftrn ON resb~rsnum = tab_auftrn~auftrn
AND xloek NE 'X'
AND postp NE 'X'
AND resb~bdmng GE resb~enmng.
Make sure that your WHERE filter and the JOIN keys on both tables are supported by indexes.
150,000 records out of 40 million can definitely be serviced by an appropriate index:
i.e. Index fields (check your PK or secondary indexes):
rsnum
enmng
aufnr
bye
yk -
Hi folks,
A simple question (8.0.5 on WinNT):
Is there any way to speed up the execution time for
a query with a 'unique' clause? I have a table with 1.6 million
records, and I want to get unique values for a
particular column which is not the primary key.
The result of the query -
select unique(col1) from table1;
typically returns between 1 to 100 unique values for col1.
As such, this query takes up to 18 secs on my test machine in
SQL Window itself (for 1.6 million records in the table). Any
ways to speed it up?
Thanks,
Cheers,
Sanchayan
Having an index on the column should speed it up, but only
where the column is defined as NOT NULL I think. If you
can't put a NOT NULL constraint on the column consider
whether you need to know about NULL values.
If the index was of the bitmap type that would also speed it
up further, probably to take the query time down to one or
two seconds, but they come with a big list of warnings.
Actually, I can't remember now if they're available in your
version. -
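A sketch of the bitmap-index idea from the reply above (names are illustrative, and bitmap indexes carry well-known caveats for tables with concurrent DML):

```sql
-- Hypothetical: with only 1-100 distinct values, a bitmap index is
-- tiny, and DISTINCT can be answered from the index alone when the
-- column is known (or constrained) to be NOT NULL.
CREATE BITMAP INDEX table1_col1_bix ON table1 (col1);

SELECT DISTINCT col1
FROM   table1
WHERE  col1 IS NOT NULL;
```

The explicit IS NOT NULL predicate lets the optimizer use the index even without a NOT NULL constraint, since B-tree and bitmap entries for all-NULL keys are the complication mentioned above.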
Performance of APD with source as 'Query with conditions'
Hi,
I have a question regarding the performance of an APD query with conditions when it is run in the background.
I have an InfoCube with 80 million records. I have a query to find the Top 15 (condition in query) customers per territory.
With the help of an APD query (source) I wanted to populate the details (customer category, etc.) of the Top 15 customers into a DSO (target).
Right now it takes 6 minutes to run the query in web.
I wanted to know how feasible it is to use an APD (run in the background) to dump the results of the BEx query (which has the Top 15 condition) into the DSO.
Also, what other options do I have?
Appreciate your answers.
Regards
Thomas, thanks for the response. I have checked the file on the app server and found that lines with more than 512 characters are being truncated after 512 characters. The Basis team is not aware of any such limit on the app server. I am sure a few people on the forum have extracted data to a file on the application server using APD. Please note that file extraction to the client workstation works as expected (it writes lines longer than 512 characters).
-
Query with order by taking 1 hr
Hi,
The query with the ORDER BY takes 1 hour; without the ORDER BY it takes 9 seconds.
Please tell me what the reason could be and how to tune it.
Query:
SELECT
T17.CONFLICT_ID,
T17.LAST_UPD,
T17.CREATED,
T17.LAST_UPD_BY,
T17.CREATED_BY,
T17.MODIFICATION_NUM,
T17.ROW_ID,
T1.ACCNT_TYPE_CD,
T36.X_BRIDGESTATION,
T36.X_CTI_PIN,
T36.X_FLOOR,
T36.X_SEGMENT2,
T36.X_SEGMENT3,
T36.X_CONTACT_STATUS,
T36.X_DEALING_CODE,
T36.X_DELETE,
T36.X_DEPARTMENT,
T36.X_DIRECT_MKT,
T36.X_FASS_LAST_CONTACT_DATE,
T36.X_SEGMENT1,
T36.X_LAST_TRAINED_DATE,
T36.X_LEGAL_CONSENT,
T36.X_LOCAL_FST_NAME,
T36.X_LOCAL_LAST_NAME,
T36.X_PREF_LANG,
T36.X_PROD_BLEND,
T36.X_SALUTATION,
T36.X_BSC_SIDE,
T36.X_END_USR_ACT,
T36.X_PRIM_ASSET_CLASS,
T36.X_SEC_ASSET_CLASS,
T36.X_STATUS,
T36.X_LANGUAGE,
T36.X_SUPPRESS_SMS_FLG,
T36.X_TRAINING_ADDRESS,
T36.X_UPD_TYPE,
T36.X_XTRA_UPD,
T36.X_XTRA_ID,
T36.X_ESERVICE_USER,
T36.X_SALES_COMMENTS,
T36.PR_DEPT_OU_ID,
T1.INTEGRATION_ID,
T1.PRTNR_FLG,
T36.BIRTH_DT,
T36.CELL_PH_NUM,
T9.ATTRIB_07,
T5.LAST_UPD,
T36.EMAIL_ADDR,
T36.EMP_FLG,
T36.FAX_PH_NUM,
T36.FST_NAME,
T36.HOME_PH_NUM,
T36.JOB_TITLE,
T36.LAST_NAME,
T36.SEX_MF,
T36.PER_TITLE,
T36.MID_NAME,
T36.OWNER_PER_ID,
T17.NAME,
T36.PERSON_UID,
T36.PRIV_FLG,
T1.NAME,
T29.PR_ADDR_ID,
T36.PR_REP_DNRM_FLG,
T36.PR_REP_MANL_FLG,
T36.PR_REP_SYS_FLG,
T36.PR_MKT_SEG_ID,
T36.PR_GRP_OU_ID,
T36.PR_OPTY_ID,
T36.PR_PER_ADDR_ID,
T36.PR_PER_PAY_PRFL_ID,
T36.PR_POSTN_ID,
T36.PR_RESP_ID,
T19.OWN_INST_ID,
T19.INTEGRATION_ID,
T36.SOC_SECURITY_NUM,
T29.STATUS,
T36.SUPPRESS_CALL_FLG,
T36.SUPPRESS_MAIL_FLG,
T36.WORK_PH_NUM,
T36.BU_ID,
T36.PR_ALT_PH_NUM_ID,
T36.PR_EMAIL_ADDR_ID,
T36.PR_SYNC_USER_ID,
T18.SHARE_HOME_PH_FLG,
T36.PR_REGION_ID,
T36.NATIONALITY,
T36.CITIZENSHIP_CD,
T36.AGENT_FLG,
T36.MEMBER_FLG,
T13.PR_EMP_ID,
T36.PR_OU_ADDR_ID,
T33.PR_EMP_ID,
T13.PR_EMP_ID,
T21.LOGIN,
T26.LOGIN,
T25.PR_FAX_NUM_ID,
T36.PR_INDUST_ID,
T36.PR_NOTE_ID,
T1.PR_POSTN_ID,
T36.PR_PROD_LN_ID,
T25.PR_SMS_NUM_ID,
T36.PR_SECURITY_ID,
T6.NAME,
T36.MED_SPEC_ID,
T36.PR_STATE_LIC_ID,
T36.PR_TERR_ID,
T36.PROVIDER_FLG,
T36.CUST_SINCE_DT,
T34.ADDR,
T34.CITY,
T34.COUNTRY,
T34.ZIPCODE,
T34.STATE,
T4.NAME,
T36.CURR_PRI_LST_ID,
T27.ROW_STATUS,
T22.LOGIN,
T2.CITY,
T2.COUNTRY,
T2.ZIPCODE,
T2.COUNTY,
T2.ADDR,
T20.X_ACC_CLASS,
T20.X_FS_INLIMITS,
T20.X_PRIORITY,
T20.X_DC_LOC,
T20.X_SERV_PROV_ID,
T20.X_FS_LOC,
T20.X_LOCAL_ACCOUNT_NAME,
T20.NAME,
T20.LOC,
T20.PR_BL_ADDR_ID,
T20.PR_BL_PER_ID,
T20.PR_SHIP_ADDR_ID,
T20.PR_SHIP_PER_ID,
T20.OU_NUM,
T16.ROW_ID,
T20.PR_SRV_AGREE_ID,
T16.ROW_ID,
T15.PRIM_MARKET_CD,
T16.ROW_ID,
T14.CITY,
T14.COUNTRY,
T14.ZIPCODE,
T14.STATE,
T14.ADDR,
T35.NAME,
T32.NAME,
T8.CHRCTR_ID,
T32.PRIV_FLG,
T3.LOGIN,
T31.LOGIN,
T36.ROW_ID,
T36.MODIFICATION_NUM,
T36.CREATED_BY,
T36.LAST_UPD_BY,
T36.CREATED,
T36.LAST_UPD,
T36.CONFLICT_ID,
T36.PAR_ROW_ID,
T25.ROW_ID,
T25.MODIFICATION_NUM,
T25.CREATED_BY,
T25.LAST_UPD_BY,
T25.CREATED,
T25.LAST_UPD,
T25.CONFLICT_ID,
T25.PAR_ROW_ID,
T18.ROW_ID,
T18.MODIFICATION_NUM,
T18.CREATED_BY,
T18.LAST_UPD_BY,
T18.CREATED,
T18.LAST_UPD,
T18.CONFLICT_ID,
T18.PAR_ROW_ID,
T9.ROW_ID,
T9.MODIFICATION_NUM,
T9.CREATED_BY,
T9.LAST_UPD_BY,
T9.CREATED,
T9.LAST_UPD,
T9.CONFLICT_ID,
T9.PAR_ROW_ID,
T19.ROW_ID,
T19.MODIFICATION_NUM,
T19.CREATED_BY,
T19.LAST_UPD_BY,
T19.CREATED,
T19.LAST_UPD,
T19.CONFLICT_ID,
T19.PAR_ROW_ID,
T27.ROW_ID,
T24.ROW_ID,
T23.ROW_ID,
T2.ROW_ID,
T28.ROW_ID,
T16.ROW_ID,
T11.ROW_ID,
T14.ROW_ID,
T35.ROW_ID,
T8.ROW_ID,
T30.ROW_ID,
T7.ROW_ID
FROM
SIEBEL.S_ORG_EXT T1,
SIEBEL.S_ADDR_PER T2,
SIEBEL.S_USER T3,
SIEBEL.S_PRI_LST T4,
SIEBEL.S_PER_DEDUP_KEY T5,
SIEBEL.S_MED_SPEC T6,
SIEBEL.S_PARTY T7,
SIEBEL.S_CON_CHRCTR T8,
SIEBEL.S_CONTACT_X T9,
SIEBEL.S_POSTN T10,
SIEBEL.S_CON_ADDR T11,
SIEBEL.S_POSTN T12,
SIEBEL.S_POSTN T13,
SIEBEL.S_ADDR_PER T14,
SIEBEL.S_ORG_EXT_FNX T15,
SIEBEL.S_PARTY T16,
SIEBEL.S_PARTY T17,
SIEBEL.S_EMP_PER T18,
SIEBEL.S_CONTACT_SS T19,
SIEBEL.S_ORG_EXT T20,
SIEBEL.S_USER T21,
SIEBEL.S_USER T22,
SIEBEL.S_CON_ADDR T23,
SIEBEL.S_PARTY T24,
SIEBEL.S_CONTACT_LOYX T25,
SIEBEL.S_USER T26,
SIEBEL.S_POSTN_CON T27,
SIEBEL.S_PARTY_PER T28,
SIEBEL.S_POSTN_CON T29,
SIEBEL.S_PARTY T30,
SIEBEL.S_USER T31,
SIEBEL.S_CHRCTR T32,
SIEBEL.S_POSTN T33,
SIEBEL.S_ADDR_PER T34,
SIEBEL.S_CONTACT_XM T35,
SIEBEL.S_CONTACT T36
WHERE
T36.PR_DEPT_OU_ID = T1.PAR_ROW_ID (+) AND
T1.PR_POSTN_ID = T33.PAR_ROW_ID (+) AND
T36.PR_POSTN_ID = T13.PAR_ROW_ID (+) AND
T17.ROW_ID = T29.CON_ID (+) AND T29.POSTN_ID (+) = '1-ERPT' AND
T33.PR_EMP_ID = T21.PAR_ROW_ID (+) AND
T13.PR_EMP_ID = T26.PAR_ROW_ID (+) AND
T36.PR_PER_ADDR_ID = T34.ROW_ID (+) AND
T36.MED_SPEC_ID = T6.ROW_ID (+) AND
T36.CURR_PRI_LST_ID = T4.ROW_ID (+) AND
T17.ROW_ID = T5.PERSON_ID (+) AND
T17.ROW_ID = T36.PAR_ROW_ID AND
T17.ROW_ID = T25.PAR_ROW_ID (+) AND
T17.ROW_ID = T18.PAR_ROW_ID (+) AND
T17.ROW_ID = T9.PAR_ROW_ID AND
T17.ROW_ID = T19.PAR_ROW_ID (+) AND
T36.PR_POSTN_ID = T27.POSTN_ID AND T36.ROW_ID = T27.CON_ID AND
T27.POSTN_ID = T24.ROW_ID AND
T27.POSTN_ID = T12.PAR_ROW_ID (+) AND
T12.PR_EMP_ID = T22.PAR_ROW_ID (+) AND
T36.PR_OU_ADDR_ID = T23.ADDR_PER_ID (+) AND T36.PR_DEPT_OU_ID = T23.ACCNT_ID (+) AND
T36.PR_OU_ADDR_ID = T2.ROW_ID (+) AND
T36.PR_DEPT_OU_ID = T28.PARTY_ID (+) AND T36.ROW_ID = T28.PERSON_ID (+) AND
T36.PR_DEPT_OU_ID = T16.ROW_ID (+) AND
T36.PR_DEPT_OU_ID = T20.PAR_ROW_ID (+) AND
T36.PR_DEPT_OU_ID = T15.PAR_ROW_ID (+) AND
T29.PR_ADDR_ID = T11.ADDR_PER_ID (+) AND T29.CON_ID = T11.CONTACT_ID (+) AND
T29.PR_ADDR_ID = T14.ROW_ID (+) AND
T36.X_SEGMENT1 = T35.ROW_ID (+) AND
T36.PR_MKT_SEG_ID = T8.ROW_ID (+) AND
T8.CHRCTR_ID = T32.ROW_ID (+) AND
T1.PR_POSTN_ID = T30.ROW_ID (+) AND
T1.PR_POSTN_ID = T10.PAR_ROW_ID (+) AND
T10.PR_EMP_ID = T3.PAR_ROW_ID (+) AND
T36.PR_SYNC_USER_ID = T7.ROW_ID (+) AND
T36.PR_SYNC_USER_ID = T31.PAR_ROW_ID (+) AND
((T36.X_DELETE = 'N') AND
(T36.PRIV_FLG = 'N' AND T17.PARTY_TYPE_CD != 'Suspect')) AND
(T9.ATTRIB_10 = 'N')
ORDER BY
T36.LAST_NAME, T36.FST_NAME

@afalty: the story you are telling about the order of the tables being important, from smallest to largest, et cetera, is only partially true, and only when dealing with the rule-based optimizer. Nowadays almost everybody is using the cost-based optimizer, so those remarks can very likely be ignored.
@original poster:
I think you are "measuring" the elapsed time with TOAD, am I right? And you are probably reporting 9 seconds because it took TOAD 9 seconds to display the first records. If you scrolled down to the last record, it would likely take much more time. A sort operation costs resources and time, but very unlikely this much.
When you want your rows sorted, all rows must be visited before you know for sure which one is the smallest. That's why it takes longer to display the first row. Without an ORDER BY, the query can begin returning rows much sooner.
Regards,
Rob. -
Inserting Millions of records-Please Help!
Hi All,
I have a scenario where I have to query MARA and filter out some articles, then query the WLK1 table (article/site combination) and insert the records into a custom (Z) table. The result may be millions of records.
Can anyone tell me an efficient way to insert a large number of records? This is urgent. Please help.
Warm Regards,
Sandeep ShenoyThis is sample code I am using in one of my programs. You can try a similar approach and insert into the custom table on every loop pass.
I am processing 2000 records at a time. You can decide the number and code accordingly.
if not tb_bkpf[] is initial.
* Fetch the data from BSEG for every 2000 entries in BKPF to
* reduce the overhead of the database extraction.
  clear l_lines.
  describe table tb_bkpf lines l_lines.
  if l_lines >= 1.
    clear: l_start, l_end.
    do.
      l_start = l_end + 1.
      l_end = l_end + 2000.
      if l_end > l_lines.
        l_end = l_lines.
      endif.
      append lines of tb_bkpf from l_start to l_end to tb_bkpf_temp.
* Populate tb_bseg_tmp in the field order of the database table.
      select bukrs
             belnr
             gjahr
             buzei
             shkzg
             dmbtr
             hkont
             matnr
             werks
        from bseg
        appending table tb_bseg_tmp
        for all entries in tb_bkpf_temp
        where bukrs = tb_bkpf_temp-bukrs and
              belnr = tb_bkpf_temp-belnr and
              gjahr = tb_bkpf_temp-gjahr and
              hkont in s_hkont.
      refresh tb_bkpf_temp.
      if l_end >= l_lines.
        exit.
      endif.
    enddo.
  endif.
endif.
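For the insert side of the question, the same packaging idea applies: an array INSERT sends one database round trip per package instead of one per row. Below is a minimal sketch along the same lines as the fetch code above; the custom table name ZART_SITE and the internal table tb_result are hypothetical placeholders, assumed to have matching structures.

```abap
* Insert tb_result into the custom table ZART_SITE (hypothetical name)
* in packages of 2000 rows, committing after each package so one huge
* transaction does not have to stay open for millions of records.
data: tb_pack like tb_result occurs 0,
      l_total type i,
      l_from  type i,
      l_to    type i.

describe table tb_result lines l_total.
if l_total >= 1.
  clear l_to.
  do.
    l_from = l_to + 1.
    l_to   = l_to + 2000.
    if l_to > l_total.
      l_to = l_total.
    endif.
    refresh tb_pack.
    append lines of tb_result from l_from to l_to to tb_pack.
*   Array insert: one round trip for the whole package. ACCEPTING
*   DUPLICATE KEYS skips rows that already exist instead of dumping.
    insert zart_site from table tb_pack accepting duplicate keys.
    commit work.
    if l_to >= l_total.
      exit.
    endif.
  enddo.
endif.
```

The package size of 2000 mirrors the sample above; tune it against your rollback segment and memory limits.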