Better performance: view or derived table?
Hi,
I need to know which one will perform better.
Approach I:
Create view view_table1 as
  select field1, field2, field3 from table1 where field_ind = 'Y';

Select a.field1, b.field3
from table1 a,
     view_table1 b
where a.field1 = b.field1
  and a.field2 = b.field2;
Approach II:
Select a.field1, b.field3
from table1 a,
     (select field1, field2, field3 from table1 where field_ind = 'Y') b
where a.field1 = b.field1
  and a.field2 = b.field2;
Thanks,
Vishu
The two are identical: a view is just stored SQL. The optimizer substitutes the view text into the query before optimizing, so both approaches produce the same plan.
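The substitution can be demonstrated outside Oracle too. Below is a minimal sketch using Python's sqlite3 (the table and column names mirror the illustrative ones above): a query joining against the view and the same query with the view's text inlined as a derived table return identical results.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (field1, field2, field3, field_ind);
    CREATE VIEW view_table1 AS
        SELECT field1, field2, field3 FROM table1 WHERE field_ind = 'Y';
""")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)",
                 [(1, 'x', 'v1', 'Y'), (2, 'y', 'v2', 'N'), (3, 'z', 'v3', 'Y')])

# Approach I: join against the view
via_view = conn.execute("""
    SELECT a.field1, b.field3
    FROM table1 a, view_table1 b
    WHERE a.field1 = b.field1 AND a.field2 = b.field2
    ORDER BY 1
""").fetchall()

# Approach II: the view's text written inline as a derived table
via_inline = conn.execute("""
    SELECT a.field1, b.field3
    FROM table1 a,
         (SELECT field1, field2, field3 FROM table1 WHERE field_ind = 'Y') b
    WHERE a.field1 = b.field1 AND a.field2 = b.field2
    ORDER BY 1
""").fetchall()

print(via_view)                # [(1, 'v1'), (3, 'v3')]
print(via_view == via_inline)  # True
```

Pick whichever reads better; performance is not the deciding factor here.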
Similar Messages
-
Any general tips on getting better performance out of a multi-table insert?
I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command, which ends up inserting into 5 separate tables conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I have done myself. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy; maybe 3 or 4 of them are on multiple columns (2-3 max). The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried: with APPEND, without APPEND, and with PARALLEL (though adding PARALLEL seemed to make the query run in serial, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute into table1. That means it should also insert about that much into table2, table3 and table4, and then a subset of that into table5.
Does that seem normal or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I can gather to see if maybe the database is poorly configured for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data queued for commit/rollback is causing some of the problem?
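The commit-every-x-rows idea in the P.S. is workable. Here is a minimal sketch in Python/sqlite3 standing in for the Oracle shape (BULK COLLECT ... LIMIT in a loop with FORALL and a periodic COMMIT); the table names and batch size are illustrative, and note that a direct-path (APPEND) insert cannot commit mid-statement, so batching means giving up the single APPEND insert.

```python
import sqlite3

BATCH = 10000  # hypothetical batch size; tune to your undo capacity

# Stand-in source with 25,000 rows
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE source_table (a, b)")
src.executemany("INSERT INTO source_table VALUES (?, ?)",
                [(i, i * 2) for i in range(25000)])

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE table1 (a, b)")

# Stream the source in batches and commit after each one, so a failure
# loses at most one batch of work instead of a whole 3.5-hour run.
cur = src.execute("SELECT a, b FROM source_table")
inserted = 0
while True:
    rows = cur.fetchmany(BATCH)
    if not rows:
        break
    dst.executemany("INSERT INTO table1 VALUES (?, ?)", rows)
    dst.commit()
    inserted += len(rows)

print(inserted)  # 25000
```

In PL/SQL the same shape is FETCH c BULK COLLECT INTO coll LIMIT n; FORALL i IN 1..coll.COUNT INSERT; COMMIT; inside a loop. Don't make the batch too small: each commit has a cost (log file sync), so more frequent commits slow the load down.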
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter to the SIDs of my current session and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in the equivalent of other forums' code tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
What is the best way to replace inline views for better performance?
Hi,
I am using Oracle 9i ,
What is the best way to replace inline views for better performance? I see a lot of performance lacking with inline views in my queries.
Please suggest.
Raj
The WITH clause plus the /*+ MATERIALIZE */ hint can do you good.
see below the test case.
SQL> create table hx_my_tbl as select level id, 'karthick' name from dual connect by level <= 5
2 /
Table created.
SQL> insert into hx_my_tbl select level id, 'vimal' name from dual connect by level <= 5
2 /
5 rows created.
SQL> create index hx_my_tbl_idx on hx_my_tbl(id)
2 /
Index created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user,'hx_my_tbl',cascade=>true)
PL/SQL procedure successfully completed.
Now this is a normal inline view:
SQL> select a.id, b.id, a.name, b.name
2 from (select id, name from hx_my_tbl where id = 1) a,
3 (select id, name from hx_my_tbl where id = 1) b
4 where a.id = b.id
5 and a.name <> b.name
6 /
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=2 Bytes=48)
1 0 HASH JOIN (Cost=7 Card=2 Bytes=48)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
3 2 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
5 4 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
Now I use the WITH clause with the MATERIALIZE hint:
SQL> with my_view as (select /*+ MATERIALIZE */ id, name from hx_my_tbl where id = 1)
2 select a.id, b.id, a.name, b.name
3 from my_view a,
4 my_view b
5 where a.id = b.id
6 and a.name <> b.name
7 /
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8 Card=1 Bytes=46)
1 0 TEMP TABLE TRANSFORMATION
2 1 LOAD AS SELECT
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
4 3 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
5 1 HASH JOIN (Cost=5 Card=1 Bytes=46)
6 5 VIEW (Cost=2 Card=2 Bytes=46)
7 6 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
8 5 VIEW (Cost=2 Card=2 Bytes=46)
9 8 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
Here you can see the base table is accessed only once; after that, only the result set generated by the WITH clause is accessed.
Thanks,
Karthick. -
Difference between temp tables and table variables, and which one performs better?
Hello,
Could anyone explain what the difference is between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
Which one is recommended for better performance?
Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried using both a # temp table and a table variable and found the table variable faster.
Does a table variable use memory or disk space?
Thanks, Shiven :) If the answer is helpful, please vote.
Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records then table variables are well suited.
On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
But it also depends upon the specific scenarios you are dealing with; can you share them?
~manoj | email: http://scr.im/m22g
http://sqlwithmanoj.wordpress.com
MCCA 2011 | My FB Page -
Database feature Derived Tables and performance
We recently migrated our data warehouse from DB2 to Oracle Exadata. Since the migration, I have noticed that some of the reports have become extremely slow; e.g. a report that ran in 7 secs before now runs for over 6 minutes. In the database features tab of the physical layer for the database connection, I have the Derived Tables feature turned on. If I turn this setting off, the same report runs in 3 secs. Derived Tables is supposed to help report performance, but in this case it seems to be hurting rather than helping.
Should this setting be turned off? What are the side effects of turning it off? We cannot do full testing after changing this setting, so I want to reach out to someone who has run into similar issues and hear what they did to remedy them.
Any help will be appreciated.
Thanks!
So the answer is "yes", but not quite in the way you might expect.
You have created the object where you can "borrow" the LOV for your derived table prompt. What you need to do is this. First, you need to create another object in your universe (I put it in the same class as the "code" object) that contains your object description. Then do this:
1. Double-click on the code object
2. Select the properties tab
3. Click on the Edit button. This does not edit the object definition itself, it edits the LOV definition.
4. On the query panel, add the Description object created earlier. Make sure it is the second object in the query panel.
5. You can opt to sort either by code or description, whichever makes sense to your users
6. Click "OK" to save the query definition, or click "Run" if you want to populate the LOV with values.
7. Make sure you click "Export with Universe" on the properties tab once you have customized the LOV, else your computer is the only one that will include the new LOV definition
8. The Hierarchical Display box may also be checked; for this case you have a code + description which are not hierarchical, so clear that box
That's it. When you export your universe, the LOV will go with it. When someone asks for a list of values for the code, the list will show both codes and descriptions, but only the code will be selected.
You do not need to make any changes to your current derived table prompt once the LOV has been customized. -
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 16001] ODBC error state: 37000 code: 8180 message: [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared.. [nQSError: 16001] ODBC error state: 37000 code: 1033 message: [Microsoft][ODBC SQL Server Driver][SQL Server]The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified.. [nQSError: 16002] Cannot obtain number of columns for the query result. (HY000)
I have already tried to follow this thread below, but no change.
HI
In this specific report ran above the offending sql uses a CTE.
I am on 11.1.1.6.1, Windows Server 2008
Currently in testing phase in migration from 10g.
I know what the error means, just not how to resolve it or what setting may be causing it.
In the Physical layer, go to the specific database's properties, Database Features tab, and uncheck EXPRESSION_IN_ORDERBY_SUPPORTED.
For the failed report, go to Answers -> Advanced tab -> look for 'Advanced SQL Clauses'.
Check the box to issue an explicit SELECT DISTINCT.
Let me know updates.
If this helps, please mark correct or helpful.
Why are HANA views giving worse performance in a dashboard than tables?
Dear Experts ,
I have created a dashboard on top of HANA view.
Scenario 1
The HANA view takes 3 seconds to fetch data in HANA studio.
When we create a dashboard on that view, the dashboard takes around 50 sec.
The report has around 10 queries.
Scenario 2
Then we insert data into a table from the same view above.
The HANA table takes 10 ms to fetch data in HANA studio.
When we create the same dashboard on that table, the report takes around 4 seconds.
The report has around 10 queries.
You may point a finger at the dashboard tool, but merely converting the view to a table gives the dashboard good performance.
Please suggest: should we use a table or a view for the reporting tool?
Only performance matters to users, and that's why we have taken HANA.
Thanks
Anupam.
"When you use an attribute view in a calculation view, HANA will first build the attribute view, process all the data from memory in the attribute view, and only then proceed with the calculation view."
That's actually completely false.
Attribute views are not materialized in full every time an analytic view is queried. On the contrary, only the joins/materialization required to fulfill the query are performed. I have rolled out analytic views with 50+ attribute views joined to them; if every one were built every time the AV was used, that would be a disaster.
An attribute view is fully executed if you query it directly with SQL, but the behavior changes when it is used in an AV.
Quick observation in good faith of my statement.
Query an analytic view with one column from a joined attribute view (Material - MARA)
SELECT "PAPH1", SUM("VVREV")
FROM "_SYS_BIC"."Main.Analytics.CO/AN_COPA"
WHERE "PERIO_T" = '2013010'
GROUP BY "PAPH1"
Only one join is executed to the fact table
Query an analytic view with one column from a joined attribute view (Material - MARA), plus another column from a joined table (T179T) inside the attribute view (snowflaked dimension).
SELECT "PAPH1", "VTEXT_1", SUM("VVREV")
FROM "_SYS_BIC"."Main.Analytics.CO/AN_COPA"
WHERE "PERIO_T" = '2013010'
GROUP BY "PAPH1", "VTEXT_1"
Only two joins are executed; one to the snowflaked dimension and one to the fact table
So even though I am issuing a query against an AV with a very large number of dimensions, only what is needed is actually executed.
Happy HANA,
Justin -
How to use an LOV on a derived table dependent on a prompt in a Web Intelligence view
The semantic layer has not created a defined LOV for the derived table. This only happens when pulling columns from a derived table which depends on a prompt for unique selection. Pulling from a derived table that does not depend on a date prompt does not cause an issue in Webi.
I don't know if this is a universe design issue or a WEBI one.
Webi Rich Internet Application selects the date prompt first, but if you click on Storeno in the list it returns an LOV with auto refresh.
Webi Web Intelligence returns "List of Values for current prompt requires values for following prompts:" and when you click to refresh, Webi responds with 'undefined'.
Hi,
Can you check that the prompt Store No is created at the universe level and the syntax is defined correctly?
@Prompt(1,2,3,4,5) inserts a prompt in the SQL. When the user runs the query, the user is prompted to enter a value on each refresh.
1 = 'Prompt Text Message'
2 = 'Prompt Type' (i.e. A,N,D,U)
where A= Character, N= Number, D= Date, U= Unit.
3 = 'Class Name/ Object Name'
4 = 'Multi/ Mono'
Multi means Multiple ( Example = 2004,2005,2011)
Mono means Single ( Example = 2012)
5 = Free/Constrain (Type value or select value in LOV/ Select Value)
Example for creating a @prompt filter is below
@Prompt('enter year','A','time period/year','Multi',Free)
Pointer to a "List of Values" from an existing universe object: you invoke the target list of values by double-clicking on the object containing the list of values that you want to use in the "Classes and Objects" panel. This gives the class name and the object name, separated by a backslash, and it must be enclosed in single quotes, for example: 'Client\Country'. When you are using "Index Awareness" and you want to return the key values for an object, set the 5th value to primary_key.
• Hard-coded list of single values or value pairs: the values in a pair are separated by a colon, each value is enclosed in single quotes, the pairs of values are separated by a comma, and the whole list is enclosed in curly brackets. Set the constraint to primary_key.
http://www.businessobjectstips.com/tips/web-intelligence/prompt-functions-the-next-step-optional-prompts/ -
Hello,
Please answer me:
What is a derived table, where is a derived table used, and what are the advantages and disadvantages of derived tables?
Hi,
Derived tables are nothing else but InLine views (with the one additional benefit of being able to use @prompt syntax in a derived table) and as such do not contain any data, everything is calculated on the fly during query execution, meaning: whenever you refresh a report.
Derived tables are tables that you define in the universe schema. You create objects on them as you do with any other table. A derived table is defined by an SQL query at the universe level that can be used as a logical table in Designer.
Derived tables have the following advantages:
• Reduced amount of data returned to the document for analysis. You can include complex calculations and functions in a derived table. These operations are performed before the result set is returned to a document, which saves time and reduces the need for complex analysis of large amounts of data at the report level.
• Reduced maintenance of database summary tables. Derived tables can, in some cases, replace statistical tables that hold results for complex calculations that are incorporated into the universe using aggregate awareness. These aggregate tables are costly to maintain and refresh frequently. Derived tables can return the same data and provide real-time data analysis.
Derived tables are similar to database views, with the advantage that the SQL for a derived table can include BusinessObjects prompts.
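The "reduced amount of data returned" advantage can be illustrated with a small sketch in Python/sqlite3 (table and column names are made up for illustration): the derived table performs the aggregation in the database, so only the summarized rows travel back to the document.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product, amount)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("widget", 10), ("widget", 15), ("gadget", 7), ("gadget", 3)])

# Without a derived table, the report layer would pull all detail rows
# and have to do the calculation itself
detail_rows = conn.execute("SELECT product, amount FROM sales").fetchall()

# The derived table computes the totals inside the database, so only the
# summarized result set is returned to the document
summary_rows = conn.execute("""
    SELECT product, total
    FROM (SELECT product, SUM(amount) AS total
          FROM sales
          GROUP BY product) agg
    ORDER BY product
""").fetchall()

print(len(detail_rows))  # 4
print(summary_rows)      # [('gadget', 10), ('widget', 25)]
```

With millions of detail rows the difference in transferred data, and in report-level processing, becomes the dominant cost.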
Thanks,
Amit -
Insert performance issue with Partitioned Table.....
Hi All,
I have a performance issue with an insert into a table which is partitioned. Without the table being partitioned
the insert ran in less time, but after partitioning it took more than double.
1) The table was created initially without any partitioning and the below insert took only 27 minutes.
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:27:35.20
2) Now I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
Is there any way I can achieve better performance during insert on this partitioned table?
[Similarly, I have another table with 50 million records; the insert took 10 hrs without partitioning,
and with partitioning the table, it took 18 hours...]
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4195045590
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
|* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
| 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
|* 3 | HASH JOIN | | | | | | |
| 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
| 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
Predicate Information (identified by operation id):
1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
"A"."VENDOR_CD"="B"."COMPANY_NO")
3 - access(ROWID=ROWID)
Open C1;
Loop
Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
Forall I In 1..C_Rectype.Count
Insert Into test
(col1, col2, col3)
Values
(val1, val2, val3);  -- placeholders: a FORALL insert would reference C_Rectype(I)
V_Rec := V_Rec + Nvl(C_Rectype.Count,0);
Commit;
Exit When C_Rectype.Count = 0;
C_Rectype.delete;
End Loop;
End;
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:51:01.22
Edited by: user520824 on Jul 16, 2010 9:16 AM
I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
I don't think there is a nologging hint :)
So, try something like
insert /*+ hints */ into ...
Select
A.Ing_Acct_Nbr, currency_Symbol,
Balance_Date, Company_No,
Substr(Account_No,1,8) Account_No,
Substr(Account_No,9,1) Typ_Cd ,
Substr(Account_No,10,1) Chk_Cd,
Td_Balance, Sd_Balance,
Sysdate, 'Sisadmin'
From Ideaal_Cons.Tb_Account_Master_Base A,
Ideaal_Staging.Tb_Sisadmin_Balance B
Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
And A.Vendor_Cd = b.company_no
;
Edited by: riedelme on Jul 16, 2010 7:42 AM -
Slow performance of query on a large table: how to optimize for performance
Hi Friends,
I am an Oracle DBA and just recently I have been asked to administer an Oracle HRMS database as a substitute for the HRMS DBA, who has gone on vacation.
It has been reported to me that a few queries are taking a long time to refresh and populate the forms. After some investigation it was found that the table HR.PAY_ELEMENT_ENTRY_VALUES_F has more than 15 million rows in it. The storage parameters specified for the table are Oracle defaults. The table has grown very big, and even a COUNT(*) takes more than 7 mins to respond.
My question is: is there any way it can be tuned for better performance without an overhaul? Is it normal for this table to grow this big for 6000 employees' data over 4 years?
Any response/help in this regard will be appreciated. You may please answer me at [email protected]
Thanks in Advance.
Rajeev.
That was a good suggestion by Karthick_Arp, but there is a chance that it is not logically identical, depending on the data (I believe that is the reason for his warning).
Try this rewrite, which moves T6 to an inline view and uses the DECODE function to determine if the one row returned from T6 should be used:
SELECT
ASSOC.NAME_FIRST || ' ' || ASSOC.NAME_LAST AS CLIENT_MANAGER
FROM
T1 ASSOC,
T2 CE,
T3 AA,
T4 ACT,
T5 CC,
(SELECT
CA.ASSOC_ID
FROM
T6 CA
WHERE
CA.COMP_ID = :P_ENT_ID
AND CA.CD_CODE IN ('CMG','RCM','BCM','CCM','BAE')
GROUP BY
CA.ASSOC_ID) CA
WHERE
CE.ENT_ID = ACT.PRIMARY_ENT_ID(+)
AND CE.ENT_ID = :P_ENT_ID
AND ASSOC.ID = DECODE(AA.ASSOC_ID, NULL, CA.ASSOC_ID, AA.ASSOC_ID)
AND NVL(ACT.ACTIVITY_ID, 0) = NVL(AA.ACTIVITY_ID, 0)
AND ASSOC.BK_CODE = CC.CPY_NO
AND ASSOC.CENTER = CC.CCT_NO
AND AA.ROLE_CODE IN ('CMG', 'RCM', 'BCM', 'CCM', 'BAE');
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
@aggregate aware or Derived Tables ..
Hello,
The original data in my universe has about 2 million rows. I have 2 questions:
Is there any reference value for the best performance of a universe (e.g. 1 million rows, whatever)?
How can I get the best performance: @aggregate aware or with derived tables (or other alternatives)?
Production DB: Oracle
BOXIR2 SP 4
Hi,
You can get better performance with aggregate awareness, but you need the aggregate tables in the db.
As a brief overview consider a table prod_cust_order (1million rows)
Product, Customer, Order Amount
This could be aggregated to a product_order table with Product, Order Amount (500 rows).
If the user wants to see order amount by customer then they are going to have to query the original table, but if they are interested in product only then they can go to the product_order table.
Aggregate awareness allows you to create a measure object with both tables as the source; it will then select the appropriate one when building the query, based on the dimensions you have selected. So if the user selects customer then it will go to the top prod_cust_order table, but if they don't it will source the measure from the much smaller product_order table. The user does not see the change in WebI unless they look at the underlying SQL.
Derived tables are not so good for performance, as the entire derived query is included in the SQL built.
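The routing decision described above can be sketched in a few lines of Python/sqlite3. This is only an illustration of the idea, not the actual @Aggregate_Aware mechanism: the function picks the smallest table whose columns cover the requested dimensions, and both tables return the same totals.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prod_cust_order (product, customer, amount);  -- detail (large)
    CREATE TABLE product_order (product, amount);              -- aggregate (small)
""")
conn.executemany("INSERT INTO prod_cust_order VALUES (?, ?, ?)",
                 [("p1", "c1", 5), ("p1", "c2", 7), ("p2", "c1", 3)])
conn.execute("""INSERT INTO product_order
                SELECT product, SUM(amount) FROM prod_cust_order GROUP BY product""")

def build_query(dimensions):
    """Pick the smallest table whose columns cover the requested dimensions,
    mimicking the choice @Aggregate_Aware makes when generating SQL."""
    agg_dims = {"product"}  # dimensions available in the aggregate table
    if set(dimensions) <= agg_dims:
        table = "product_order"       # aggregate table suffices
    else:
        table = "prod_cust_order"     # must fall back to the detail table
    dims = ", ".join(dimensions)
    return f"SELECT {dims}, SUM(amount) FROM {table} GROUP BY {dims}"

print(build_query(["product"]))              # routed to product_order
print(build_query(["product", "customer"]))  # routed to prod_cust_order
```

Either route gives the same product totals; the user only sees the difference if they inspect the generated SQL, exactly as described above.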
Regards
Alan -
Performance improvement for af:table
My page consists of a table and a button. The button displays a popup containing several tabs with trees inside them that allow the user to filter the data. Clicking OK in the popup runs the query and refreshes the table. The table is configured as follows:
autoHeightRows="1000"
fetchSize="The number of rows returned by the query"
contentDelivery="immediate"
immediate="true"
value="call a method returning a List<MyLineBean> from managed bean"
One requirement is to display the table with no scrollbar.
The first issue is that a table with 1000 rows is slow to render, and it also makes the browser slow (Chrome in my case). The corresponding JS file is about 11 MB. I can understand that processing an 11 MB JS file can be slow, especially with the DOM creation.
The other issue I noticed is that the speed of displaying the popup depends on the size of the table. With a 1000-row table, I click on the button and the first server request happens after 3 s. The JS size is about 20 KB and network latency is low. Closing the popup with no processing is also slow (~2 s). Now if I do the same experiment with a table of 13 rows (180 KB of JS), the popup displays and closes instantaneously.
My priority is to improve the speed of displaying the popup. Is there any reason why this speed depends on the size of the table?
ADF 11gR1 + WebCenter Portal
Hi user,
Follow this link, GEBS | ADF View Object Performance Tunning Analysis, for better performance of the table.
Thank You. -
How do I use a derived table to dynamically choose a fact table?
How do I use the derived table functionality to dynamically choose a fact table?
I am using BO XI R2 querying against a Genesys data mart kept in Oracle 10g. The data mart contains aggregated fact tables at different levels (no_agg, hour, day, week, etc.). I would like to build my universe so that if the end user chooses a parameter to view reports at daily granularity, the daily fact table is used; if hourly granularity, the hourly fact table is used; etc.
I tried using dynamic SQL in Oracle syntax, but the Business Objects universe didn't like that type of coding.
The tables look something like this:
O_LOB1_NO_AGG o
inner join V_LOB1_NO_AGG v on o.object_id = v.object_id
inner join T_LOB1_NO_AGG t on v.timekey = t.timekey
Likewise, in the 'hour', 'day', 'week', etc. fact tables, the primary key to foreign key names and relationships are the same. And the columns in each O_, V_, T_ fact table are the same, or very similar (just aggregated at different levels of time).
I was thinking of going a different route and using aggregate awareness; but there are many lines of business (20+) and multiple time dimensions (7), and I believe aggregate awareness would require me to place all relevant tables in the universe as separate objects, which would create a large universe with multiple table objects and not be maintenance-friendly. I was also going to dynamically choose the line of business (LOB) in the derived tables, based on the end user choosing a parameter for LOB, but that is out of scope for my current question. That information sort of points you down the train of thought I am travelling. Thanks for any help you can provide!
You can create a derived table containing a union like the following:
select a,b,c from DailyFacts where (@prompt('View'....) = 'Daily' and (<rest of your where conditions here if necessary>)
union
(select a,b,c from MonthlyFacts where (@prompt('View'....) = 'Monthly' and (<rest of your where conditions here if necessary>))
union
(select a,b,c from YearlyFacts where (@prompt('View'....) = 'Yearly' and (<rest of your where conditions here if necessary>))
I assume that you are familiar with the @prompt syntax
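The union pattern above can be simulated outside BO with a bind parameter playing the role of @prompt('View'...). A minimal sketch in Python/sqlite3 (table names taken from the example; the :view parameter is the stand-in for the prompt): each branch is guarded by the prompt value, so only the chosen fact table contributes rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DailyFacts   (a, b, c);
    CREATE TABLE MonthlyFacts (a, b, c);
    INSERT INTO DailyFacts   VALUES (1, 'd', 10);
    INSERT INTO MonthlyFacts VALUES (1, 'm', 300);
""")

# Each branch of the union is disabled unless the prompt value matches,
# so the user's choice effectively selects which fact table is queried
sql = """
    SELECT a, b, c FROM DailyFacts   WHERE :view = 'Daily'
    UNION ALL
    SELECT a, b, c FROM MonthlyFacts WHERE :view = 'Monthly'
"""

daily = conn.execute(sql, {"view": "Daily"}).fetchall()
monthly = conn.execute(sql, {"view": "Monthly"}).fetchall()
print(daily)    # [(1, 'd', 10)]
print(monthly)  # [(1, 'm', 300)]
```

One caveat worth testing on the real database: whether the optimizer prunes the disabled branches or still touches every table, since a constant-false predicate should be cheap but is not guaranteed to be free.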
Regards,
Stratos -
Problem with WebIntelligence and Universe Designer Derived Table
Hi people, I have an issue with a report in WebIntelligence that I want to build. Here it goes:
I created a derived table that brings in every material, whether or not it has any movement. The thing is that when I build the report using other information, like Material Name for example, the report filters by the coincidence between materials in the derived table and the SAP standard table. I tried to modify the SQL query, but Oracle does not allow it.
So here are my questions:
1) Is there any way to do a left outer join in order to get every single material and not let WebIntelligence do inline views?
2) Do I have to modify the derived table and use the standard tables?
3) Can I work with a derived table that does not have any join with the standard tables?
Thanks in advance,
Reynaldo
If I understand you correctly, it sounds like you are getting an inner join where you want an outer join. You have several options:
1. You can do an outer join in the universe, or even embedded in your derived table (if that is what you are trying to do)
2. You can have a derived table that is not joined with any other tables in the Universe. But you will have to merge the dimensions in the Webi report, and then be sure to put the correct dimension(s) on the report in order to reflect the outer join you want.
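Option 1, embedding the outer join inside the derived table, can be sketched like this in Python/sqlite3 (the materials/movements tables are illustrative stand-ins for the SAP standard tables): materials without any movement survive the join instead of being silently dropped.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE materials (matnr, name);
    CREATE TABLE movements (matnr, qty);
    INSERT INTO materials VALUES ('M1', 'Bolt'), ('M2', 'Nut');
    INSERT INTO movements VALUES ('M1', 5);   -- M2 has no movement
""")

# The outer join lives inside the derived table, so the report layer
# sees every material; an inner join would drop M2 entirely
rows = conn.execute("""
    SELECT matnr, name, qty
    FROM (SELECT m.matnr, m.name, mv.qty
          FROM materials m
          LEFT OUTER JOIN movements mv ON mv.matnr = m.matnr) d
    ORDER BY matnr
""").fetchall()

print(rows)  # [('M1', 'Bolt', 5), ('M2', 'Nut', None)]
```

Because the join direction is fixed inside the derived table's SQL, WebIntelligence cannot turn it back into an inner join when it generates the report query.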
I hope that helps.