Migrate execution plan
I have two Oracle instances with nearly identical sets of objects, one for development and one for production use. I am about to deploy some upgraded functionality from development to production. I have spent hours optimizing application SQL statements with Enterprise Manager's SQL Tuner, and it has found several optimized execution plans that make a significant difference in performance. Is there a way to migrate these optimized execution plans from development to production when I deploy the rest of my application upgrade, or will I need to spend time running things through the SQL Tuner on production again?
Thanks for your help.
I think you can export the optimizer statistics (including histograms) for that schema or those tables (DBMS_STATS.EXPORT_SCHEMA_STATS or DBMS_STATS.EXPORT_TABLE_STATS) on development and import them on production.
http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref7903
Optimizing SQL Statements
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/part4.htm
Exporting and Importing Statistics
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41854
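A hedged sketch of that export/import flow (the schema name APPUSER and staging table MY_STATS are illustrative, not from the post):

```sql
-- On development: create a staging table and export the schema statistics into it
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APPUSER', stattab => 'MY_STATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'MY_STATS');
END;
/
-- Copy APPUSER.MY_STATS to production (e.g. with exp/imp or over a database link),
-- then on production:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'MY_STATS');
END;
/
```

Note that this carries over optimizer statistics, not the tuned plans themselves; profiles accepted through the SQL Tuning Advisor are separate objects.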
Similar Messages
-
Very different execution plan in Oracle 10g vs. 9i
Good afternoon,
A few days ago I migrated an Oracle database 9i to 10g.
Right now I have an exact copy of the 9i instance running on another machine.
I noticed that some queries now take much longer. For example, comparing the execution plan for one query on the Oracle 9i instance and on the Oracle 10g instance, I see that the cost in 10g is 94981765382, while in 9i it is 120106.
Any idea what might be happening?
Execution Plan Oracle 9: http://blog.davidlozanolucas.com/uploads/execution_plan_Oracle_9i.jpg
Execution Plan Oracle 10g: http://blog.davidlozanolucas.com/uploads/execution_plan_Oracle_10g.jpg
Edited by: david_lozano_lucas on Dec 14, 2011 4:54 PM
Sorry, here are the details.
For Oracle 9i:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 16M| 3675M| | 120K (0)| | | | | |
|* 1 | FILTER | | | | | | | | | | |
|* 2 | HASH JOIN | | 16M| 3675M| | 120K (0)| | | 03,09 | P->S | QC (RAND) |
| 3 | TABLE ACCESS FULL | ORG_LOC_DM | 2261 | 13566 | | 8 (0)| | | 03,01 | S->P | HASH |
|* 4 | HASH JOIN OUTER | | 16M| 3582M| 1023M| 120K (0)| | | 03,08 | P->P | HASH |
|* 5 | HASH JOIN OUTER | | 16M| 2783M| 1041M| 102K (0)| | | 03,08 | PCWP | |
|* 6 | HASH JOIN OUTER | | 16M| 2521M| 715M| 69515 (0)| | | 03,08 | PCWP | |
| 7 | PARTITION RANGE ALL | | | | | | 1 | 14 | 03,08 | PCWP | |
| 8 | TABLE ACCESS FULL | SURT_SKU_LD_DM | 16M| 1922M| | 60988 (0)| 1 | 14 | 03,04 | P->P | HASH |
| 9 | VIEW | | 909K| 33M| | | | | 03,05 | P->P | HASH |
| 10 | SORT GROUP BY | | 909K| 19M| 62M| 1553 (0)| | | 03,05 | PCWP | |
|* 11 | HASH JOIN | | 909K| 19M| | 164 (0)| | | 03,05 | PCWP | |
|* 12 | TABLE ACCESS FULL| ORG_LOC_DM | 1131 | 6786 | | 8 (0)| | | 03,00 | S->P | HASH |
|* 13 | TABLE ACCESS FULL| INV_MOVE_SKU_LCM_DM | 909K| 13M| | 154 (0)| 60 | 60 | 03,02 | P->P | HASH |
|* 14 | TABLE ACCESS FULL | SURT_SKU_LCM_DM | 16M| 260M| | 6457 (0)| 59 | 59 | 03,06 | P->P | HASH |
| 15 | VIEW | | 1795K| 89M| | | | | 03,07 | P->P | HASH |
| 16 | SORT GROUP BY | | 1795K| 63M| 233M| 4565 (0)| | | 03,07 | PCWP | |
|* 17 | TABLE ACCESS FULL | SLS_SKU_LCM_DM | 1795K| 63M| | 599 (0)| 60 | 60 | 03,03 | P->P | HASH |
| 18 | NESTED LOOPS | | 1 | 16 | | 7 (15)| | | | | |
| 19 | TABLE ACCESS FULL | MAINT_LOAD_DT | 1 | 8 | | 1 (0)| | | | | |
|* 20 | INDEX RANGE SCAN | DAY_IDNT_I1 | 1 | 8 | | 1 (0)| | | | | |
Predicate Information (identified by operation id):
1 - filter("SYS_ALIAS_1"."DAY_IDNT"= (SELECT /*+ */ :B1 FROM "RDW91_DM"."MAINT_LOAD_DT" "Y","RDW91_DM"."TIME_DAY_DM" "X" WHERE
"X"."DAY_DT"="Y"."CURR_LOAD_DT"))
2 - access("SYS_ALIAS_1"."LOC_KEY"="G"."LOC_KEY")
4 - access("SYS_ALIAS_1"."LOC_KEY"="I"."LOC_KEY"(+) AND "SYS_ALIAS_1"."SKU_KEY"="I"."SKU_KEY"(+))
5 - access("SYS_ALIAS_1"."LOC_KEY"="B"."LOC_KEY"(+) AND "SYS_ALIAS_1"."SKU_KEY"="B"."SKU_KEY"(+))
6 - access("SYS_ALIAS_1"."LOC_KEY"="J"."LOC_KEY"(+) AND "SYS_ALIAS_1"."SKU_KEY"="J"."SKU_KEY"(+))
11 - access("A"."LOC_KEY"="B"."LOC_KEY")
12 - filter("B"."LOC_TYPE_CDE"='W')
13 - filter("A"."CMTH_IDNT"=201112)
14 - filter("B"."CMTH_IDNT"(+)=201111)
17 - filter("SLS_SKU_LCM_DM"."CMTH_IDNT"=201112)
20 - access("X"."DAY_DT"="Y"."CURR_LOAD_DT")
For 10g:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 16M| 3685M| | 94G| | | | | |
| 1 | FILTER | | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10008 | 16M| 3685M| | 94G| | | Q1,08 | P->S | QC (RAND) |
| 4 | BUFFER SORT | | 16M| 3685M| | | | | Q1,08 | PCWP | |
| 5 | NESTED LOOPS OUTER | | 16M| 3685M| | 94G| | | Q1,08 | PCWP | |
| 6 | HASH JOIN | | 16M| 3424M| | 201K| | | Q1,08 | PCWP | |
| 7 | BUFFER SORT | | | | | | | | Q1,08 | PCWC | |
| 8 | PX RECEIVE | | 2261 | 13566 | | 8 | | | Q1,08 | PCWP | |
| 9 | PX SEND HASH | :TQ10001 | 2261 | 13566 | | 8 | | | | S->P | HASH |
| 10 | TABLE ACCESS FULL | ORG_LOC_DM | 2261 | 13566 | | 8 | | | | | |
| 11 | PX RECEIVE | | 16M| 3332M| | 201K| | | Q1,08 | PCWP | |
| 12 | PX SEND HASH | :TQ10007 | 16M| 3332M| | 201K| | | Q1,07 | P->P | HASH |
| 13 | HASH JOIN OUTER BUFFERED | | 16M| 3332M| 982M| 201K| | | Q1,07 | PCWP | |
| 14 | HASH JOIN OUTER | | 16M| 2733M| 735M| 127K| | | Q1,07 | PCWP | |
| 15 | PX RECEIVE | | 16M| 1934M| | 75785 | | | Q1,07 | PCWP | |
| 16 | PX SEND HASH | :TQ10004 | 16M| 1934M| | 75785 | | | Q1,04 | P->P | HASH |
| 17 | PX BLOCK ITERATOR | | 16M| 1934M| | 75785 | 1 | 14 | Q1,04 | PCWC | |
| 18 | TABLE ACCESS FULL | SURT_SKU_LD_DM | 16M| 1934M| | 75785 | 1 | 14 | Q1,04 | PCWP | |
| 19 | PX RECEIVE | | 1450K| 71M| | 3892 | | | Q1,07 | PCWP | |
| 20 | PX SEND HASH | :TQ10005 | 1450K| 71M| | 3892 | | | Q1,05 | P->P | HASH |
| 21 | VIEW | | 1450K| 71M| | 3892 | | | Q1,05 | PCWP | |
| 22 | SORT GROUP BY | | 1450K| 53M| 188M| 3892 | | | Q1,05 | PCWP | |
| 23 | PX RECEIVE | | 1450K| 53M| | 562 | | | Q1,05 | PCWP | |
| 24 | PX SEND HASH | :TQ10002 | 1450K| 53M| | 562 | | | Q1,02 | P->P | HASH |
| 25 | PX BLOCK ITERATOR | | 1450K| 53M| | 562 | 60 | 60 | Q1,02 | PCWC | |
| 26 | MAT_VIEW ACCESS FULL| SLS_SKU_LCM_DM | 1450K| 53M| | 562 | 60 | 60 | Q1,02 | PCWP | |
| 27 | PX RECEIVE | | 652K| 24M| | 1150 | | | Q1,07 | PCWP | |
| 28 | PX SEND HASH | :TQ10006 | 652K| 24M| | 1150 | | | Q1,06 | P->P | HASH |
| 29 | VIEW | | 652K| 24M| | 1150 | | | Q1,06 | PCWP | |
| 30 | SORT GROUP BY | | 652K| 14M| | 1150 | | | Q1,06 | PCWP | |
| 31 | HASH JOIN | | 652K| 14M| | 128 | | | Q1,06 | PCWP | |
| 32 | BUFFER SORT | | | | | | | | Q1,06 | PCWC | |
| 33 | PX RECEIVE | | 1131 | 6786 | | 8 | | | Q1,06 | PCWP | |
| 34 | PX SEND HASH | :TQ10000 | 1131 | 6786 | | 8 | | | | S->P | HASH |
| 35 | TABLE ACCESS FULL | ORG_LOC_DM | 1131 | 6786 | | 8 | | | | | |
| 36 | PX RECEIVE | | 652K| 10M| | 118 | | | Q1,06 | PCWP | |
| 37 | PX SEND HASH | :TQ10003 | 652K| 10M| | 118 | | | Q1,03 | P->P | HASH |
| 38 | PX BLOCK ITERATOR | | 652K| 10M| | 118 | 60 | 60 | Q1,03 | PCWC | |
| 39 | MAT_VIEW ACCESS FULL| INV_MOVE_SKU_LCM_DM | 652K| 10M| | 118 | 60 | 60 | Q1,03 | PCWP | |
| 40 | PARTITION RANGE SINGLE | | 1 | 17 | | 8609 | 59 | 59 | Q1,08 | PCWP | |
| 41 | TABLE ACCESS FULL | SURT_SKU_LCM_DM | 1 | 17 | | 8609 | 59 | 59 | Q1,08 | PCWP | |
| 42 | NESTED LOOPS | | 1 | 16 | | 6 | | | | | |
| 43 | TABLE ACCESS FULL | MAINT_LOAD_DT | 1 | 8 | | 1 | | | | | |
| 44 | INDEX RANGE SCAN | DAY_IDNT_I1 | 1 | 8 | | 5 | | | | | |
Note
- 'PLAN_TABLE' is old version
- cpu costing is off (consider enabling it) -
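Both notes bear on the inflated 10g cost figure: the plan was costed without CPU costing, and it was formatted through a pre-10g PLAN_TABLE. A hedged sketch of addressing both (requires suitable privileges; the idea is to recreate PLAN_TABLE from the shipped script and gather workload system statistics):

```sql
-- Recreate PLAN_TABLE with the current release's definition
DROP TABLE plan_table;
@?/rdbms/admin/utlxplan.sql

-- Gather workload system statistics so the optimizer uses CPU costing
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('start');
-- ... let a representative workload run for a while ...
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('stop');
```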
Could execution plan be influenced when CPU_COUNT changed?
Dear guru,
I have a question about the optimizer.
I migrated a DB server from an IBM p5 570 1.6GHz 16-core to a p6 570 5.0GHz 8-core. Only the server was changed; the storage was attached to the new server without any modification.
In such a case, could the execution plan be influenced? If the query plan changes, I was wondering what factors influence the optimizer.
Hi Aden,
You can refer to the link Optimizer
-
Error while building execution plan
Hi, I'm trying to create an execution plan with container EBS 11.5.10 and subject area Project Analytics.
I get this error while building:
PA-EBS11510
MESSAGE:::group TASK_GROUP_Load_PositionHierarchy for SIL_PositionDimensionHierarchy_PostChangeTmp is not found!!!
EXCEPTION CLASS::: java.lang.NullPointerException
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:818)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Sorry for my English, I'm French.
Thank you for helping me.
Hi,
Find all the subject areas in the execution plan that contain the task 'SIL_PositionDimensionHierarchy_PostChangeTmp', and add the 'TASK_GROUP_Load_PositionHierarchy' task group to those subject areas.
Then reassemble your subject areas and build the execution plan again.
Thanks -
Force statement to use a given rule or execution plan
Hi!
We have a statement that in our production system takes 6-7 seconds to complete. The statement comes from our enterprise application's core code and we are not able to change the statement.
When using a RULE-hint (SELECT /*+RULE*/ 0 pay_rec...........) for this statement, the execution time is down to 500 milliseconds.
My question is: is there any way to pin an execution plan to a given statement? I have started reading about outlines, which seem promising. However, the statement does not use bind variables, and since this is core code in an enterprise application I cannot change that either. Is it possible to use outlines with such a statement?
Additional information:
When I remove all statistics for the involved tables, the query blows away in 500 ms.
The table tran_info_types has 61 rows and is a stable table with few updates
The table ab_tran_info has 1 717 439 records and is 62 MB in size.
The table query_result has 777 015 records and is 216 MB in size. This table is constantly updated/inserted/deleted.
The query below returns 0 records as there are no hits in the table query_result.
This is the statement:
SELECT /*+ALL_ROWS*/
0 pay_rec, abi.tran_num, abi.type_id, abi.VALUE
FROM ab_tran_info abi,
tran_info_types ti,
query_result qr1,
query_result qr2
WHERE abi.tran_num = qr1.query_result
AND abi.type_id = qr2.query_result
AND abi.type_id = ti.type_id
AND ti.ins_or_tran = 0
AND qr1.unique_id = 5334549
AND qr2.unique_id = 5334550
UNION ALL
SELECT 1 pay_rec, abi.tran_num, abi.type_id, abi.VALUE
FROM ab_tran_info abi,
tran_info_types ti,
query_result qr1,
query_result qr2
WHERE abi.tran_num = qr1.query_result
AND abi.type_id = qr2.query_result
AND abi.type_id = ti.type_id
AND ti.ins_or_tran = 0
AND qr1.unique_id = 5334551
AND qr2.unique_id = 5334552;
Here is the explain plan with statistics:
Plan
SELECT STATEMENT HINT: ALL_ROWSCost: 900 Bytes: 82 Cardinality: 2
15 UNION-ALL
7 NESTED LOOPS Cost: 450 Bytes: 41 Cardinality: 1
5 NESTED LOOPS Cost: 449 Bytes: 1,787,940 Cardinality: 59,598
3 NESTED LOOPS Cost: 448 Bytes: 19,514,824 Cardinality: 1,027,096
1 INDEX RANGE SCAN UNIQUE TRADEDB.TIT_DANIEL_2 Search Columns: 1 Cost: 1 Bytes: 155 Cardinality: 31
2 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_7 Search Columns: 1 Cost: 48 Bytes: 471,450 Cardinality: 33,675
4 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
6 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
14 NESTED LOOPS Cost: 450 Bytes: 41 Cardinality: 1
12 NESTED LOOPS Cost: 449 Bytes: 1,787,940 Cardinality: 59,598
10 NESTED LOOPS Cost: 448 Bytes: 19,514,824 Cardinality: 1,027,096
8 INDEX RANGE SCAN UNIQUE TRADEDB.TIT_DANIEL_2 Search Columns: 1 Cost: 1 Bytes: 155 Cardinality: 31
9 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_7 Search Columns: 1 Cost: 48 Bytes: 471,450 Cardinality: 33,675
11 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
13 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
Here is the execution plan after I have removed all statistics (exec DBMS_STATS.DELETE_TABLE_STATS(.........,..........); )
Plan
SELECT STATEMENT HINT: ALL_ROWSCost: 12 Bytes: 3,728 Cardinality: 16
15 UNION-ALL
7 NESTED LOOPS Cost: 6 Bytes: 1,864 Cardinality: 8
5 NESTED LOOPS Cost: 6 Bytes: 45,540 Cardinality: 220
3 NESTED LOOPS Cost: 6 Bytes: 1,145,187 Cardinality: 6,327
1 TABLE ACCESS FULL TRADEDB.TRAN_INFO_TYPES Cost: 2 Bytes: 104 Cardinality: 4
2 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_6 Search Columns: 1 Cost: 1 Bytes: 239,785 Cardinality: 1,547
4 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
6 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
14 NESTED LOOPS Cost: 6 Bytes: 1,864 Cardinality: 8
12 NESTED LOOPS Cost: 6 Bytes: 45,540 Cardinality: 220
10 NESTED LOOPS Cost: 6 Bytes: 1,145,187 Cardinality: 6,327
8 TABLE ACCESS FULL TRADEDB.TRAN_INFO_TYPES Cost: 2 Bytes: 104 Cardinality: 4
9 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_6 Search Columns: 1 Cost: 1 Bytes: 239,785 Cardinality: 1,547
11 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
13 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
Our Oracle 9.2 database is set up with ALL_ROWS.
Outlines: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/outlines.htm#13091
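A hedged sketch of the outline capture flow described at that link (the category name payrec_fix is illustrative):

```sql
-- Capture outlines for statements executed in this session into a named category
ALTER SESSION SET create_stored_outlines = payrec_fix;
-- ... run the problem statement once with the /*+RULE*/ hint so its plan is stored ...
ALTER SESSION SET create_stored_outlines = FALSE;

-- Sessions that should pick up the stored plan:
ALTER SESSION SET use_stored_outlines = payrec_fix;
```

Because outlines match on the SQL text, and this statement uses changing literals, CURSOR_SHARING=FORCE (see the cursor-sharing link below) may be needed so repeated executions share one text.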
Cursor sharing: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3696883368520
Hi!
We are on Oracle 9iR2, running on 64-bit Linux.
We are going to upgrade to Oracle 10gR2 in some months. Oracle 11g is not an option for us as our application is not certified by our vendor to run on that version.
However, our performance problems are urgent, so we are looking for a solution before we upgrade; we cannot upgrade until we have done extensive testing, which takes 2-3 months.
We have more problem SQLs than the one shown in this post. I am using the above SQL as a sample because I think we can solve many other slow-running SQLs if we solve this one.
Is SQL Plan Management an option on Oracle 9i and/or Oracle 10g? -
Execution plan with Concatenation
Hi All,
Could anyone help in finding why concatenation is being used by the optimizer, and how I can avoid it?
Oracle Version : 10.2.0.4
select * from (
select distinct EntityType, EntityID, DateModified, DateCreated, IsDeleted
from ife.EntityIDs i
join (select orgid from equifaxnormalize.org_relationships where orgid is not null and related_orgid is not null
and ((Date_Modified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and Date_Modified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
OR (Date_Created >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and Date_Created < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
) r on (r.orgid = i.entityid)
where EntityType = 1
and ((DateModified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and DateModified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
OR (DateCreated >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and DateCreated < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
and (IsDeleted = 0)
and IsDistributable = 1
and EntityID >= 0
order by EntityID
--order by NLSSORT(EntityID,'NLS_SORT=BINARY')
)
where rownum <= 10;
Execution Plan
Plan hash value: 227906424
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 10 | 570 | 39 (6)| 00:00:01 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 56 | 3192 | 39 (6)| 00:00:01 | | |
|* 3 | SORT ORDER BY STOPKEY | | 56 | 3416 | 39 (6)| 00:00:01 | | |
| 4 | HASH UNIQUE | | 56 | 3416 | 38 (3)| 00:00:01 | | |
| 5 | CONCATENATION | | | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 29 | 1 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 27 | 1647 | 17 (0)| 00:00:01 | | |
| 8 | TABLE ACCESS BY GLOBAL INDEX ROWID| ENTITYIDS | 27 | 864 | 4 (0)| 00:00:01 | ROWID | ROWID |
|* 9 | INDEX RANGE SCAN | UX_TYPE_MOD_DIST_DEL_ENTITYID | 27 | | 2 (0)| 00:00:01 | | |
|* 10 | INDEX RANGE SCAN | IX_EFX_ORGRELATION_ORGID | 1 | | 1 (0)| 00:00:01 | | |
|* 11 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 29 | 1 (0)| 00:00:01 | | |
| 12 | NESTED LOOPS | | 29 | 1769 | 20 (0)| 00:00:01 | | |
| 13 | PARTITION RANGE ALL | | 29 | 928 | 5 (0)| 00:00:01 | 1 | 3 |
|* 14 | TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS | 29 | 928 | 5 (0)| 00:00:01 | 1 | 3 |
|* 15 | INDEX RANGE SCAN | IDX_ENTITYIDS_ETYPE_DC | 29 | | 4 (0)| 00:00:01 | 1 | 3 |
|* 16 | INDEX RANGE SCAN | IX_EFX_ORGRELATION_ORGID | 1 | | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter(ROWNUM<=10)
6 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
9 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00',
'syyyy-mm-dd hh24:mi:ss'))
filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
10 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
11 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
14 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
"I"."ENTITYID">=0)
15 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."DATECREATED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
16 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)ife.entityids table has been range - partitioned on data_provider column.
Is there any better way to rewrite this sql OR is there any way to eliminate concatenation ?
Thanks
We can't use data_provider in the given query. We need to pull data irrespective of data_provider, and it should be based on ENTITYID.
Yes, the table has only three partitions.
I am not sure the issue is due to concatenation, but we are in the process of creating the desired indexes, which should help this SQL.
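One way to test whether the CONCATENATION (OR-expansion) step is the problem is to suppress it with a NO_EXPAND hint and compare; a sketch against the posted query:

```sql
select * from (
  select /*+ NO_EXPAND */ distinct EntityType, EntityID, DateModified, DateCreated, IsDeleted
  from ife.EntityIDs i
  -- ... rest of the original query unchanged ...
)
where rownum <= 10;
```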
In development we have created a multicolumn index; below is the execution plan. In development it takes just 4-5 seconds to execute, but in production it takes more than 8-9 minutes.
Below is the execution plan from Dev which seems to perform fast:
Execution Plan
Plan hash value: 3121857971
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 57 | 353 (1)| 00:00:05 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 1 | 57 | 353 (1)| 00:00:05 | | |
|* 3 | SORT ORDER BY STOPKEY | | 1 | 58 | 353 (1)| 00:00:05 | | |
| 4 | HASH UNIQUE | | 1 | 58 | 352 (1)| 00:00:05 | | |
| 5 | CONCATENATION | | | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 26 | 3 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 1 | 58 | 170 (1)| 00:00:03 | | |
| 8 | PARTITION RANGE ALL | | 56 | 1792 | 16 (0)| 00:00:01 | 1 | 3 |
|* 9 | TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS | 56 | 1792 | 16 (0)| 00:00:01 | 1 | 3 |
|* 10 | INDEX RANGE SCAN | IDX_ENTITYIDS_ETYPE_DC | 56 | | 7 (0)| 00:00:01 | 1 | 3 |
|* 11 | INDEX RANGE SCAN | EFX_ORGID | 2 | | 2 (0)| 00:00:01 | | |
|* 12 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 26 | 3 (0)| 00:00:01 | | |
| 13 | NESTED LOOPS | | 1 | 58 | 181 (0)| 00:00:03 | | |
| 14 | PARTITION RANGE ALL | | 57 | 1824 | 10 (0)| 00:00:01 | 1 | 3 |
|* 15 | INDEX RANGE SCAN | UX_TYPE_MOD_DIST_DEL_ENTITYID | 57 | 1824 | 10 (0)| 00:00:01 | 1 | 3 |
|* 16 | INDEX RANGE SCAN | EFX_ORGID | 2 | | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter(ROWNUM<=10)
6 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
9 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
10 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
11 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
12 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
15 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<TO_DATE(' 2011-06-13 14:00:00',
'syyyy-mm-dd hh24:mi:ss'))
filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
"I"."ENTITYID">=0)
16 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
Thanks -
Query optimization - Query is taking a long time even though there is no table scan in the execution plan
Hi All,
The below query execution is taking a very long time even though all required indexes are present.
Also, there is no table scan in the execution plan. I did a lot of research but I am unable to find a solution.
Please help, this is required very urgently. Thanks in advance. :)
WITH cte
AS (
    SELECT Acc_ex1_3
    FROM Acc_ex1
    INNER JOIN Acc_ex5 ON (
        Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
        AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
        )
    WHERE (
        cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
        AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
        )
    )
SELECT DISTINCT R.ReportsTo AS directReportingUserId
    ,UC.UserName AS EmpName
    ,UC.EmployeeCode AS EmpCode
    ,UEx1.Use_ex1_1 AS PortfolioCode
    ,(
        SELECT TOP 1 TerritoryName
        FROM UserTerritoryLevelView
        WHERE displayOrder = 6
            AND UserId = R.ReportsTo
        ) AS BranchName
    ,GroupsNotContacted AS groupLastContact
    ,GroupCount AS groupTotal
FROM ReportingMembers R
INNER JOIN TeamMembers T ON (
        T.OwnerID = R.OwnerID
        AND T.MemberID = R.ReportsTo
        AND T.ReportsTo = 1
        )
INNER JOIN UserContact UC ON (
        UC.CompanyID = R.OwnerID
        AND UC.UserID = R.ReportsTo
        )
INNER JOIN Use_ex1 UEx1 ON (
        UEx1.OwnerId = R.OwnerID
        AND UEx1.Use_ex1_Id = R.ReportsTo
        )
INNER JOIN (
    SELECT Accounts.AssignedTo
        ,count(DISTINCT Acc_ex1_3) AS GroupCount
    FROM Accounts
    INNER JOIN Acc_ex1 ON (
            Accounts.AccountID = Acc_ex1.Acc_ex1_Id
            AND Acc_ex1.Acc_ex1_3 > '0'
            AND Accounts.OwnerID = 109
            )
    GROUP BY Accounts.AssignedTo
    ) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
INNER JOIN (
    SELECT Accounts.AssignedTo
        ,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
    FROM Accounts
    INNER JOIN Acc_ex1 ON (
            Accounts.AccountID = Acc_ex1.Acc_ex1_Id
            AND Acc_ex1.OwnerID = Accounts.OwnerID
            AND Acc_ex1.Acc_ex1_3 > '0'
            )
    INNER JOIN Acc_ex5 ON (
            Accounts.AccountID = Acc_ex5.Acc_ex5_Id
            AND Acc_ex5.OwnerID = Accounts.OwnerID
            )
    WHERE Accounts.OwnerID = 109
        AND Acc_ex1.Acc_ex1_3 NOT IN (
            SELECT Acc_ex1_3
            FROM cte
            )
    GROUP BY Accounts.AssignedTo
    ) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
WHERE R.OwnerID = 109
Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran
Hi All,
Thanks for the replies.
I have optimized that query to make it run in few seconds.
Here is my final query.
select ReportsTo as directReportingUserId,
UserName AS EmpName,
EmployeeCode AS EmpCode,
Use_ex1_1 AS PortfolioCode,
BranchName,
GroupInfo.groupTotal,
GroupInfo.groupLastContact,
case when exists
(select 1 from ReportingMembers RM
where RM.ReportsTo = UserInfo.ReportsTo
and RM.MemberID <> UserInfo.ReportsTo
) then 0 else UserInfo.ReportsTo end as memberid1,
(select code from Regions where ownerid=109 and name=UserInfo.BranchName) as BranchCode,
ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
FROM
(select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo )
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
union
select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
--INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
where R.MemberID = 1
) UserInfo
inner join
(
select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
from
(
select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
union
select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
where R.MemberID = 1
) GroupWiseInfo
group by directReportingUserId
) GroupInfo
on UserInfo.ReportsTo = GroupInfo.directReportingUserId
Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran -
Effect of RLS policy (VPD) on execution plan of a query
Hi
I have been working on tuning a few queries. An RLS policy is defined on most of the tables, which appends an extra where condition (something like AREA_CODE=1). I am not able to understand the effect of this extra where clause on the execution plan of the query. The execution plan makes no mention of the clause added by VPD. A 10046 trace does show the policy function being executed, but nothing after that.
Can someone shed some light on whether VPD has any effect on the execution plan of the query? Also, would it matter whether the column on which VPD is applied is indexed or not?
Regards,
Amardeep Sidhu
Amardeep Sidhu wrote:
I have been working on tuning of few queries. A RLS policy is defined on most of the tables which appends an extra where condition (something like AREA_CODE=1). I am not able to understand the effect of this extra where clause on the execution plan of the query. In the execution plan there is no mention of the clause added by VPD. In 10046 trace it does show the policy function being executed but nothing after that.
VPD is supposed to be invisible - which is why you get minimal information about security predicates in the standard trace file. However, if your query references a table with a security predicate, the table is effectively replaced by an inline view of the form "select * from original_table where {security_predicate}", and the result is then optimised. So the effect of the security predicate is just the same as if you had written the predicate into the query yourself.
Apart from using v$sql_plan to show the change in plan and the new predicates, you can see the effects of the predicates by setting event 10730 together with 10046. In current versions of Oracle this causes the substitute view to be printed in the trace file.
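A sketch of that trace combination (needs the ALTER SESSION privilege; level 12 is a common choice for 10046):

```sql
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
ALTER SESSION SET EVENTS '10730 trace name context forever';
-- ... run the query against the VPD-protected table ...
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SESSION SET EVENTS '10730 trace name context off';
```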
Bear in mind that security predicates can be very complex - including subqueries - so the effect isn't just that of including the selectivity of "another simple predicate".
Can someone shed some light on the issue that has VPD any effect on the execution plan of the query ? Also would it matter whether the column on which VPD is applied, was indexed or non-indexed ?
Think of the effect of changing the SQL by hand - and how you would need to optimise the resultant query. Sometimes you do need to modify your indexing to help the security predicates, sometimes it won't make enough difference to matter.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format. -
Execution plan with bind variable
dear all
I join two tables and get an "index fast full scan" with a high cost when using a bind variable,
but when I remove the bind variable it executes with an "index" access and a lower cost.
What is the reason and how should I know which execution plan is really used in real life?
thanks
john

1) What is your Oracle version?
2) Post both the queries and their explain plans here.

In fact, an INDEX FAST FULL SCAN indicates multiblock reads of a composite index (depending on your query and predicates). In this case the CBO costs it much like a FULL TABLE SCAN (affected by db_file_multiblock_read_count, system statistics, etc.). And you use a bind variable, so the CBO estimates the selectivity with a default guess (around 5% of the cardinality); if you use concrete values instead of a bind variable, the CBO derives the selectivity from statistics and histograms, and then costs the multiblock and single-block reads accordingly. You can see this with a 10053 trace. Finally, it chooses the lower-cost plan. -
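The 10053 (CBO) trace referred to above can be enabled for a single session; a sketch (the statement must be hard parsed for the trace to be written, so run a version the shared pool has not seen, e.g. with a unique comment):

```sql
-- Sketch: dump the optimizer's cost calculations for the next hard
-- parse into the session trace file, then disable the event.
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- run the statement to be traced here, once with the bind variable
-- and once with literal values, to compare the two costings

ALTER SESSION SET EVENTS '10053 trace name context off';
```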
SQL query with Bind variable with slower execution plan
I have a 'normal' sql select-insert statement (not using bind variable) and it yields the following execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
(Cost=2 Card=135 Bytes=6480)
Statistics
0 recursive calls
18 db block gets
15558 consistent gets
47 physical reads
9896 redo size
423 bytes sent via SQL*Net to client
1095 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
I have the same query but instead running using bind variable (I test it with both oracle form and SQL*plus), it takes considerably longer with a different execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
Statistics
0 recursive calls
12 db block gets
3003199 consistent gets
54 physical reads
9448 redo size
423 bytes sent via SQL*Net to client
1258 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with the bind variable? I have DBA access to the database.
Regards
Ivan

Many thanks for your reply.
I have already gathered statistics for both TABLEA and TABLEB, as well as all the indexes associated with both tables (using dbms_stats; I am on a 9i db), but not the indexed columns.
for table I use:-
begin
dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
end;
for index I use:-
begin
dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
end;
Could you show me a sample of how to collect statistics for index columns?
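Column-level (histogram) statistics are controlled by the method_opt parameter of DBMS_STATS.GATHER_TABLE_STATS; a sketch using the schema and table names from this thread (available on 9i):

```sql
BEGIN
  -- Gather table and column statistics in one call.
  -- 'FOR ALL INDEXED COLUMNS SIZE AUTO' builds histograms on indexed
  -- columns where Oracle judges them useful; cascade => TRUE also
  -- gathers statistics for the table's indexes.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    cascade    => TRUE
  );
END;
/
```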
regards
Ivan -
DAC Execution plan for financial analytics keeps running (OBIA 7.9.4)
Hi Gurus,
I have a problem when trying to run a full load of the financial analytics execution plan. I use OBIA 7.9.4, Informatica 8.6.1 and DAC 10.1.3.4.1.patch.20100901.0520, running on Windows Server 2008 Service Pack 2 with 6 GB of memory. I set $$INITIAL_EXTRACT_DATE='01/01/2011' so it should pull data newer than that date, which I believe is no more than 3 years of data and should take no more than 2 days to load. But the execution plan keeps running and has still not completed after more than 2 days. I have already checked the DAC execution plan log, but I could not find anything useful in it.
Here is the log file:
12 INFO Thu Jun 27 13:52:12 SGT 2013 DAC Version: Dac Build AN 10.1.3.4.1.patch.20100901.0520
13 INFO Thu Jun 27 13:52:12 SGT 2013 The System properties are: java.runtime.name Java(TM) SE Runtime Environment
sun.boot.library.path C:\orahome\10gR3_1\jdk\jre\bin
java.vm.version 10.0-b23
java.vm.vendor Sun Microsystems Inc.
java.vendor.url http://java.sun.com/
path.separator ;
ETL_EXECUTION_MODE ETL_SERVER
java.vm.name Java HotSpot(TM) Server VM
file.encoding.pkg sun.io
sun.java.launcher SUN_STANDARD
user.country US
sun.os.patch.level Service Pack 2
java.vm.specification.name Java Virtual Machine Specification
user.dir C:\orahome\10gR3_1\bifoundation\dac
java.runtime.version 1.6.0_07-b06
java.awt.graphicsenv sun.awt.Win32GraphicsEnvironment
java.endorsed.dirs C:\orahome\10gR3_1\jdk\jre\lib\endorsed
os.arch x86
java.io.tmpdir C:\Users\ADMINI~1\AppData\Local\Temp\2\
line.separator
java.vm.specification.vendor Sun Microsystems Inc.
user.variant
os.name Windows Server 2008
REPOSITORY_STAMP 4E8A758D24E553DBB74B325E89718660
sun.jnu.encoding Cp1252
java.library.path C:\orahome\10gR3_1\jdk\bin;.;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Informatica\PowerCenter8.6.1\server\bin;C:\OracleBI\server\Bin;C:\OracleBI\web\bin;C:\OracleBI\web\catalogmanager;C:\OracleBI\SQLAnywhere;C:\Program Files (x86)\Java\jdk1.5.0_22\bin;C:\app\oracle\product\11.2.0\dbhome_1\bin;C:\Program Files\HP\NCU;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Program Files\OmniBack\bin\;C:\Windows\System32\WindowsPowerShell\v1.0\
java.specification.name Java Platform API Specification
java.class.version 50.0
sun.management.compiler HotSpot Tiered Compilers
os.version 6.0
user.home C:\Users\Administrator
user.timezone Asia/Singapore
java.awt.printerjob sun.awt.windows.WPrinterJob
file.encoding Cp1252
java.specification.version 1.6
java.class.path .\lib\msbase.jar;.\lib\mssqlserver.jar;.\lib\msutil.jar;.\lib\sqljdbc.jar;.\lib\ojdbc6.jar;.\lib\ojdbc5.jar;.\lib\ojdbc14.jar;.\lib\db2java.zip;.\lib\terajdbc4.jar;.\lib\log4j.jar;.\lib\teradata.jar;.\lib\tdgssjava.jar;.\lib\tdgssconfig.jar;.\DAWSystem.jar;.;
user.name Administrator
java.vm.specification.version 1.0
java.home C:\orahome\10gR3_1\jdk\jre
sun.arch.data.model 32
user.language en
java.specification.vendor Sun Microsystems Inc.
awt.toolkit sun.awt.windows.WToolkit
java.vm.info mixed mode
java.version 1.6.0_07
java.ext.dirs C:\orahome\10gR3_1\jdk\jre\lib\ext;C:\Windows\Sun\Java\lib\ext
sun.boot.class.path C:\orahome\10gR3_1\jdk\jre\lib\resources.jar;C:\orahome\10gR3_1\jdk\jre\lib\rt.jar;C:\orahome\10gR3_1\jdk\jre\lib\sunrsasign.jar;C:\orahome\10gR3_1\jdk\jre\lib\jsse.jar;C:\orahome\10gR3_1\jdk\jre\lib\jce.jar;C:\orahome\10gR3_1\jdk\jre\lib\charsets.jar;C:\orahome\10gR3_1\jdk\jre\classes
java.vendor Sun Microsystems Inc.
file.separator \
java.vendor.url.bug http://java.sun.com/cgi-bin/bugreport.cgi
sun.io.unicode.encoding UnicodeLittle
sun.cpu.endian little
sun.desktop windows
sun.cpu.isalist pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
14 SEVERE Thu Jun 27 13:52:12 SGT 2013
START OF ETL
15 SEVERE Thu Jun 27 13:52:36 SGT 2013 Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.InformaticaTask
16 SEVERE Thu Jun 27 13:52:36 SGT 2013 Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.InformaticaTask
17 SEVERE Thu Jun 27 13:52:36 SGT 2013 Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.PauseTask
18 SEVERE Thu Jun 27 13:52:36 SGT 2013 Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.PauseTask
19 SEVERE Thu Jun 27 13:52:40 SGT 2013 Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.TaskPrecedingActionScriptTask
20 SEVERE Thu Jun 27 13:52:40 SGT 2013 Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.TaskPrecedingActionScriptTask
21 SEVERE Thu Jun 27 13:52:46 SGT 2013 Starting ETL Process.
22 SEVERE Thu Jun 27 13:52:47 SGT 2013 Informatica Status Poll Interval new value : 20000(milli-seconds)
24 SEVERE Thu Jun 27 13:52:52 SGT 2013 C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\FLAT FILE specified is not a currently existing directory
25 SEVERE Thu Jun 27 13:52:52 SGT 2013 Request to start workflow : 'SILOS:SIL_InsertRowInRunTable' has completed with error code 0
26 SEVERE Thu Jun 27 13:53:12 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
27 SEVERE Thu Jun 27 13:53:13 SGT 2013 Number of running sessions : 13
28 SEVERE Thu Jun 27 13:53:13 SGT 2013 Number of running sessions : 12
29 SEVERE Thu Jun 27 13:53:14 SGT 2013 Number of running sessions : 11
30 SEVERE Thu Jun 27 13:53:15 SGT 2013 Number of running sessions : 10
32 SEVERE Thu Jun 27 13:53:18 SGT 2013 C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\ORACLE specified is not a currently existing directory
33 SEVERE Thu Jun 27 13:53:18 SGT 2013 C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\Oracle specified is not a currently existing directory
34 SEVERE Thu Jun 27 13:53:18 SGT 2013 C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\oracle specified is not a currently existing directory
35 SEVERE Thu Jun 27 13:53:18 SGT 2013 C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\ORACLE (THIN) specified is not a currently existing directory
36 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SILOS:SIL_CurrencyTypes' has completed with error code 0
37 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSetHier_Extract_Full' has completed with error code 0
38 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_GLAccount_SegmentConfig_Extract' has completed with error code 0
39 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_LocalCurrency_Temporary' has completed with error code 0
40 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SILOS:SIL_Parameters_Update' has completed with error code 0
41 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_GLAccountDimension_FinSubCodes' has completed with error code 0
42 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSet_Extract' has completed with error code 0
43 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GLSegmentDimension_Full' has completed with error code 0
44 SEVERE Thu Jun 27 13:53:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_InternalOrganizationDimension_BalanceSegmentValue_LegalEntity' has completed with error code 0
45 SEVERE Thu Jun 27 13:53:39 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_CurrencyTypes.txt SIL_CurrencyTypes
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
46 SEVERE Thu Jun 27 13:53:39 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSetHier_Extract_Full.txt SDE_ORA_Stage_ValueSetHier_Extract_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
47 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_GLAccount_SegmentConfig_Extract.txt SDE_ORA_Stage_GLAccount_SegmentConfig_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
48 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_LocalCurrency_Temporary.txt SDE_ORA_LocalCurrency_Temporary
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
49 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SILOS.SIL_Parameters_Update.txt SIL_Parameters_Update
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
50 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_GLAccountDimension_FinSubCodes.txt SDE_ORA_Stage_GLAccountDimension_FinSubCodes
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
51 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSet_Extract.txt SDE_ORA_Stage_ValueSet_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
52 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GLSegmentDimension_Full.txt SDE_ORA_GLSegmentDimension_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
53 SEVERE Thu Jun 27 13:53:40 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_InternalOrganizationDimension_BalanceSegmentValue_LegalEntity.txt SDE_ORA_InternalOrganizationDimension_BalanceSegmentValue_LegalEntity
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
54 SEVERE Thu Jun 27 13:53:40 SGT 2013 Number of running sessions : 10
55 SEVERE Thu Jun 27 13:53:40 SGT 2013 Number of running sessions : 10
56 SEVERE Thu Jun 27 13:53:40 SGT 2013 Number of running sessions : 11
57 SEVERE Thu Jun 27 13:53:40 SGT 2013 Number of running sessions : 11
58 SEVERE Thu Jun 27 13:53:40 SGT 2013 Number of running sessions : 11
59 SEVERE Thu Jun 27 13:53:41 SGT 2013 Number of running sessions : 11
60 SEVERE Thu Jun 27 13:53:41 SGT 2013 Number of running sessions : 11
61 SEVERE Thu Jun 27 13:53:41 SGT 2013 Number of running sessions : 11
62 SEVERE Thu Jun 27 13:53:41 SGT 2013 Number of running sessions : 11
63 SEVERE Thu Jun 27 13:53:42 SGT 2013 Number of running sessions : 10
64 SEVERE Thu Jun 27 13:53:42 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GLBalanceFact_Full' has completed with error code 0
65 SEVERE Thu Jun 27 13:53:46 SGT 2013 Request to start workflow : 'SILOS:SIL_Stage_GroupAccountNumberDimension_FinStatementItem' has completed with error code 0
66 SEVERE Thu Jun 27 13:53:46 SGT 2013 Request to start workflow : 'SILOS:SIL_GlobalCurrencyGeneral_Update' has completed with error code 0
67 SEVERE Thu Jun 27 13:53:46 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_GroupAccountNumberDimension' has completed with error code 0
68 SEVERE Thu Jun 27 13:53:46 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_ARSubType_Extract' has completed with error code 0
69 SEVERE Thu Jun 27 13:53:46 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLRevenueTypeExtract' has completed with error code 0
70 SEVERE Thu Jun 27 13:53:46 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSetHier_Flatten' has completed with error code 0
71 SEVERE Thu Jun 27 13:53:47 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_ExchangeRateGeneral_Full' has completed with error code 0
72 SEVERE Thu Jun 27 13:53:47 SGT 2013 Request to start workflow : 'SILOS:SIL_ListOfValuesGeneral_Unspecified' has completed with error code 0
73 SEVERE Thu Jun 27 13:54:06 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_Stage_GroupAccountNumberDimension_FinStatementItem.txt SIL_Stage_GroupAccountNumberDimension_FinStatementItem
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
74 SEVERE Thu Jun 27 13:54:07 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SILOS.SIL_GlobalCurrencyGeneral_Update.txt SIL_GlobalCurrencyGeneral_Update
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
75 SEVERE Thu Jun 27 13:54:07 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_GroupAccountNumberDimension.txt SDE_ORA_Stage_GroupAccountNumberDimension
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
76 SEVERE Thu Jun 27 13:54:07 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLRevenueTypeExtract.txt SDE_ORA_Stage_TransactionTypeDimension_GLRevenueTypeExtract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
77 SEVERE Thu Jun 27 13:54:07 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_ARSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_ARSubType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
78 SEVERE Thu Jun 27 13:54:07 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\DataWarehouse.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSetHier_Flatten.txt SDE_ORA_Stage_ValueSetHier_Flatten
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
79 SEVERE Thu Jun 27 13:54:07 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_ExchangeRateGeneral_Full.txt SDE_ORA_ExchangeRateGeneral_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
80 SEVERE Thu Jun 27 13:54:07 SGT 2013 Number of running sessions : 11
81 SEVERE Thu Jun 27 13:54:07 SGT 2013 Number of running sessions : 10
82 SEVERE Thu Jun 27 13:54:07 SGT 2013 Number of running sessions : 10
83 SEVERE Thu Jun 27 13:54:07 SGT 2013 Number of running sessions : 10
84 SEVERE Thu Jun 27 13:54:08 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_ListOfValuesGeneral_Unspecified.txt SIL_ListOfValuesGeneral_Unspecified
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
85 SEVERE Thu Jun 27 13:54:08 SGT 2013 Number of running sessions : 10
86 SEVERE Thu Jun 27 13:54:08 SGT 2013 Number of running sessions : 10
87 SEVERE Thu Jun 27 13:54:08 SGT 2013 Number of running sessions : 10
88 SEVERE Thu Jun 27 13:54:09 SGT 2013 Number of running sessions : 10
89 SEVERE Thu Jun 27 13:54:12 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GLJournals_Full' has completed with error code 0
90 SEVERE Thu Jun 27 13:54:13 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTransactionFact_Payment_Full' has completed with error code 0
91 SEVERE Thu Jun 27 13:54:13 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_APType_Extract' has completed with error code 0
92 SEVERE Thu Jun 27 13:54:13 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTransactionFact_PaymentSchedule_Full' has completed with error code 0
93 SEVERE Thu Jun 27 13:54:13 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTransactionFact_Distributions_Full' has completed with error code 0
94 SEVERE Thu Jun 27 13:54:13 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLRevenueSubType_Extract' has completed with error code 0
95 SEVERE Thu Jun 27 13:54:13 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_APSubType_Extract' has completed with error code 0
96 SEVERE Thu Jun 27 13:54:14 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLCOGSSubType_Extract' has completed with error code 0
97 SEVERE Thu Jun 27 13:54:15 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSetHier_DeriveRange' has completed with error code 0
98 SEVERE Thu Jun 27 13:54:33 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTransactionFact_Payment_Full.txt SDE_ORA_APTransactionFact_Payment_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
99 SEVERE Thu Jun 27 13:54:33 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_APType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_APType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
100 SEVERE Thu Jun 27 13:54:33 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTransactionFact_PaymentSchedule_Full.txt SDE_ORA_APTransactionFact_PaymentSchedule_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
101 SEVERE Thu Jun 27 13:54:33 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTransactionFact_Distributions_Full.txt SDE_ORA_APTransactionFact_Distributions_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
102 SEVERE Thu Jun 27 13:54:34 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLRevenueSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_GLRevenueSubType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
103 SEVERE Thu Jun 27 13:54:34 SGT 2013 Number of running sessions : 12
104 SEVERE Thu Jun 27 13:54:34 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_APSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_APSubType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
105 SEVERE Thu Jun 27 13:54:34 SGT 2013 Number of running sessions : 11
106 SEVERE Thu Jun 27 13:54:34 SGT 2013 Number of running sessions : 10
107 SEVERE Thu Jun 27 13:54:34 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLCOGSSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_GLCOGSSubType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
108 SEVERE Thu Jun 27 13:54:34 SGT 2013 Number of running sessions : 11
109 SEVERE Thu Jun 27 13:54:34 SGT 2013 Number of running sessions : 10
110 SEVERE Thu Jun 27 13:54:34 SGT 2013 Number of running sessions : 10
111 SEVERE Thu Jun 27 13:54:35 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\DataWarehouse.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSetHier_DeriveRange.txt SDE_ORA_Stage_ValueSetHier_DeriveRange
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
112 SEVERE Thu Jun 27 13:54:36 SGT 2013 Number of running sessions : 10
113 SEVERE Thu Jun 27 13:54:36 SGT 2013 Number of running sessions : 9
114 SEVERE Thu Jun 27 13:54:36 SGT 2013 Number of running sessions : 8
115 SEVERE Thu Jun 27 13:54:40 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLCOGSType_Extract' has completed with error code 0
116 SEVERE Thu Jun 27 13:54:40 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_ExpenditureCategory' has completed with error code 0
117 SEVERE Thu Jun 27 13:54:40 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_InternalOrganizationDimensionHierarchy_HROrgsTemporary_LatestVersion' has completed with error code 0
118 SEVERE Thu Jun 27 13:54:40 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_ARType_Extract' has completed with error code 0
119 SEVERE Thu Jun 27 13:54:40 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_AP_LinkageInformation_Extract_Full' has completed with error code 0
120 SEVERE Thu Jun 27 13:54:41 SGT 2013 Request to start workflow : 'SILOS:SIL_TimeOfDayDimension' has completed with error code 0
121 SEVERE Thu Jun 27 13:55:00 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLCOGSType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_GLCOGSType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
122 SEVERE Thu Jun 27 13:55:00 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_TransactionTypeDimension_ExpenditureCategory.txt SDE_ORA_TransactionTypeDimension_ExpenditureCategory
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
123 SEVERE Thu Jun 27 13:55:00 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_ARType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_ARType_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
124 SEVERE Thu Jun 27 13:55:00 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_InternalOrganizationDimensionHierarchy_HROrgsTemporary_LatestVersion.txt SDE_ORA_InternalOrganizationDimensionHierarchy_HROrgsTemporary_LatestVersion
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
125 SEVERE Thu Jun 27 13:55:00 SGT 2013 Number of running sessions : 7
126 SEVERE Thu Jun 27 13:55:00 SGT 2013 Number of running sessions : 6
127 SEVERE Thu Jun 27 13:55:01 SGT 2013 Number of running sessions : 6
128 SEVERE Thu Jun 27 13:55:01 SGT 2013 Number of running sessions : 10
129 SEVERE Thu Jun 27 13:55:01 SGT 2013 Number of running sessions : 9
130 SEVERE Thu Jun 27 13:55:01 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_TimeOfDayDimension.txt SIL_TimeOfDayDimension
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
131 SEVERE Thu Jun 27 13:55:02 SGT 2013 Number of running sessions : 9
132 SEVERE Thu Jun 27 13:55:06 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_GLRevenueDerive' has completed with error code 0
133 SEVERE Thu Jun 27 13:55:06 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_APDerive' has completed with error code 0
134 SEVERE Thu Jun 27 13:55:06 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_GLOther' has completed with error code 0
135 SEVERE Thu Jun 27 13:55:06 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_ARSubType_ExtractApplication' has completed with error code 0
136 SEVERE Thu Jun 27 13:55:06 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_GLCOGSDerive' has completed with error code 0
137 SEVERE Thu Jun 27 13:55:08 SGT 2013 Request to start workflow : 'SILOS:SIL_HourOfDayDimension' has completed with error code 0
138 SEVERE Thu Jun 27 13:55:21 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_AP_LinkageInformation_Extract_Full.txt SDE_ORA_GL_AP_LinkageInformation_Extract_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
139 SEVERE Thu Jun 27 13:55:21 SGT 2013 Number of running sessions : 9
140 SEVERE Thu Jun 27 13:55:27 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_PO_LinkageInformation_Extract_Full' has completed with error code 0
141 SEVERE Thu Jun 27 13:55:27 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_ARSubType_ExtractApplication.txt SDE_ORA_Stage_TransactionTypeDimension_ARSubType_ExtractApplication
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
142 SEVERE Thu Jun 27 13:55:27 SGT 2013 Number of running sessions : 8
143 SEVERE Thu Jun 27 13:55:28 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\DataWarehouse.DataWarehouse.SILOS.SIL_HourOfDayDimension.txt SIL_HourOfDayDimension
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
144 SEVERE Thu Jun 27 13:55:29 SGT 2013 Number of running sessions : 7
145 SEVERE Thu Jun 27 13:55:47 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_PO_LinkageInformation_Extract_Full.txt SDE_ORA_GL_PO_LinkageInformation_Extract_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
146 SEVERE Thu Jun 27 13:55:47 SGT 2013 Number of running sessions : 7
147 SEVERE Thu Jun 27 13:55:53 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_AR_REV_LinkageInformation_Extract_Full' has completed with error code 0
148 SEVERE Thu Jun 27 13:55:53 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GLJournals_Full.txt SDE_ORA_GLJournals_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
149 SEVERE Thu Jun 27 13:56:00 SGT 2013
ANOMALY INFO::: DataWarehouse:SELECT COUNT(*) FROM W_ORA_GLRF_DERV_F_TMP
ORA-00942: table or view does not exist
MESSAGE:::ORA-00942: table or view does not exist
EXCEPTION CLASS::: java.sql.SQLSyntaxErrorException
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:194)
oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:853)
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1469)
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:389)
com.siebel.etl.database.cancellable.CancellableStatement.executeQuery(CancellableStatement.java:69)
com.siebel.etl.database.DBUtils.executeQuery(DBUtils.java:150)
com.siebel.etl.database.WeakDBUtils.executeQuery(WeakDBUtils.java:242)
com.siebel.analytics.etl.etltask.CountTableTask.doExecute(CountTableTask.java:99)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:411)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
com.siebel.etl.engine.core.Session.run(Session.java:3251)
java.lang.Thread.run(Thread.java:619)
150 SEVERE Thu Jun 27 13:56:00 SGT 2013
ANOMALY INFO::: Error while executing : COUNT TABLE:W_ORA_GLRF_DERV_F_TMP
MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:SELECT COUNT(*) FROM W_ORA_GLRF_DERV_F_TMP
ORA-00942: table or view does not exist
EXCEPTION CLASS::: java.lang.Exception
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:450)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
com.siebel.etl.engine.core.Session.run(Session.java:3251)
java.lang.Thread.run(Thread.java:619)
::: CAUSE :::
MESSAGE:::DataWarehouse:SELECT COUNT(*) FROM W_ORA_GLRF_DERV_F_TMP
ORA-00942: table or view does not exist
EXCEPTION CLASS::: com.siebel.etl.database.IllegalSQLQueryException
com.siebel.etl.database.DBUtils.executeQuery(DBUtils.java:160)
com.siebel.etl.database.WeakDBUtils.executeQuery(WeakDBUtils.java:242)
com.siebel.analytics.etl.etltask.CountTableTask.doExecute(CountTableTask.java:99)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:411)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
com.siebel.etl.engine.core.Session.run(Session.java:3251)
java.lang.Thread.run(Thread.java:619)
::: CAUSE :::
MESSAGE:::ORA-00942: table or view does not exist
EXCEPTION CLASS::: java.sql.SQLSyntaxErrorException
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:194)
oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:853)
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1469)
oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:389)
com.siebel.etl.database.cancellable.CancellableStatement.executeQuery(CancellableStatement.java:69)
com.siebel.etl.database.DBUtils.executeQuery(DBUtils.java:150)
com.siebel.etl.database.WeakDBUtils.executeQuery(WeakDBUtils.java:242)
com.siebel.analytics.etl.etltask.CountTableTask.doExecute(CountTableTask.java:99)
com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:411)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
com.siebel.etl.engine.core.Session.run(Session.java:3251)
java.lang.Thread.run(Thread.java:619)
151 SEVERE Thu Jun 27 13:56:12 SGT 2013 Number of running sessions : 10
152 SEVERE Thu Jun 27 13:56:12 SGT 2013 Number of running sessions : 10
153 SEVERE Thu Jun 27 13:56:13 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_AR_REV_LinkageInformation_Extract_Full.txt SDE_ORA_GL_AR_REV_LinkageInformation_Extract_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
154 SEVERE Thu Jun 27 13:56:13 SGT 2013 Number of running sessions : 10
155 SEVERE Thu Jun 27 13:56:17 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTermsDimension_Full' has completed with error code 0
156 SEVERE Thu Jun 27 13:56:18 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_PartyContactStaging_Full' has completed with error code 0
157 SEVERE Thu Jun 27 13:56:18 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_MovementTypeDimension' has completed with error code 0
158 SEVERE Thu Jun 27 13:56:18 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionSourceDimension_AP_LKP_Extract_Full' has completed with error code 0
159 SEVERE Thu Jun 27 13:56:19 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionSourceDimension_AP_CC_Extract' has completed with error code 0
160 SEVERE Thu Jun 27 13:56:38 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTermsDimension_Full.txt SDE_ORA_APTermsDimension_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
161 SEVERE Thu Jun 27 13:56:38 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_PartyContactStaging_Full.txt SDE_ORA_PartyContactStaging_Full
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
162 SEVERE Thu Jun 27 13:56:38 SGT 2013 Number of running sessions : 10
163 SEVERE Thu Jun 27 13:56:38 SGT 2013 Number of running sessions : 9
164 SEVERE Thu Jun 27 13:56:39 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_TransactionSourceDimension_AP_CC_Extract.txt SDE_ORA_TransactionSourceDimension_AP_CC_Extract
Status Desc : Succeeded
WorkFlowMessage : Workflow executed successfully.
Error Message : Successfully completed.
ErrorCode : 0
165 SEVERE Thu Jun 27 13:56:39 SGT 2013 Number of running sessions : 8
166 SEVERE Thu Jun 27 13:56:44 SGT 2013 Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_COGS_LinkageInformation_Extract_Full' has completed with error code 0
167 SEVERE Thu Jun 27 13:57:04 SGT 2013 pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_COGS_LinkageInformation_Extract_Full.txt SDE_ORA_GL_COGS_LinkageInformation_Extract_Full
Status Desc : Succeeded
After a certain number of tasks, the Informatica workflows stopped running. In my case it happened after 190 of 403 tasks, but sometimes after 183 of 403, so it is intermittent.
These are sample tasks that were still shown as running:
SDE_ORA_EmployeeDimension
SDE_ORA_GeoCountryDimension
SDE_ORA_MovementTypeDimension
SDE_ORA_Project
SDE_ORA_StatusDimension_ProjectStatus
SDE_ORA_TransactionSourceDimension_AP_LKP_Extract
SDE_ORA_TransactionTypeDimension_APDerive
SDE_ORA_TransactionTypeDimension_GLCOGSDerive
In Informatica Monitor I found this
The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
Does that mean something?
I have tried bouncing the Informatica server and the DAC server and restarting the ETL, but the problem still happens.
Did I miss something in the DAC or Informatica configuration?
Thanks
Joni

NagarajuCool Jul 8, 2013 6:49 AM (in response to Joni)

Hi Naga,
1) Checked the relational connections in Workflow Manager once; the connection name is the same as the TNS name.
2) The name should be the same in Workflow Manager, ODBC and the DAC client: in DAC I set it the same as the SID, and the connection test was successful.
3) The custom property is set on the Informatica Integration Service: overrideMpltVarwithMapVar = Yes.
4) Stopped the Informatica services, copied the pmcmd file from the server to the client, and started the services again. Done.
5) Checked all of the DAC configuration once more. Done.
After applying the above settings, the DAC execution plan for Financial Analytics still takes a long time. Is there anything else I should try?
Thanks,
Joni
-
Hello,
I am working on Oracle 11g R2 on AIX.
One query was performing well, around 20 seconds, but suddenly it started taking more than 15 minutes.
We checked and found that the SQL execution plan had changed: the order of operations is different, for example the order in which indexes are used.
The new plan is not good, and we want to force the SQL to use the old plan in the future.
I read about SQL Plan Management; it describes a manual method to create a baseline and evolve the plans. In one example, the first execution plan was created without an index and the second one with an index, so the second plan was good and was accepted.
But in our case we do not need to change anything; the query is performing badly, probably because of the changed order of operations.
Another way is to use hints, but for that we would need to change the SQL statements, which is not possible in production now.
The issue is that we would need to run the SQL again, and Oracle may not create a plan like the old one, so we would not have the old good plan to accept.
Both execution plans are already in the cache.
I am looking for a way to take the plan hash value of the good plan (or any other identifier of that plan) and force Oracle to use only that plan.
Any idea how to do it?

Stored Outlines are deprecated.
OP:
To fix a specific plan you have two choices:
1. SQL Plan Baselines: assuming the "good" plan is still in AWR, the steps are roughly to load the old plan from AWR into a SQL tuning set using DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY and DBMS_SQLTUNE.LOAD_SQLSET, then load the plans from the tuning set into a SQL plan baseline using DBMS_SPM.LOAD_PLANS_FROM_SQLSET.
2. SQL Profiles: fix the outline hints, similar to a stored outline but using the SQL profile mechanism, via the coe_xfr_sql_profile.sql script, part of the approach in Oracle Support Doc ID 215187.1.
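For option 1, a minimal sketch of the AWR-to-baseline route. Hedged: the tuning-set name 'my_sts', the snapshot range 100-110 and the sql_id 'abcd1234efgh5' are placeholders you would replace with your own values.

```sql
-- Step 1: pull the historical plan out of AWR into a SQL tuning set.
DECLARE
  l_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'my_sts');
  OPEN l_cursor FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap   => 100,   -- placeholder snapshot range
                   end_snap     => 110,
                   basic_filter => 'sql_id = ''abcd1234efgh5''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'my_sts', populate_cursor => l_cursor);
END;
/

-- Step 2: load the captured plan(s) into a SQL plan baseline.
DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'my_sts');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_loaded);
END;
/
```

Plans loaded manually this way are marked ACCEPTED by default, so the optimizer will prefer them over the newer, unaccepted plan for that statement.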
But +1 for Nikolay's recommendation of understanding whether there is a root cause to this plan instability (plan instability being "normal", but flip-flopping between "good" and "bad" plans being a problem). Cardinality feedback is an obvious possible influence; different peeked binds and statistics changes are others. -
How to get rid of 'BITMAP CONVERSION' in Execution Plan.
Hi, I am using Oracle 10g.
The execution path of the query looks like the excerpt below (I have posted only part of it, not all of it).
I want to get rid of the 'BITMAP CONVERSION' section of the plan. Is there any hint for that in Oracle?
| 56 | TABLE ACCESS BY INDEX ROWID | XMVL_ONLINE_VIEW_BASE | 7274 | 1662K| | 3320 (21)| 00:00:17 |
| 57 | BITMAP CONVERSION TO ROWIDS | | | | | | |
| 58 | BITMAP AND | | | | | | |
| 59 | BITMAP OR | | | | | | |
| 60 | BITMAP CONVERSION FROM ROWIDS| | | | | | |
| 61 | SORT ORDER BY | | | | | | |
| 62 | INDEX RANGE SCAN | IDX_XMVLONLINEVIEW_BCOMPANYPK | | | | 4 (50)| 00:00:01 |
| 63 | BITMAP CONVERSION FROM ROWIDS| | | | | | |
| 64 | SORT ORDER BY | | | | 3608K| | |
| 65 | INDEX RANGE SCAN | IDX_XMVLONLINEVIEW_BCOMPANYPK | | | | 4 (50)| 00:00:01 |
| 66 | BITMAP CONVERSION FROM ROWIDS | | | | | | |
| 67 | SORT ORDER BY | | | | 3288K| | |
| 68 | INDEX RANGE SCAN | IDX_XMVLVIEW_UPPERVENDORNAME | | | | 59 (45)| 00:00:01 |

Mark,
Please check below link :
http://www.orafaq.com/node/1420
In the above link there is a query :
EXPLAIN PLAN FOR
SELECT *
FROM ef_actl_expns
WHERE lbcr_sk IN (
SELECT lbcr_sk
FROM ed_lbr_cst_role
WHERE lbr_actv_typ_cd = 'A');
With ALTER SESSION SET OPTIMIZER_MODE = 'FIRST_ROWS' there is a "BITMAP CONVERSION TO ROWIDS" step in the execution plan, but with ALTER SESSION SET OPTIMIZER_MODE = 'FIRST_ROWS_1' there is no "BITMAP CONVERSION" in the plan. So, can we suggest that the OP go with ALTER SESSION SET OPTIMIZER_MODE = 'FIRST_ROWS_1'?
But yes, as you stated, the fourth digit of the Oracle version is also missing in the above link. I am just asking whether it is good to go with ALTER SESSION SET OPTIMIZER_MODE = 'FIRST_ROWS_1'. Because "BITMAP CONVERSION" generally happens for star transformation, I think the link below may also be of interest to the OP:
http://docs.oracle.com/cd/B19306_01/server.102/b14223/schemas.htm
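If changing the optimizer mode for the whole session is too broad, a statement-level sketch that is sometimes suggested is to switch off b-tree-to-bitmap conversion with the OPT_PARAM hint on the hidden, undocumented parameter _b_tree_bitmap_plans (test this on your version before relying on it). The table name comes from the posted plan; the predicate is a hypothetical placeholder.

```sql
-- Disable BITMAP CONVERSION FROM/TO ROWIDS for this statement only.
-- _b_tree_bitmap_plans is an undocumented parameter: verify before use.
SELECT /*+ OPT_PARAM('_b_tree_bitmap_plans' 'false') */ *
FROM   xmvl_online_view_base
WHERE  upper_vendor_name LIKE 'ACME%';   -- hypothetical predicate
```

The same parameter can be set for testing at session level with ALTER SESSION SET "_b_tree_bitmap_plans" = FALSE.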
Regards
Girish Sharma -
How to get execution plan in a CLOB?
Hello,
I want to get the execution plans of a query in CLOB format. I am trying to run the following query against the v$sql view; one of the columns I want is the execution plan for the query, but I am getting the following error. Eventually, I am going to insert this data into a log table to keep a history of all SQL statements and their execution plans.
How can I do that?
Thank you in advance.
SQL> SELECT sql_id,
2 plan_hash_value,
3 TO_CLOB (DBMS_XPLAN.display_awr (sql_id => sql_id,
4 plan_hash_value => plan_hash_value,
5 format => '+PEEKED_BINDS'))
6 sql_plan_clob,
7 buffer_gets,
8 executions
9 FROM v$sql
10 WHERE ROWNUM < 5;
TO_CLOB (DBMS_XPLAN.display_awr (sql_id => sql_id,
ERROR at line 3:
ORA-00932: inconsistent datatypes: expected NUMBER got SYS.DBMS_XPLAN_TYPE_TABLE

Try first:

select *
from table(DBMS_XPLAN.display_awr(sql_id => :sql_id,
                                  plan_hash_value => :plan_hash_value,
                                  format => '+PEEKED_BINDS'))

to see the table returned,
then try to adapt http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:68212348056 to produce fixed-length rows.
Look for the describe_columns procedures in http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_sql.htm#i1025449 for the column-length information available in internal tables, and write the rows directly to a CLOB instead of writing to disk and loading your CLOB from there.
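A minimal PL/SQL sketch of that direction, concatenating the plan lines straight into a CLOB. The sql_id literal is a placeholder; in a real logging job you would loop over the statements you want to capture.

```sql
DECLARE
  l_plan CLOB;
BEGIN
  FOR r IN (SELECT plan_table_output
            FROM   TABLE(DBMS_XPLAN.DISPLAY_AWR(
                     sql_id          => 'abcd1234efgh5',   -- placeholder
                     plan_hash_value => NULL,
                     format          => '+PEEKED_BINDS')))
  LOOP
    l_plan := l_plan || r.plan_table_output || CHR(10);
  END LOOP;
  -- l_plan can now be inserted into the history table alongside
  -- sql_id, plan_hash_value, buffer_gets and executions.
END;
/
```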
Regards
Etbin
Edited by: Etbin on 31.10.2012 19:56
pointing to dbms_sql package added -
Same query at same time, but different execution plans from two schemas
Hi!
We just had some performance problems in our production system and I would like to ask you for some advice.
A select query run by USER1 against SCHEMA1 was doing a full scan of a huge table.
Using the session browser in TOAD, I copied the SQL statement, logged on as SCHEMA1 and ran the same query. I got a different execution plan, one that avoided the table scan.
So my question is:
How is it possible that the same query gets different execution plans when run from two different schemas at the same time?
Some more information:
The user USER1 runs "alter session set current_schema=SCHEMA1;" when it logs on. Besides that it does nothing else, so the session parameter values are the same for USER1 and SCHEMA1.
SCHEMA1 is the schema owning the tables.
ALL_ROWS is used for both USER1 and SCHEMA1
Our database:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for Linux: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
Does anybody have suggestions as to why I experience different execution plans for the same query, run at the same time, but from different users?

Thanks for the clarification of the schema structure.
What happens if, instead of setting the current session schema to SCHEMA1, you simply add the schema name to all tables, views and other objects inside your select statement?
As in: select * from schema1.dual;
I know that this is not what you want eventually, but it might help to find any misleading objects.
Furthermore, it is not clear what you meant by "avoided a table scan".
Did you avoid a full table scan (FTS) or was the table completely removed from the execution plan?
Can you post both plans?
Edited by: Sven W. on Mar 30, 2010 5:27 PM
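On 9.2, one way to see exactly what the optimizer is doing differently in the two sessions is a 10053 optimizer trace from each: the header of the trace file dumps the optimizer parameters in effect for that session, which you can then diff between the USER1 and SCHEMA1 runs. A sketch of the per-session steps:

```sql
-- In each of the two sessions, enable the optimizer trace,
-- run the problem query, then turn the trace off again.
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- ... run the problem query in this session ...

ALTER SESSION SET EVENTS '10053 trace name context off';
```

Comparing the "PARAMETERS USED BY THE OPTIMIZER" sections of the two trace files (written to user_dump_dest) usually reveals any parameter or statistics difference behind diverging plans.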