Multi-source execution plan build issue

Hi,
I am trying to create/build a multi-source (homogeneous) execution plan in DAC from 2 containers for 2 of the same subject areas (Financial Payables) from 2 different EBS sources. I successfully built each individual subject area in each container and ran an execution plan for each individually, and it works fine. Now I am trying to create 1 execution plan based on the DAC steps. So far I have done the following:
- Configured all items for both subject areas in the DAC (tables, tasks, task groups, indexes, connections, Informatica logical and physical folders, etc.)
- Assigned EBS system A a different priority than EBS system B under Setup -> Physical Data Sources
- I noticed that I have to change the physical folder priorities for each Informatica folder (SDE for container A versus SDE for container B). I assigned system A a higher priority
- Built the execution plan
I am able to build the execution plan successfully but I have the following issues/questions:
1) I assumed that by doing the steps above, it would ONLY execute the extract (SDE) for BOTH system containers but have only ONE load (SIL and PLP). I do see that the SDEs are OK, but I see the SIL for Insert Row in Run Table running for BOTH. Why is this? When I run the EP, I get a unique constraint index error, since it is entering two records, one for each system. Isn't the DAC supposed to include only one instance of this task?
2) When I build the execution plan, it includes the SILOS and PLP tasks from BOTH source system containers (SILOS and PLP folders exist in both containers). Why is this? I thought that there is only one set of LOAD tasks and only the SDEs are run for each source (as this is a homogeneous load).
3) What exactly does the physical folder priority do? How is this different from the source system priority? When I have a multi-source execution plan, do I need to assign physical folder priorities to just the SDE folders?
4) When we run a multi-source execution plan, after the first full load, can we somehow allow incremental loads only from the container A subject area? Basically, I don't want to load incrementally from source system A after the first full load.
5) Do I have to set a DELAY? In my case, both systems are in the same time zone, so I assume I can leave the DELAY option blank. Is that correct?
Thanks in advance

Hi,
You have 2 sources, e.g. Ora11510 and OraR1211, so you will have 2 DAC containers. You need the below mandatory changes for your issue:
+++++++++++++++++++++++++++++++++
Message: Database errors occurred:
ORA-00001: unique constraint (XXDBD_OBAW.W_ETL_RUN_S_U2) violated while inserting into W_ETL_RUN_S
You need to inactivate 2 tasks in the R12 container:
#1 Load Row into Run Table
#2 Update Row into Run Table
+++++++++++++++++++++++++++++++++
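To see exactly which columns the unique index in that ORA-00001 covers (and hence why one row per source container collides), you can query the dictionary; a quick sketch, assuming the warehouse owner from the error message above:
-- Lists the columns behind the violated unique index on W_ETL_RUN_S
SELECT index_name, column_name, column_position
FROM   all_ind_columns
WHERE  index_owner = 'XXDBD_OBAW'
AND    index_name  = 'W_ETL_RUN_S_U2'
ORDER  BY column_position;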
There are other tasks that have to be executed only once
(i.e. inactivate the below in one of the containers):
SIL_TimeOfDayDimension
SIL_DayDimension_GenerateSeed
SIL_DayDimension_CleanSeed
SIL_CurrencyTypes
SIL_Stage_GroupAccountNumberDimension_FinStatementItem
SIL_ListOfValuesGeneral_Unspecified
PLP_StatusDimension_Load_StaticValues
SIL_TimeDimension_CalConfig
SIL_GlobalCurrencyGeneral_Update (do NOT inactivate this; check for any issues while running)
Update Parameters (do NOT inactivate this; check for any issues while running)
+++++++++++++++++++++++++++++++++++
Task :SDE_ORA_EmployeeDimension_Addresses
Unique Index Failure on "W_EMP_D_ADR_TMP_U1"
As you are loading from 11.5.10 & R12, for certain data which is common across the systems the ETL index creation fails.
Customize the index creation in DAC with an additional unique column (DATASOURCE_NUM_ID).
++++++++++++++++++++++++++++++++++++
Task :SDE_ORA_GeoCountryDimension
Unique Index Failure on "W_GEO_COUNTRY_DS_P1 " As you are loading from 11.5.10 & R12 , for certain data which is common across the systems the ETL index creation Fails.
Option1) Customize the Index Creation in DAC with another unique columns (data_source_numID)
++++++++++++++++++++++++++++++++++
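If you prefer to prototype the index change in SQL before editing the index definition in DAC, it looks roughly like the sketch below. The existing key columns are an assumption (pull the real ones from ALL_IND_COLUMNS first), and the same column must be added to the index in DAC so the next load does not recreate the old definition:
-- Recreate the failing unique index with DATASOURCE_NUM_ID appended, so rows
-- that are identical across 11.5.10 and R12 no longer violate uniqueness.
DROP INDEX W_EMP_D_ADR_TMP_U1;
CREATE UNIQUE INDEX W_EMP_D_ADR_TMP_U1
  ON W_EMP_D_ADR_TMP (INTEGRATION_ID, DATASOURCE_NUM_ID);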
These changes are mandatory.
Regards,
Kumar

Similar Messages

  • DAC Execution plan build error - for multi-sources

We are implementing BI Apps 7.9.6.1 Financial & HR Analytics. We have multiple sources (PeopleSoft (Oracle) 8.9 & 9.0, DB2, flat file ...) and built 4 containers, one for each source. We can build the execution plan successfully; however, we get the below error when trying to reset the sources. This message is very sporadic. We used a workaround of running the build and resetting the sources multiple times. This workaround seems to work when we have 3 containers in the execution plan, but it is not working for 4 containers. Has anybody come across this issue?
    Thank you in advance for your help.
    DAC ver 10.1.3.4.1 patch .20100105.0812 Build date: Jan 5 2010
    ANOMALY INFO::: Failure
    MESSAGE:::com.siebel.analytics.etl.execution.NoSuchDatabaseException: No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
    com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:62)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    ::: CAUSE :::
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.substituteNodeTables(ExecutionParameterHelper.java:176)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.retrieveExecutionPlanTasks(ExecutionPlanDesigner.java:420)
    com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:60)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

    Hi, In reference to this message
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
1. I notice that you are using the custom DAC logical name DBConnection_OLTP_ELM.
2. When you Generate Parameters before building the execution plan, can you please verify for which source system container you are using this logical name DBConnection_OLTP_ELM as a source, and what value is assigned to it?
3. Are you building the execution plan with subject areas from 4 containers? Did you Generate Parameters before building the execution plan?
4. Also verify, at the DAC task level for the 4th container, what the Primary Source value is for all the tasks (TASK_GROUP_Extract_CodeDimension).

  • Error in DAC 7.9.4 while building the execution plan

I'm getting a Java exception (EXCEPTION CLASS::: java.lang.NullPointerException) while building the execution plan. The parameters are properly generated.
    Earlier we used to get the error - No physical database mapping for the logical source was found for :DBConnection_OLAP as used in QUERY_INDEX_CREATION(DBConnection_OLAP->DBConnection_OLAP)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
We resolved this issue by using the built-in connection parameter, i.e. DBConnection_OLAP. This connection parameter has to be used because the execution plan cannot be built without an OLAP connection.
We are not using the 7.9.4 OLAP data model since we have a highly customized 7.8.3 OLAP model. We have imported the 7.8.3 tables into DAC.
We have created all the tasks with the synchronization method, and created the task group and subject area. We are using the built-in DBConnection_OLAP and DBConnection_OLTP parameters and pointed them to the relevant databases.
System setup:
OBI DAC server - Windows server
Informatica server and repository server 7.1.4 - installed on the local machine with PATH variables provided.
Is this problem related to the different versions, i.e. we are using OBI DAC 7.9.4 and the underlying data model is 7.8.3?
    Please help,
    Thanks and regards,
    Ashish

    Hi,
Can anyone help me here, as I am stuck with the following issue?
I have created a command task in a workflow in Informatica that executes a script in Unix to purge the cache on OBIEE. I want that workflow added as a task in DAC to an already existing plan, and it should run last whenever the incremental load happens.
I created a task in DAC with the name of the workflow, WF_AUTO_PURGE, and added it as a following task in the execution plan. The problem is that when I try to build the plan after adding the task, it gives the following error:
MESSAGE:::Error while loading pre post steps for Execution Plan CompleteLoad_withDelete: No physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE (DBConnection_INFA->DBConnection_INFA)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1317)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    ::: CAUSE :::
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE(DBConnection_INFA->DBConnection_INFA)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.substitute(ExecutionParameterHelper.java:208)
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.parameterizeTask(ExecutionParameterHelper.java:139)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.handlePrePostTasks(ExecutionPlanDesigner.java:949)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:790)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    Regards,
    Arul

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need and set whatever is required before running it.
QUESTION - When I copy the container, should I delete all the subject areas I don't need out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have it hold just a few subject areas, rather than wait till I build the execution plan and then focus on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
I would suggest that you leave the subject areas in place and just not include them in the execution plan. Otherwise you run the risk of needing to include another subject area in the future and having to go through the hassle of recreating it in your SSC (source system container).
    Regards,
    Matt

  • OBI DAC 7.9.5 Unable to Build a new Execution Plan

    Hi,
I have created a new execution plan. When I try to do the "Build", I am getting an error. Details are below.
    Here is the list of steps I tried:
    1. Created a new Execution Plan
    2. Associated Subject Area
3. Generated Parameters
    4. Made sure that Logical Datasources have mapping with the physical ones
    5. Clicked on Build
    It always gives the below error:
    MESSAGE:::No physical database mapping for the logical source was found for :
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.substituteNodeTables(ExecutionParameterHelper.java:187)
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.parameterizeTask(ExecutionParameterHelper.java:152)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:619)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1106)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.buildWithConfirmation(EtlDefnTable.java:230)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.build(EtlDefnTable.java:150)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.performOperation(EtlDefnTable.java:99)
    com.siebel.analytics.etl.client.view.table.BasicTable.actionPerformed(BasicTable.java:990)
    javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1849)
    javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2169)
    javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:420)
    javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:258)
    javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:236)
    java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:231)
    java.awt.Component.processMouseEvent(Component.java:5488)
    javax.swing.JComponent.processMouseEvent(JComponent.java:3126)
    java.awt.Component.processEvent(Component.java:5253)
    java.awt.Container.processEvent(Container.java:1966)
    java.awt.Component.dispatchEventImpl(Component.java:3955)
    java.awt.Container.dispatchEventImpl(Container.java:2024)
    java.awt.Component.dispatchEvent(Component.java:3803)
    java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4212)
    java.awt.LightweightDispatcher.processMouseEvent(Container.java:3892)
    java.awt.LightweightDispatcher.dispatchEvent(Container.java:3822)
    java.awt.Container.dispatchEventImpl(Container.java:2010)
    java.awt.Window.dispatchEventImpl(Window.java:1778)
    java.awt.Component.dispatchEvent(Component.java:3803)
    java.awt.EventQueue.dispatchEvent(EventQueue.java:463)
    java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:242)
    java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:163)
    java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:157)
    java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:149)
    java.awt.EventDispatchThread.run(EventDispatchThread.java:110)
I have verified every data source and all seem to be correct.
Could someone please help me?
    Regards,
    ksks

    Hi,
If you have created some custom mappings in Informatica, please make sure that those folders are added in DAC. If you can provide some more details about the problem, it would be better.
    Thanks,
    Navin Kumar Bolla

  • Error while building execution plan

    Hi, I'm trying to create an execution plan with container EBS 11.5.10 and subject area Project Analytics.
    I get this error while building:
    PA-EBS11510
    MESSAGE:::group TASK_GROUP_Load_PositionHierarchy for SIL_PositionDimensionHierarchy_PostChangeTmp is not found!!!
    EXCEPTION CLASS::: java.lang.NullPointerException
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:818)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Sorry for my English, I'm French.
    Thank you for helping me

    Hi,
Find all the subject areas in the execution plan that contain the task 'SIL_PositionDimensionHierarchy_PostChangeTmp', and add the 'TASK_GROUP_Load_PositionHierarchy' task group to those subject areas.
Reassemble your subject areas and build the execution plan again.
    Thanks

  • BI Apps DAC error while building execution plan

While building an execution plan in DAC I am getting the following error.
    C_MICRO_INCR_LOAD_V1
    MESSAGE:::group TASK_GROUP_Past_Due_Cost for PLP_ARSnapshotInvoiceAging is not found!!!
    EXCEPTION CLASS::: java.lang.NullPointerException
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:818)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

    Hi,
    Go To Views -> Design -> Subject Areas tab and select your Subject Area.
Upon selecting the subject area, in the lower pane you will find the Tasks tab. Click on the Tasks tab; an Add/Remove button will appear.
Click on the Add/Remove button; a dialog box will be shown. In it, click on the Query button, enter the task group name *"TASK_GROUP_Past_Due_Cost"*, and click on the Go button.
Once that task group appears, click on the Add button and then on the Save button.
This will add that particular task group to your subject area. Once these steps are done, build the execution plan and start the DAC load.
    Hope this helps....
    Thanks,
    Navin Kumar Bolla

  • Issue in  pulling  the execution plan from awrsqrpt report.

    Hi All,
In my production database we recently faced a performance issue in a daily job, and I would like to pull the old execution plan of that particular job from the awrsqrpt.sql report, but I got the below error.
Interestingly, I am able to generate the ADDM and AWR reports between the same snap IDs. Why not the AWRSQRPT report?
    Version : Oracle 11gR2
    Error :
    +++++++++
    Specify the SQL Id
    ~~~~~~~~~~~~~~~~~~
    Enter value for sql_id: b9shw6uakgbdt
    SQL ID specified: b9shw6uakgbdt
    declare
    ERROR at line 1:
    ORA-20025: SQL ID b9shw6uakgbdt does not exist for this database/instance
    ORA-06512: at line 22
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    Old history of SQL id :
    ++++++++++++++++++++
    SQL> select distinct SNAP_ID,SESSION_ID,SESSION_SERIAL#,USER_ID, SQL_EXEC_START from dba_hist_active_sess_history where sql_id='b9shw6uakgbdt' order by SQL_EXEC_START;
    SNAP_ID SESSION_ID SESSION_SERIAL# USER_ID SQL_EXEC_
    13095 1026 23869 86 29-AUG-12
    13096 1026 23869 86 29-AUG-12
    13118 582 14603 95 30-AUG-12
    13119 582 14603 95 30-AUG-12
    13139 708 51763 95 30-AUG-12
    13140 708 51763 95 30-AUG-12
    13142 900 2897 86 31-AUG-12
    13143 900 2897 86 31-AUG-12
    13215 1285 62559 86 03-SEP-12
    13216 1285 62559 86 03-SEP-12
    13238 1283 9057 86 04-SEP-12
    13239 1283 9057 86 04-SEP-12
    Thanks

    Hi,
    Are you using a cluster database (RAC), and running this report on the wrong instance?
    This report validates the SQL ID you specify against the dba_hist_sqlstat view, so check there if you have that SQL ID:
    select dbid, instance_number, snap_id
    from dba_hist_sqlstat
where sql_id = 'b9shw6uakgbdt';
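If the SQL ID shows up under a different instance_number than the instance you ran the report from (RAC), you can also try the per-instance variant of the report, awrsqrpi.sql, which prompts for the dbid and instance number explicitly:
-- Run from SQL*Plus; supply dbid, instance number, snap range and SQL ID when prompted
@?/rdbms/admin/awrsqrpi.sql
Regards.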
    Nelson

  • Subquery execution plan issue

    Hi All,
    Oracle v11.2.0.2
    I have a SELECT query which executes in less than a second and selects few records.
    Now, if I put this SELECT query in IN clause of a DELETE command, that takes ages (even when DELETE is done using its primary key).
    See below query and execution plan.
    Here is the SELECT query
    SQL> SELECT   ITEM_ID
      2                         FROM   APP_OWNER.TABLE1
      3                        WHERE   COLUMN1 = 'SomeValue1234'
      4                                OR (COLUMN1 LIKE 'SomeValue1234%'
      5                                    AND REGEXP_LIKE (
      6                                          COLUMN1,
      7                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
      8  ));
       ITEM_ID
      74206192
    1 row selected.
    Elapsed: 00:00:40.87
    Execution Plan
    Plan hash value: 3153606419
    | Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |             |     2 |    38 |     7   (0)| 00:00:01 |
    |   1 |  CONCATENATION     |             |       |       |            |          |
    |*  2 |   INDEX RANGE SCAN | PK_TABLE1   |     1 |    19 |     4   (0)| 00:00:01 |
    |*  3 |   INDEX UNIQUE SCAN| PK_TABLE1   |     1 |    19 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("COLUMN1" LIKE 'SomeValue1234%')
           filter("COLUMN1" LIKE 'SomeValue1234%' AND  REGEXP_LIKE
                  ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
       3 - access("COLUMN1"='SomeValue1234')
           filter(LNNVL("COLUMN1" LIKE 'SomeValue1234%') OR LNNVL(
                  REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$')))
    Statistics
              0  recursive calls
              0  db block gets
              8  consistent gets
              0  physical reads
              0  redo size
            348  bytes sent via SQL*Net to client
            360  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
1  rows processed
Now see the DELETE command. ITEM_ID is the primary key for TABLE2:
    SQL> delete from TABLE2 where ITEM_ID in (
      2  SELECT   ITEM_ID
      3                         FROM   APP_OWNER.TABLE1
      4                        WHERE   COLUMN1 = 'SomeValue1234'
      5                                OR (COLUMN1 LIKE 'SomeValue1234%'
      6                                    AND REGEXP_LIKE (
      7                                          COLUMN1,
      8                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
      9  ))
    10  );
    1 row deleted.
    Elapsed: 00:02:12.98
    Execution Plan
    Plan hash value: 173781921
    | Id  | Operation               | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT        |                             |     4 |   228 | 63490   (2)| 00:12:42 |
    |   1 |  DELETE                 | TABLE2                      |       |       |            |          |
    |   2 |   NESTED LOOPS          |                             |     4 |   228 | 63490   (2)| 00:12:42 |
    |   3 |    SORT UNIQUE          |                             |     1 |    19 | 63487   (2)| 00:12:42 |
    |*  4 |     INDEX FAST FULL SCAN| I_TABLE1_3                  |     1 |    19 | 63487   (2)| 00:12:42 |
    |*  5 |    INDEX RANGE SCAN     | PK_TABLE2                   |     7 |   266 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("COLUMN1"='SomeValue1234' OR "COLUMN1" LIKE 'SomeValue1234%' AND
                  REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
       5 - access("ITEM_ID"="ITEM_ID")
    Statistics
              1  recursive calls
              5  db block gets
         227145  consistent gets
         167023  physical reads
            752  redo size
            765  bytes sent via SQL*Net to client
           1255  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
1  rows processed
What can be the issue here?
I tried the NO_UNNEST hint, which made a difference, but the DELETE still took around a minute (instead of 2 minutes), which is still way more than the sub-second response.
    Thanks in advance

    rahulras wrote:
    SQL> delete from TABLE2 where ITEM_ID in (
    2  SELECT   ITEM_ID
    3                         FROM   APP_OWNER.TABLE1
    4                        WHERE   COLUMN1 = 'SomeValue1234'
    5                                OR (COLUMN1 LIKE 'SomeValue1234%'
    6                                    AND REGEXP_LIKE (
    7                                          COLUMN1,
    8                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
    9  ))
    10  );
    The optimizer will transform this delete statement into something like:
    delete from table2 where rowid in (
        select t2.rowid
        from
            table2 t2,
            table1 t1
        where
                t1.itemid = t2.itemid  
        and     (t1.column1 =  etc.... )
)
With the standalone subquery against t1 the optimizer has been a little clever with the concatenation operation, but it looks as if there is something about this transformed join that makes it impossible for the concatenation mechanism to be used. I'd also have to guess that something about the way the transformation has happened has made Oracle "lose" the PK index. As I said in another thread a few minutes ago, I don't usually look at 10053 trace files to solve optimizer problems, but this is the second one today where I'd start looking at the trace if it were my problem.
    You could try rewriting the query in this explicit join and select rowid form - that way you could always force the optimizer into the right path through table1. It's probably also possible to hint the original to make the expected path appear, but since the thing you hint and the thing that Oracle optimises are so different it might turn out to be a little difficult. I'd suggest raising an SR with Oracle.
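For what it's worth, a sketch of that explicit-join, select-rowid rewrite using the names from this thread (untested, so treat it as a starting point rather than a drop-in fix):
delete from table2
where rowid in (
    select t2.rowid
    from   app_owner.table1 t1,
           table2 t2
    where  t2.item_id = t1.item_id
    and    (t1.column1 = 'SomeValue1234'
            or (t1.column1 like 'SomeValue1234%'
                and regexp_like(t1.column1, '^SomeValue1234[A-Z]{3}[0-9]{5}$')))
);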
    Regards
    Jonathan Lewis

  • Help in TKPROF Output: Row Source Operation v.s Execution plan confusing

    Hello,
Working with Oracle 10g on Windows, and trying to understand from the TKPROF output what the purpose of the "Row Source Operation" section is.
From the "Row Source Operation" section, the PMS_ROOM table shows 16 rows selected, accessed via a FULL scan, while the following script gives another value.
select count(*) from pms_folio gives the following:
    COUNT(*)
    148184
But in the execution plan section, the PMS_FOLIO table is accessed by ROWID after an index scan on the JIE_BITMAP_CONVERSION index.
What does "Row Source Operation" really mean compared to the execution plan, and how should both pieces of information be read to know whether the optimizer is making a wrong estimation?
Furthermore, reading 13594 buffers to fetch 2 rows suggests the SQL script itself is not efficient; the elapsed time is roughly 0.7 seconds, but shrinking the number of buffers read should shrink the response time.
The TKPROF output follows.
    Thanks very much for your help
    SELECT NVL(SUM(NVL(t1.TOTAL_GUESTS, 0)), 0)
    FROM DEV.PMS_FOLIO t1
    WHERE (t1.FOLIO_STATUS <> 'CANCEL'
    AND t1.ARRIVAL_DATE <= TO_DATE(:1, 'SYYYY/MMDDHH24MISS')
    AND t1.DEPARTURE_DATE > TO_DATE(:1, 'SYYYY/MMDDHH24MISS')
    AND t1.PRIMARY_OR_SHARE = 'P' AND t1.IS_HOUSE = 'N')
call     count    cpu  elapsed  disk  query  current  rows
Parse        1   0.00     0.00     0      0        0     0
Execute      2   0.00     0.00     0      0        0     0
Fetch        2   0.12     0.12     0  13594        0     2
total        5   0.12     0.12     0  13594        0     2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 82 (PMS5000)
    Rows Row Source Operation
    2 SORT AGGREGATE (cr=13594 pr=0 pw=0 time=120165 us)
    16 TABLE ACCESS FULL PMS_FOLIO (cr=13594 pr=0 pw=0 time=121338 us)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    2 SORT (AGGREGATE)
    16 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PMS_FOLIO'
    (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF
    'JIE_BITMAP_CONVERSION' (INDEX)

    Your query is using bind variables. Explain Plan doesn't work exactly the same way -- it can't handle bind variables.
    See http://www.oracle.com/technology/oramag/oracle/08-jan/o18asktom.html
    In your output, the row source operations listing is the real execution.
    The explain plan listing may well be misleading as Oracle uses cardinality estimates when trying to explain with bind variables.
Also, it seems that your plan table may be a 9i version, not the 10g PLAN_TABLE created by catplan.sql. There are additional columns in the 10g PLAN_TABLE that explain plan uses.
    BTW, you start off with a mention of "PMS_ROOM" showing 16 rows, but it doesn't appear in the data you have presented.
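As a cross-check, in 10g you can also pull the cursor's actual execution statistics straight from the shared pool right after running the statement (the actual row counts require the GATHER_PLAN_STATISTICS hint or STATISTICS_LEVEL=ALL):
-- Shows the last execution's actual rows next to the optimizer's estimates
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));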

  • Hi..Getting error while Building Execution Plans

    Client Profitability
    MESSAGE:::group Client Profitability for SDE_EducationalPrograms is not found!!!
    EXCEPTION CLASS::: java.lang.NullPointerException
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:818)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Please let me know what can be done to remove the error and build the execution plan successfully.

    Have you verified the task SDE_EducationalPrograms in DAC points to the appropriate folder in Informatica?

  • Partitioned views in SQL 2014 have incorrect execution plans

I've been using partitioned views in the past, and used the check constraint in the source tables to make sure that only the table matching the condition in the WHERE clause on the view was used. In SQL Server 2012 this was working just fine (I had to do some tricks to suppress parameter sniffing, but it was working correctly after doing that). Now I've installed SQL Server 2014 Developer and used exactly the same logic, and in the actual query plan it is still using the other tables. I've tried the following things to avoid this:
- OPTION (RECOMPILE)
- Using dynamic SQL to pass the parameter values as a static string.
- Setting the lazy schema validation option to true on the linked servers.
To explain what I'm doing:
    1. I have 3 servers with the same source tables, the only difference in the tables is one column with the server name.
    2. I've created a CHECK CONSTRAINT on the server name column on each server.
    3. On one of the three server (in my case server 3) I've setup linked server connections to Server 1 and 2.
4. On Server 3 I've created a partitioned view that is built up like this:
    SELECT * FROM [server1].[database].[dbo].[table]
    UNION ALL SELECT * FROM [server2].[database].[dbo].[table]
    UNION ALL SELECT * FROM [server3].[database].[dbo].[table]
5. To query the partitioned view I use a query like this:
    SELECT *
    FROM [database].[dbo].[partioned_view_name]
    WHERE [server_name] = 'Server2'
Now when I look at the execution plan in the 2014 environment, it is still using all the servers instead of just Server2 like it should. The strange thing is that SQL 2008 and 2012 work just fine, but 2014 seems not to use the correct plan. Maybe I forgot something, or something changed in 2014 and a new approach is needed, but I'm a little stuck here.
Did someone experience the same thing, and if so, how did you fix this?
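For reference, a minimal sketch of the setup described in steps 1-4 above (object names are the placeholders already used in this post; on each server the CHECK constraint would name that server, so Server2's copy is shown here):
-- On Server 2, for example, the local table only allows its own server name;
-- the partitioning column must be in the CHECK constraint and the primary key.
CREATE TABLE [dbo].[table]
(
    id          INT          NOT NULL,
    server_name VARCHAR(128) NOT NULL
        CONSTRAINT ck_table_server_name CHECK (server_name = 'Server2'),
    CONSTRAINT pk_table PRIMARY KEY (id, server_name)
);

-- On Server 3, the partitioned view unions the local and linked-server copies.
CREATE VIEW [dbo].[partioned_view_name]
AS
SELECT * FROM [server1].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server2].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server3].[database].[dbo].[table];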

    Hi Jos,
Glad to hear that you have found a solution for your problem. Thank you for sharing it; it will help other forum members who have a similar issue.
    Regards,
    Charlie Liao
    TechNet Community Support

  • HR Execution plan failing because target table needs to be extended?

I'm doing a fresh install of OBIEE 10.1.3.4.1 with DAC Build AN 10.1.3.4.1.20090415.0146. This is my first one, so is it common to have this issue, or is something else going on that I just can't see? Should I just enable autoextend on this one tablespace, or could there be corruption causing this error?
    Thanks
In DAC I can validate that Informatica and the physical servers all connect and are OK. Under Execute, I've run Analyze Repository Tables (it reports OK) and Create Repository Report (no errors or missing objects reported). When I run the execution plan for Human Resources - Oracle R1211 - Flexfield, it failed 5 of the 6 phases, where 3 stopped (never ran) and two failed (SDE_ORA_Flx_EBSSegDataTmpLoad, SDE_ORA_Flx_EBSValidationTableDataTmpLoad).
When I review the logs on the Unix server under /u02/BIAPPS_SUITE/INFR/server/infa_shared/SessLogs/ORA_R1211.DATAWAREHOUSE.SDE_ORAR1211_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.log, I see that it is failing because it has a problem with the target table W_ORA_FLX_EBS_SEG_DATA_TMP:
    READER_1_1_1> RR_4049 SQL Query issued to database : (Fri May 28 23:04:22 2010)
    READER_1_1_1> RR_4050 First row returned from database to reader : (Fri May 28 23:04:22 2010)
    WRITER_1_*_1> WRT_8167 Start loading table [W_ORA_FLX_EBS_SEG_DATA_TMP] at: Fri May 28 23:04:21 2010
    WRITER_1_*_1> CMN_1761 Timestamp Event: [Fri May 28 23:04:22 2010]
    WRITER_1_*_1> WRT_8229 Database errors occurred:
ORA-01652: unable to extend temp segment by 128 in tablespace BIA_DW_TBS
    When I review the ORA_R1211.DATAWAREHOUSE.SDE_ORAR1211_Adaptor.SDE_ORA_Flx_EBSValidationTableDataTmpLoad.log i found the following errors:
    INSERT INTO
W_ORA_FLX_EBS_VALID_TAB_TMP(FLEX_VALUE_SET_ID,APPLICATION_TABLE_NAME,VALUE_COLUMN_NAME,ID_COLUMN_NAME,VALUE_SET_WHERE_CLAUSE, DATASOURCE_NUM_ID,VALUE_SET_SQL) VALUES ( ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8020 No column marked as primary key for table [W_ORA_FLX_EBS_VALID_TAB_TMP]. UPDATEs Not Supported.
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_ORA_FLX_EBS_VALID_TAB_TMP]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    READER_1_1_1> BLKR_16007 Reader run started.
    WRITER_1_*_1> WRT_8005 Writer run started.
WRITER_1_*_1> WRT_8158
And then again the can't-extend error:
WRT_8036 Target: W_ORA_FLX_EBS_VALID_TAB_TMP (Instance Name: [W_ORA_FLX_EBS_VALID_TAB_TMP])
    WRT_8038 Inserted rows - Requested: 10080 Applied: 10080 Rejected: 0 Affected: 10080
    READER_1_1_1> BLKR_16019 Read [13892] rows, read [0] error rows for source table [FND_FLEX_VALIDATION_TABLES]
    instance name [FND_FLEX_VALIDATION_TABLES]
    READER_1_1_1> BLKR_16008 Reader run completed.
    WRITER_1_*_1> CMN_1761 Timestamp Event: [Fri May 28 23:04:21 2010]
    WRITER_1_*_1> WRT_8229 Database errors occurred:
ORA-01652: unable to extend temp segment by 128 in tablespace BIA_DW_TBS

Worked with the DBA; though the source tables were not huge, we extended the tablespace manually and the process completed once we re-ran.
    Thanks all.
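For anyone hitting the same ORA-01652, the fix is along the lines of the sketch below (the datafile path and sizes are placeholders, not from this thread; note the failing "temp segment" lives in the permanent tablespace BIA_DW_TBS because Oracle builds new segments as temporary during bulk loads and index builds):
-- Give the warehouse tablespace room to grow; adjust path and sizes to your site
ALTER TABLESPACE BIA_DW_TBS
  ADD DATAFILE '/u02/oradata/devbi/bia_dw_tbs02.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE UNLIMITED;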

  • Execution Plan Failed for Full Load

    Hi Team,
When I run the full load, the execution plan fails. I verified the log and found the below information from SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable:
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable.log] for session parameter:[$PMSessionLogFile].
    DIRECTOR> VAR_27028 Use override value [1] for mapping parameter:[mplt_SIL_InsertRowInRunTable.$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value [21950495] for mapping parameter:[MPLT_GET_ETL_PROC_WID.$$ETL_PROC_WID].
    DIRECTOR> TM_6014 Initializing session [SIL_InsertRowInRunTable] at [Mon Sep 26 15:53:45 2011].
    DIRECTOR> TM_6683 Repository Name: [infa_rep]
    DIRECTOR> TM_6684 Server Name: [infa_service]
    DIRECTOR> TM_6686 Folder: [SIL_Vert]
    DIRECTOR> TM_6685 Workflow: [SIL_InsertRowInRunTable] Run Instance Name: [] Run Id: [8]
    DIRECTOR> TM_6101 Mapping name: SIL_InsertRowInRunTable [version 1].
    DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
    DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
    DIRECTOR> TM_6827 [H:\Informatica901\server\infa_shared\Storage] will be used as storage directory for session [SIL_InsertRowInRunTable].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SIL_InsertRowInRunTable] is run by 32-bit Integration Service [node01_eblnhif-czc80685], version [9.0.1 HotFix2], build [1111].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6156 Using low precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6187 Session target-based commit interval is [10000].
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [Connect_to_OLAP], user [OBAW]
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> CMN_1022 Database driver error...
    CMN_1022 [
    Database driver error...
    Function Name : Logon
    ORA-12154: TNS:could not resolve service name
    Database driver error...
    Function Name : Connect
    Database Error: Failed to connect to database using user [OBAW] and connection string [Connect_to_OLAP].]
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> CMN_1076 ERROR creating database connection.
    MAPPING> DBG_21520 Transform : LKP_W_PARAM_G_Get_ETL_PROC_WID, connect string : Relational:DataWarehouse
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> TE_7017 Internal error. Failed to initialize transformation [MPLT_GET_ETL_PROC_WID.LKP_ETL_PROC_WID]. Contact Informatica Global Customer Support.
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> TM_6006 Error initializing DTM for session [SIL_InsertRowInRunTable].
    MANAGER> PETL_24005 Starting post-session tasks. : (Mon Sep 26 15:53:45 2011)
    MANAGER> PETL_24029 Post-session task completed successfully. : (Mon Sep 26 15:53:45 2011)
    MAPPING> TM_6018 The session completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [SQ_FILE_DUAL] (Instance Name: [SQ_FILE_DUAL])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SIL_InsertRowInRunTable] completed at [Mon Sep 26 15:53:46 2011].
I checked the physical data source connection in DAC and Workflow Manager, tested and verified it, but am still facing the same issue. I connected through the Oracle Merant ODBC driver.
Please let me know the solution for this error.
    Regards,
    VSR

    Hi,
Did you try using the Oracle 10g/11g drivers for the data source? If not, I think you should try that, but first ensure that from your DAC server box you are able to tnsping the OLAP database. Hope this helps.
If this is helpful, please mark it as helpful.
    Regards,
    BI Learner

  • Error while running ETL Execution plan in DAC(BI Financial Analytics)

    Hi All,
I have installed and configured BI Analytics and everything went well, but when I run the ETL in DAC to load the data from the source to the Analytics warehouse, the execution plan fails with the error "Error while creating Server connections Unable to ping repository server." Can anyone please help me resolve this? Here is the error description.
    Error message description:
    ETL Process Id : 4
    ETL Name : New_Tessco_Financials_Oracle R12
    Run Name : New_Tessco_Financials_Oracle R12: ETL Run - 2009-02-06 16:08:48.169
    DAC Server : oratestbi(oratestbi.tessco.com)
    DAC Port : 3141
    Status: Failed
    Log File Name: New_Tessco_Financials_Oracle_R12.4.log
    Database Connection(s) Used :
         DataWarehouse jdbc:oracle:thin:@oratestbi:1521:DEVBI
         ORA_R12 jdbc:oracle:thin:@oratestr12:1531:DEV
    Informatica Server(s) Used :
    Start Time: 2009-02-06 16:08:48.177
    Message: Error while creating Server connections Unable to ping repository server.
    Actual Start Time: 2009-02-06 16:08:48.177
    End Time: 2009-02-06 16:08:51.785
    Total Time Taken: 0 Minutes
    Thanks in Advance,
    Prashanth

I am facing a similar error. Can you please help me fix it?
Following is the log from the DAC server:
    31 SEVERE Fri Oct 16 17:22:18 EAT 2009
    START OF ETL
    32 SEVERE Fri Oct 16 17:22:21 EAT 2009 MESSAGE:::Unable to ping :'ebsczc9282brj', because '
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.1.1 SP5], build [135.0129], Windows 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Fri Oct 16 17:22:20 2009
    The command: [pingserver] is deprecated. Please use the command [pingservice] in the future.
    ERROR: Cannot connect to Integration Service [ebsczc9282brj:6006].
    Completed at Fri Oct 16 17:22:21 2009
    =====================================
    ERROR OUTPUT
    =====================================
    ' Make sure that the server is up and running.
    EXCEPTION CLASS::: com.siebel.etl.gui.core.MetaDataIllegalStateException
    com.siebel.etl.engine.bore.ServerTokenPool.populate(ServerTokenPool.java:231)
    com.siebel.etl.engine.core.ETL.thisETLProcess(ETL.java:225)
    com.siebel.etl.engine.core.ETL.run(ETL.java:604)
    com.siebel.etl.engine.core.ETL.execute(ETL.java:840)
    com.siebel.etl.etlmanager.EtlExecutionManager$1.executeEtlProcess(EtlExecutionManager.java:211)
    com.siebel.etl.etlmanager.EtlExecutionManager$1.run(EtlExecutionManager.java:165)
    java.lang.Thread.run(Thread.java:619)
    33 SEVERE Fri Oct 16 17:22:21 EAT 2009
    *     CLOSING THE CONNECTION POOL DataWarehouse
    34 SEVERE Fri Oct 16 17:22:21 EAT 2009
    *     CLOSING THE CONNECTION POOL SEBL_80
    35 SEVERE Fri Oct 16 17:22:21 EAT 2009
    END OF ETL
    --------------------------------------------
