DAC: Clearing Failed Execution Plans and BAW Tables

Hi all,
Thank you for taking the time to review this post.
Background
Oracle BI Applications 7.9.6 Financial Analytics
OLTP Source: E-Business Suite 11.5.10
Steps Taken
1. In DAC I have created a New Source Container based on Oracle 11.5.10
2. I have updated the parameters in the Source System parameters
3. Then I created a new Execution Plan as a copy of the Financials_Oracle 11.5.10 record and checked Full Load Always
4. Added new Financials Subject Areas so that they have the new Source System
5. Updated the Parameters tab with the new Source System and Generated the Parameters
6. Built the new Execution Plan - it fails
Confirmation for Rerun
I want to confirm that the correct steps to rerun an Execution Plan are as follows. I want to ensure that the warehouse (BAW) tables are truncated. I am experiencing duplicates in the W_GL_SEGMENT_D (and _DS) table even though there are no duplicates on the EBS side (a duplicate-check sketch follows the steps below).
In DAC under the EXECUTE window do the following:
- Navigate to the 'Current Run' tab.
- Highlight the failed execution plan.
- Right click and select 'Mark as completed.'
- Enter the numbers/text in the box.
Then:
- In the top toolbar select Tools --> ETL Management --> Reset Data Sources
- Enter the numbers/text in the box.
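For reference, a minimal duplicate-check sketch against the staging table (INTEGRATION_ID and DATASOURCE_NUM_ID are the standard OBIA staging-table keys; adjust the column list if your grain differs):

-- rows sharing the same staging key indicate true duplicates in the load
SELECT integration_id, datasource_num_id, COUNT(*)
FROM   w_gl_segment_ds
GROUP BY integration_id, datasource_num_id
HAVING COUNT(*) > 1;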
Your assistance is greatly appreciated.
Kind Regards,
Gary.

Hi HTH,
I can confirm that I do not have duplicates on the EBS side.
I got the SQL Statement by:
1. Opened Mapping SDE_ORA_GL_SegmentDimension in the SDE_ORA11510_Adaptor.
2. Reviewed the SQL Statement in the Source Qualifier SQ_FND_FLEX_VALUES.
3. Ran this SQL command against my EBS 11.5.10 source OLTP; the duplicates that are appearing in the W_GL_SEGMENT_DS table do not exist there.
SELECT
FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID,
FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_NAME,
FND_FLEX_VALUES.FLEX_VALUE,
MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
MAX(FND_FLEX_VALUES.LAST_UPDATE_DATE),
MAX(FND_FLEX_VALUES.LAST_UPDATED_BY),
MAX(FND_FLEX_VALUES.CREATION_DATE),
MAX(FND_FLEX_VALUES.CREATED_BY),
MAX(FND_FLEX_VALUES.START_DATE_ACTIVE),
MAX(FND_FLEX_VALUES.END_DATE_ACTIVE),
FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE LAST_UPDATE_DATE1
FROM
FND_FLEX_VALUES, FND_FLEX_VALUE_SETS, FND_FLEX_VALUES_TL
WHERE
FND_FLEX_VALUES.FLEX_VALUE_SET_ID = FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID AND FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND
FND_FLEX_VALUES_TL.LANGUAGE = 'US' AND
(FND_FLEX_VALUES.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS') OR
FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS'))
GROUP BY
FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID,
FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_NAME,
FND_FLEX_VALUES.FLEX_VALUE,
FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE

However, one thing I noticed is that I wanted to validate what value the parameter $$LAST_EXTRACT_DATE is being populated with.
My investigation took me along the following route:
Checked what was set up in the DAC (DAC Build AN 10.1.3.4.1.20090415.0146, Build date: April 15 2009):
1. Design View -> Source System Parameters -> $$LAST_EXTRACT_DATE = Runtime Variable = @DAC_SOURCE_PRUNED_REFRESH_TIMESTAMP (I haven't been able to track this variable down! See the repository query sketch after this list.)
2. Setup View -> DAC System Properties -> InformaticaParameterFileLocation -> $INFA_HOME/server/infa_shared/SrcFiles
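As an aside, a sketch of where that runtime variable's value comes from, assuming the standard DAC 10.1.3.4 repository layout (verify the table name in your install): the refresh dates that feed @DAC_SOURCE_PRUNED_REFRESH_TIMESTAMP are held in the repository table W_ETL_REFRESH_DT, which can be queried directly:

-- run against the DAC repository schema; a NULL or missing refresh date for
-- the source connection would explain an empty $$LAST_EXTRACT_DATE (full load)
SELECT *
FROM   w_etl_refresh_dt;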
Reviewing one of the log files for my failing Task:
$INFA_HOME/server/infa_shared/SrcFiles/ORA_11_5_10.DWRLY_OLAP.SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full.txt
I noticed that several variables near the bottom (including $$LAST_EXTRACT_DATE) have not been populated. This variable gets populated at runtime, but is there a log file that shows the value it gets populated with? I would also have expected a Substitution Variable in place of a static value.
[SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full]
$$ANALYSIS_END=01/01/2011 12:59:00
$$ANALYSIS_END_WID=20110101
$$ANALYSIS_START=12/31/1979 01:00:00
$$ANALYSIS_START_WID=19791231
$$COST_TIME_GRAIN=QUARTER
$$CURRENT_DATE=03/17/2010
$$CURRENT_DATE_IN_SQL_FORMAT=TO_DATE('2010-03-17', 'YYYY-MM-DD')
$$CURRENT_DATE_WID=20100317
$$DATASOURCE_NUM_ID=4
$$DEFAULT_LOC_RATE_TYPE=Corporate
$$DFLT_LANG=US
$$ETL_PROC_WID=21147868
$$FILTER_BY_SET_OF_BOOKS_ID='N'
$$FILTER_BY_SET_OF_BOOKS_TYPE='N'
$$GBL_CALENDAR_ID=WPG_Calendar~Month
$$GBL_DATASOURCE_NUM_ID=4
$$GLOBAL1_CURR_CODE=AUD
$$GLOBAL1_RATE_TYPE=Corporate
$$GLOBAL2_CURR_CODE=GBP
$$GLOBAL2_RATE_TYPE=Corporate
$$GLOBAL3_CURR_CODE=MYR
$$GLOBAL3_RATE_TYPE=Corporate
$$HI_DATE=TO_DATE('3714-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
$$HI_DT=01/01/3714 12:00:00
$$HR_ABSNC_EXTRACT_DATE=TO_DATE('1980-01-01 08:19:00', 'YYYY-MM-DD HH24:MI:SS')
$$HR_WRKFC_ADJ_SERVICE_DATE='N'
$$HR_WRKFC_EXTRACT_DATE=01/01/1970
$$HR_WRKFC_SNAPSHOT_DT=TO_DATE('2004-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
$$HR_WRKFC_SNAPSHOT_TO_WID=20100317
$$Hint1=
$$Hint_Tera_Post_Cast=
$$Hint_Tera_Pre_Cast=
$$INITIAL_EXTRACT_DATE=06/27/2009
$$INVPROD_CAT_SET_ID=27
$$INV_PROD_CAT_SET_ID10=
$$INV_PROD_CAT_SET_ID1=27
$$INV_PROD_CAT_SET_ID2=
$$INV_PROD_CAT_SET_ID3=
$$INV_PROD_CAT_SET_ID4=
$$INV_PROD_CAT_SET_ID5=
$$INV_PROD_CAT_SET_ID6=
$$INV_PROD_CAT_SET_ID7=
$$INV_PROD_CAT_SET_ID8=
$$INV_PROD_CAT_SET_ID9=
$$LANGUAGE=
$$LANGUAGE_CODE=E
$$LAST_EXTRACT_DATE=
$$LAST_EXTRACT_DATE_IN_SQL_FORMAT=
$$LAST_TARGET_EXTRACT_DATE_IN_SQL_FORMAT=
$$LOAD_DT=TO_DATE('2010-03-17 19:27:10', 'YYYY-MM-DD HH24:MI:SS')
$$LOW_DATE=TO_DATE('1899-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
$$LOW_DT=01/01/1899 00:00:00
$$MASTER_CODE_NOT_FOUND=
$$ORA_HI_DATE=TO_DATE('4712-12-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
$$PROD_CAT_SET_ID10=
$$PROD_CAT_SET_ID1=2
$$PROD_CAT_SET_ID2=
$$PROD_CAT_SET_ID3=
$$PROD_CAT_SET_ID4=
$$PROD_CAT_SET_ID5=
$$PROD_CAT_SET_ID6=
$$PROD_CAT_SET_ID7=
$$PROD_CAT_SET_ID8=
$$PROD_CAT_SET_ID9=
$$PROD_CAT_SET_ID=2
$$SET_OF_BOOKS_ID_LIST=1
$$SET_OF_BOOKS_TYPE_LIST='NONE'
$$SOURCE_CODE_NOT_SUPPLIED=
$$TENANT_ID=DEFAULT
$$WH_DATASOURCE_NUM_ID=999
$DBConnection_OLAP=DWRLY_OLAP
$DBConnection_OLTP=ORA_11_5_10
$PMSessionLogFile=ORA_11_5_10.DWRLY_OLAP.SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full.log

The following snippet was discovered in the DAC logs for the failed Task (log file: SDE_ORA_GLSegmentDimension_Full_DETAIL.log):
First error code [7004]
First error message: [TE_7004 Transformation Parse Warning [FLEX_VALUE_SET_ID || '~' || FLEX_VALUE]; transformation continues...]

Finally, I can confirm that the Truncate Table W_GL_LINKAGE_INFORMATION_GS step was successfully executed in Task SDE_ORA_GL_LinkageInformation_Extract.
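Since the mapping builds its key as FLEX_VALUE_SET_ID || '~' || FLEX_VALUE, here is a hedged sketch for checking that key for duplicates at source (tables and the 'US' language filter taken from the Source Qualifier SQL above):

SELECT fv.flex_value_set_id || '~' || fv.flex_value AS integration_id,
       COUNT(*)
FROM   fnd_flex_values fv,
       fnd_flex_values_tl tl
WHERE  fv.flex_value_id = tl.flex_value_id
AND    tl.language = 'US'
GROUP BY fv.flex_value_set_id || '~' || fv.flex_value
HAVING COUNT(*) > 1;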
Any further guidance greatly appreciated.
Kind Regards,
Gary.

Similar Messages

  • How to clear out failed Execution Plans?

    Hi everyone,
    This should be an easy thing but I can't seem to figure out how to do it.
    I created a custom execution plan and added every Financial Subject Area into the plan. Then I generated the parameters, clicked build and then ran it. The run failed.
    I noticed that there was a pre-configured execution plan, "Financials_Oracle R12", and decided that what I really wanted to run was Financials_Oracle R12 and not my custom plan. So I clicked Build for the Financials_Oracle R12 execution plan and, once that finished, clicked "Run Now". Upon clicking Run Now it told me that my custom execution plan had finished with a failed status and that until it completed successfully, no other plans could be run. I really don't want my custom plan to complete successfully; I want to run the other plan instead.
    How do I delete/clear out my failed run data so I can start a new execution plan?
    Thanks for the help!
    -Joe

    In DAC under the EXECUTE window do the following:
    - Navigate to the 'Current Run' tab.
    - Highlight the failed execution plan.
    - Right click and select 'Mark as completed.'
    - Enter the numbers/text in the box.
    Then:
    - In the top toolbar select Tools --> ETL Management --> Reset Data Sources
    - Enter the numbers/text in the box.
    Now you're ready to start the new execution plan.
    Hope this helps.
    - Austin

  • Same sqlID with different execution plan and Elapsed Time (s), Executions time

    Hello All,
    The AWR reports for two days show the same sqlID with different execution plans and different Elapsed Time (s) and Executions; please help me find out the reason for this change.
    Please find the detail below; on the 17th my processes are very slow compared to the 18th.
    17th Oct                                             18th Oct
    Elapsed Time (s)  Executions  SQL Id                 Elapsed Time (s)  Executions  SQL Id
    221,808,602       21          2tc2d3u52rppt          213,170,100       72,495,618  9c8wqzz7kyf37
    209,239,059       71,477,888  9c8wqzz7kyf37          139,331,777       1           7b0kzmf0pfpzn
    144,813,295       1           0cqc3bxxd1yqy          102,045,818       1           8vp1ap3af0ma5
    128,892,787       16,673,829  84cqfur5na6fg          89,485,065        1           5kk8nd3uzkw13
    127,467,250       16,642,939  1uz87xssm312g          67,520,695        8,058,820   a9n705a9gfb71
    104,490,582       12,443,376  a9n705a9gfb71          62,627,205        1           ctwjy8cs6vng2
    101,677,382       15,147,771  3p8q3q0scmr2k          57,965,892        268,353     akp7vwtyfmuas
    98,000,414        1           0ybdwg85v9v6m          57,519,802        53          1kn9bv63xvjtc
    87,293,909        1           5kk8nd3uzkw13          52,690,398        0           9btkg0axsk114
    77,786,274        74          1kn9bv63xvjtc          34,767,882        1,003       bdgma0tn8ajz9
    Not only are the queries different, but the number of blocks read by the top 10 queries is much higher on the 17th than on the 18th.
    The other big difference is the average read time on the two days.
    Tablespace IO Stats
    17th Oct
    Tablespace         Reads    Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes   Av Writes/s  Buffer Waits  Av Buf Wt(ms)
    INDUS_TRN_DATA01   947,766  59          4.24       4.86        185,084  11           2,887         6.42
    UNDOTBS2           517,609  32          4.27       1.00        112,070  7            108           11.85
    INDUS_MST_DATA01   288,994  18          8.63       8.38        52,541   3            23,490        7.45
    INDUS_TRN_INDX01   223,581  14          11.50      2.03        59,882   4            533           4.26
    TEMP               198,936  12          2.77       17.88       11,179   1            732           2.13
    INDUS_LOG_DATA01   45,838   3           4.81       14.36       348      0            1             0.00
    INDUS_TMP_DATA01   44,020   3           4.41       16.55       244      0            1,587         4.79
    SYSAUX             19,373   1           19.81      1.05        14,489   1            0             0.00
    INDUS_LOG_INDX01   17,559   1           4.75       1.96        2,837    0            2             0.00
    SYSTEM             7,881    0           12.15      1.04        1,361    0            109           7.71
    INDUS_TMP_INDX01   1,873    0           11.48      13.62       231      0            0             0.00
    INDUS_MST_INDX01   256      0           13.09      1.04        194      0            2             10.00
    UNDOTBS1           70       0           1.86       1.00        60       0            0             0.00
    STG_DATA01         63       0           1.27       1.00        60       0            0             0.00
    USERS              63       0           0.32       1.00        60       0            0             0.00
    INDUS_LOB_DATA01   62       0           0.32       1.00        60       0            0             0.00
    TS_AUDIT           62       0           0.48       1.00        60       0            0             0.00
    18th Oct
    Tablespace         Reads    Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes   Av Writes/s  Buffer Waits  Av Buf Wt(ms)
    INDUS_TRN_DATA01   980,283  91          1.40       4.74

    The AWR reports for two days show the same sqlID with different execution plans and different Elapsed Time (s) and Executions; please help me find out the reason for this change.
    Please find the detail below; on the 17th my processes are very slow compared to the 18th.
    You wrote that the execution plans differ; I think you have seen the plans. It is very difficult to get the old plan back.
    I think the execution plan does not change across different days unless you have added an index or made a similar change.
    What does the ADDM report say about this script?
    As you know, it is normal to see different elapsed times for the same statement on different days.
    It depends on your database workload.
    I think you should use the SQL Access Advisor and SQL Tuning Advisor for this script.
    They can give you a solution for the slow-running problem.
    Regards
    Mahir M. Quluzade
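    For reference, a minimal sketch of driving the SQL Tuning Advisor for one of the sql_ids listed above (requires the Tuning Pack license; '9c8wqzz7kyf37' is taken from the poster's list):

    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      -- create and run a tuning task for a cursor currently in the shared pool
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '9c8wqzz7kyf37');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
      DBMS_OUTPUT.PUT_LINE('task: ' || l_task);
    END;
    /
    -- then read the findings for the task name printed above:
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(task_name => '&task_name') FROM dual;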

  • How are execution plans created with tables of stale stats

    Hello
    I would like to ask the group
    1. How does Oracle handle the execution plan for table joins where some tables have stale stats?
    2. How would Oracle handle the execution plan where the table has a histogram but the stats are stale?
    Database version 11.1.0.7.0
    Thanks
    Arun

    ALTER SESSION SET EVENTS='10053 trace name context forever, level 1';
    By doing the above before executing the SQL, you can see what and how the CBO arrives at the actual execution plan.
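    A minimal usage sketch (the sample query is a placeholder; on 10g the trace file lands in user_dump_dest):

    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
    -- run the statement whose plan you want to see costed, e.g. (placeholder):
    SELECT COUNT(*) FROM hr.employees;
    ALTER SESSION SET EVENTS '10053 trace name context off';
    -- locate the optimizer trace file:
    SELECT value FROM v$parameter WHERE name = 'user_dump_dest';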

  • Long time creating execution plan on wide table

    When executing a select on a wide table with a column set and sparse columns, it always takes 12 seconds or longer to create the execution plan, even on an empty table. We used the database option 'SET PARAMETERIZATION FORCED' but it does not resolve the issue (a sketch of that option is shown below). We also tried indexed views and filtered indexes without success. A covering index resolves the problem, but this cannot be implemented as a general solution. We tested this on SQL Server 2008, 2008 R2, 2012 and 2014.
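    (For reference, a sketch of that database option in T-SQL; the database name is a placeholder:)

    ALTER DATABASE [MyDatabase] SET PARAMETERIZATION FORCED;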
    The following queries (with their actual execution plans included as a picture) all experience the problem:
    SELECT TOP (1)
      [ISN], [TIMESTAMP], [FIRMEN_NR], [SATZART], [TAB_SPERR_KZ], [DATUM_AEND], [UHRZEIT_AEND], [LFDNR],
      [BEARBEITER], [STEUERUNG], [UMSATZSCHLUESSEL], [US_W_S_KZ], [US_G_L_KZ], [VL_KZ], [RABATT_KZ],
      [INPUT_KZ], [ZAST_PFLICHTIGER_US], [BEWEG_SCHL1_1], [BEWEG_SCHL1_2], [BEWEG_SCHL1_3], [BEWEG_SCHL1_4],
      [BEWEG_SCHL1_5], [BEWEG_SCHL2_1], [BEWEG_SCHL2_2], [BEWEG_SCHL2_3], [BEWEG_SCHL2_4], [BEWEG_SCHL2_5],
      [WERT_KZ_1], [WERT_KZ_2], [WERT_KZ_3], [WERT_KZ_4], [WERT_KZ_5], [PREIS_KZ_1], [PREIS_KZ_2], [PREIS_KZ_3],
      [PREIS_KZ_4], [PREIS_KZ_5], [US_BEZ_1], [US_BEZ_2], [US_BEZ_3], [US_BEZ2_1], [US_BEZ2_2], [US_BEZ2_3],
      [GEGEN_KONTO], [EIN_AUSZAHLUNGS_KZ], [ABR_DRUCK_KZ], [UPFRONT_FEE], [LARG]
    FROM [IAM800_A]
    WHERE [IAM880_KEY1_N3] = 0x3033383830303532
    ORDER BY [ISN] ASC

    SELECT -- now without TOP (1)
      [ISN], [TIMESTAMP], [FIRMEN_NR], [SATZART], [TAB_SPERR_KZ], [DATUM_AEND], [UHRZEIT_AEND], [LFDNR],
      [BEARBEITER], [STEUERUNG], [UMSATZSCHLUESSEL], [US_W_S_KZ], [US_G_L_KZ], [VL_KZ], [RABATT_KZ],
      [INPUT_KZ], [ZAST_PFLICHTIGER_US], [BEWEG_SCHL1_1], [BEWEG_SCHL1_2], [BEWEG_SCHL1_3], [BEWEG_SCHL1_4],
      [BEWEG_SCHL1_5], [BEWEG_SCHL2_1], [BEWEG_SCHL2_2], [BEWEG_SCHL2_3], [BEWEG_SCHL2_4], [BEWEG_SCHL2_5],
      [WERT_KZ_1], [WERT_KZ_2], [WERT_KZ_3], [WERT_KZ_4], [WERT_KZ_5], [PREIS_KZ_1], [PREIS_KZ_2], [PREIS_KZ_3],
      [PREIS_KZ_4], [PREIS_KZ_5], [US_BEZ_1], [US_BEZ_2], [US_BEZ_3], [US_BEZ2_1], [US_BEZ2_2], [US_BEZ2_3],
      [GEGEN_KONTO], [EIN_AUSZAHLUNGS_KZ], [ABR_DRUCK_KZ], [UPFRONT_FEE], [LARG]
    FROM [IAM800_A]
    WHERE [IAM880_KEY1_N3] = 0x3033383830303532
    ORDER BY [ISN] ASC

    SELECT [ISN], [FIRMEN_NR]
    FROM [IAM800_A]
    WHERE [IAM880_KEY1_N3] = 0x3033383830303532
    ORDER BY [ISN] ASC
    Execution plans for above queries:

    Sorry, my mistake. As these definitions fit within the size limit of a reply, I am posting them directly.
    Do note the use of a filegroup.
    PRINT 'Adding indexes to table [dbo].[IAM800_A]...'
    GO
    SET ANSI_NULLS              ON
    SET ANSI_PADDING            ON
    SET ANSI_WARNINGS           ON
    SET ARITHABORT              ON
    SET CONCAT_NULL_YIELDS_NULL ON
    SET NUMERIC_ROUNDABORT      OFF
    SET QUOTED_IDENTIFIER       ON
    GO
    -- Primary Key (ISN)
    ALTER TABLE [dbo].[IAM800_A] ADD CONSTRAINT [IAM800_A:ISN]
      PRIMARY KEY CLUSTERED ( [ISN] )
      ON [FG001_Data]
    GO
    -- Non-Clustered Indexes (Descriptors)
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM505-KEY1]
      ON [dbo].[IAM800_A] ( [IAM505_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM810-KEY1]
      ON [dbo].[IAM800_A] ( [IAM810_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM820-KEY1]
      ON [dbo].[IAM800_A] ( [IAM820_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM830-KEY1]
      ON [dbo].[IAM800_A] ( [IAM830_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM835-KEY1]
      ON [dbo].[IAM800_A] ( [IAM835_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM840-KEY1]
      ON [dbo].[IAM800_A] ( [IAM840_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM841-KEY1-ALT]
      ON [dbo].[IAM800_A] ( [IAM841_KEY1_ALT], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM842-KEY1]
      ON [dbo].[IAM800_A] ( [IAM842_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM843-KEY1]
      ON [dbo].[IAM800_A] ( [IAM843_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM845-KEY1]
      ON [dbo].[IAM800_A] ( [IAM845_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM850-KEY1]
      ON [dbo].[IAM800_A] ( [IAM850_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM860-KEY1]
      ON [dbo].[IAM800_A] ( [IAM860_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM870-KEY1]
      ON [dbo].[IAM800_A] ( [IAM870_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM875-KEY1]
      ON [dbo].[IAM800_A] ( [IAM875_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM880-KEY1-N3]
      ON [dbo].[IAM800_A] ( [IAM880_KEY1_N3], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM885-KEY1]
      ON [dbo].[IAM800_A] ( [IAM885_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM899-KEY1]
      ON [dbo].[IAM800_A] ( [IAM899_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM899-KEY2]
      ON [dbo].[IAM800_A] ( [IAM899_KEY2], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM899-KEY3]
      ON [dbo].[IAM800_A] ( [IAM899_KEY3], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM800-KEY1]
      ON [dbo].[IAM800_A] ( [IAM800_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM841-KEY1]
      ON [dbo].[IAM800_A] ( [IAM841_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM886-KEY1]
      ON [dbo].[IAM800_A] ( [IAM886_KEY1], [ISN] )
      ON [FG002_Index]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM876-KEY1]
      ON [dbo].[IAM800_A] ( [IAM876_KEY1], [ISN] )
      ON [FG002_Index]
    GO

  • How to create execution plan in DAC and how to start ETL steps

    Hi,
    For ETL configuration I have installed and configured:
    1. Oracle 10g DB
    2. OBIEE and OBIA 7.9.6
    3. Informatica server (here I created repository and integration services)
    4. DAC server 10g (setup for DAC is also configured, e.g. create warehouse tables, etc.)
    The same installation was done on Windows for the client.
    Now I'm stuck at execution plan creation, and then:
    how do I start the ETL, and from where?
    Source : Oracle EBIZ R12.1.1
    target : Datawarehouse (10G DB)
    Please help me with the steps up to the ETL start.
    Thanks,
    Srik

    Hi Srik
    I am assuming you have followed the steps required before a load in the library for dac config, csv lookups etc...
    Did you check out the example in the documentation?
    Here is the link
    http://download.oracle.com/docs/cd/E14223_01/bia.796/e14217/windows_ic.htm#BABFGIHE
    If you follow those steps and run the execution plan, you can monitor the progress under the Current Run tab in the DAC and in the Informatica Workflow Monitor.
    Regards
    Nick

  • Error in DAC 7.9.4 while building the execution plan

    I'm getting Java exception EXCEPTION CLASS::: java.lang.NullPointerException while building the execution plan. The parameters are properly generated.
    Earlier we used to get the error - No physical database mapping for the logical source was found for :DBConnection_OLAP as used in QUERY_INDEX_CREATION(DBConnection_OLAP->DBConnection_OLAP)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    We resolved this issue by using the built-in connection parameter, i.e. DBConnection_OLAP. This connection parameter has to be used because the execution plan cannot be built without an OLAP connection.
    We are not using the 7.9.4 OLAP data model since we have a highly customized 7.8.3 OLAP model. We have imported the 7.8.3 tables into DAC.
    We have created all the tasks with the synchronization method, and created the task group and subject area. We are using the built-in DBConnection_OLAP and DBConnection_OLTP parameters and pointed them to the relevant databases.
    System set up:
    OBI DAC server - Windows server
    Informatica server and repository server 7.1.4 - installed on the local machine, with PATH variables provided.
    Is this problem due to the different versions, i.e. we are using OBI DAC 7.9.4 and the underlying data model is 7.8.3?
    Please help,
    Thanks and regards,
    Ashish

    Hi,
    Can anyone help me here, as I am stuck with the following issue?
    I have created a command task in a workflow at Informatica that executes a Unix script to purge the cache at OBIEE. I want that workflow to be added as a task in DAC to an already existing plan, and it should run last whenever the incremental load happens.
    I created a Task in DAC with the name of the workflow, WF_AUTO_PURGE, and added it as a following task in Execution mode. The problem is that when I try to build the plan after adding the task, I get the following error:
    MESSAGE:::Error while loading pre post steps for Execution Plan. CompleteLoad_withDeleteNo physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE (DBConnection_INFA->DBConnection_INFA)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1317)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    ::: CAUSE :::
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE(DBConnection_INFA->DBConnection_INFA)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.substitute(ExecutionParameterHelper.java:208)
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.parameterizeTask(ExecutionParameterHelper.java:139)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.handlePrePostTasks(ExecutionPlanDesigner.java:949)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:790)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    Regards,
    Arul

  • BI Apps DAC error while building execution plan

    While building execution plan in DAC i am getting following error.
    C_MICRO_INCR_LOAD_V1
    MESSAGE:::group TASK_GROUP_Past_Due_Cost for PLP_ARSnapshotInvoiceAging is not found!!!
    EXCEPTION CLASS::: java.lang.NullPointerException
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:818)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
    com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

    Hi,
    Go To Views -> Design -> Subject Areas tab and select your Subject Area.
    Upon selecting the subject area, in the lower pane you will find the Tasks tab. Click on the Tasks tab, and an Add/Remove button will appear.
    Click on the Add/Remove button; a dialog box will be shown. In it, click on the Query button, enter the task group name "TASK_GROUP_Past_Due_Cost" and click on the Go button.
    Once that task group appears, click on the Add button and then on the Save button.
    This will add that particular task group to your Subject Area. Once these steps are done, build the execution plan and start the DAC load.
    Hope this helps....
    Thanks,
    Navin Kumar Bolla

  • DAC Execution plan build error - for multi-sources

    We are implementing BI Apps 7.9.6.1 Financial & HR Analytics. We have multiple sources (PeopleSoft (Oracle) 8.9 & 9.0, DB2, flat file, ...) and built 4 containers, one for each source. We can build the execution plan successfully, however we get the below error when trying to reset the sources. This message is very sporadic. We used a workaround of running build and reset sources multiple times. This workaround seems to work when we have 3 containers in the execution plan, but it is not working for 4 containers. Has anybody come across this issue?
    Thank you in advance for your help.
    DAC ver 10.1.3.4.1 patch .20100105.0812 Build date: Jan 5 2010
    ANOMALY INFO::: Failure
    MESSAGE:::com.siebel.analytics.etl.execution.NoSuchDatabaseException: No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
    com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:62)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    ::: CAUSE :::
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.substituteNodeTables(ExecutionParameterHelper.java:176)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.retrieveExecutionPlanTasks(ExecutionPlanDesigner.java:420)
    com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:60)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

    Hi, in reference to this message:
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    1. I notice that you are using the custom DAC logical name DBConnection_OLTP_ELM.
    2. When you Generate Parameters before building the execution plan, can you please verify for which Source System container you are using the logical name DBConnection_OLTP_ELM as a source, and what value is assigned to it?
    3. Are you building the Execution Plan with Subject Areas from 4 containers? Did you Generate Parameters before building the Execution Plan?
    4. Also verify, at the DAC Task level for the 4th container, what the Primary Source value is for all the Tasks (TASK_GROUP_Extract_CodeDimension).

  • How do I create custom execution plan in DAC

    Hi,
    I need to create a custom execution plan in DAC to load data into a set of tables. The schema we are building has 2 custom dim tables and 8 OOTB tables (fact and dim). So I need to copy an existing OOTB execution plan, remove unwanted tables, tasks etc. and rebuild, but I don't know the exact process to do that. Could you please provide step-by-step details?
    We are using OBIEE 10.1.3.4.1, BI Apps 7.9.6.1 and DAC client is on windows XP.
    Appreciate your help!
    Thanks
    Jay.

    Hi,
    I created a new container which is a copy of Siebel 8.1.1 and then created a new subject area; in the Tables child tab I added one fact table, then clicked on Assemble and clicked on the Save button. I got the message saying it was successfully assembled. I checked the tasks and related dim tables and can't see any for this new subject area. Another strange thing is that I navigated to execution plans and again clicked on the Design tab, and now I can't see the new subject area. When I try to add it again it says a subject area with that name is already available, but I can't see it there. I did this twice but was not able to fix the problem.
    Appreciate your help!
    Thanks
    Jay.

  • Query text and execution plan collection from prod DB Oracle 11g

    Hi all,
    I would like to collect query text, query execution plans and other statistics for all queries (mainly select queries) from a production database.
    I am doing this in OEM by clicking the Top Activity link under the Performance tab, but this only gives recent top SQL.
    This approach is helpful only when I need to debug recent queries. If I need to know the slow-running queries and their execution plans at the end of the day, or some time later, it is not helpful.
    Anybody who has a better idea of how to do this - your help will be really appreciated. (A sketch of one possible approach follows.)
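    For reference, a hedged sketch against the AWR history views (standard DBA_HIST views; Diagnostics Pack licensing applies) that lists the top statements by elapsed time over the retained snapshots and then pulls the text and historical plan for one sql_id:

    -- top SQL by total elapsed time across the retained snapshots
    SELECT   s.sql_id,
             s.plan_hash_value,
             SUM(s.elapsed_time_delta) / 1e6 AS elapsed_secs,
             SUM(s.executions_delta)         AS execs
    FROM     dba_hist_sqlstat s
    GROUP BY s.sql_id, s.plan_hash_value
    ORDER BY elapsed_secs DESC;

    -- full text and historical execution plan for a chosen statement
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));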

    We did the following:
    1. Used awrextr.sql to export a dmp file from the production database (snapshot ids 331 to 560).
    2. Transferred the file to the test database.
    3. Used awrload.sql to import it into the test database.
    But when we used OEM and went to the Automatic Workload Repository link under the Server tab,
    it is not showing snapshots of the production database (which we imported into the test database),
    and shows only the snapshots that were already there in the test database.
    We did not find any error in the import/export.
    Do we need to do something else to display the production database's snapshots in the test database?

  • How to correct an execution plan that shows the wrong number of rows?

    Using Oracle 10gR2 RAC (10.2.0.3) on SUSE Linux 9 (x86_64).
    I have a partitioned table that has 5 million rows (5,597,831). However, an execution plan against the table shows that the table has 10 million rows.
    Execution plan:
    SELECT STATEMENT ALL_ROWS Cost : 275,751 Bytes : 443 Cardinality : 1
    3 HASH GROUP BY Cost : 275,751 Bytes : 443 Cardinality : 1
         2 PARTITION RANGE ALL Cost : 275,018 Bytes : 4,430,000,000 Cardinality : *10,000,000* Partition # : 2 Partitions accessed #1 - #6
              1 TABLE ACCESS FULL TRACESALES.TRACE_BUSINESS_AREA Cost : 275,018 Bytes : 4,430,000,000 Cardinality : 10,000,000 Partition # : 2 Partitions accessed #1 - #6
    Plan hash value: 322783426
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 443 | 275K (2)| 00:55:10 | | |
    | 1 | HASH GROUP BY | | 1 | 443 | 275K (2)| 00:55:10 | | |
    | 2 | PARTITION RANGE ALL| | 10M| 4224M| 275K (2)| 00:55:01 | 1 | 6 |
    | 3 | TABLE ACCESS FULL | TRACE_BUSINESS_AREA | 10M| 4224M| 275K (2)| 00:55:01 | 1 | 6 |
    How does one correct the explain plan?
    The problem: queries against the table are taking hours to complete. The problem started when the table was dropped and then recreated with a new partition.
    I have done the same drop and creation against several tables for several years without problems until now.
    I have done the following: analyzed statistics against the table, flushed the buffer cache, and created a materialized view.
    However, users' queries are taking several hours to complete, whereas before the addition of the partition the queries were taking 5 minutes to complete.
    Thanks. BL.
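    For reference, a hedged sketch of regathering optimizer statistics at both global and partition level after the partition maintenance (owner and table name from the post; parameter values are common choices, not a prescription):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'TRACESALES',
        tabname          => 'TRACE_BUSINESS_AREA',
        granularity      => 'ALL',   -- global and partition-level statistics
        cascade          => TRUE,    -- include the indexes
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /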

    Yes, a complete analysis of statistics was done on the indexes and against the partitions.
    Table creation statement:
    CREATE TABLE TRACESALES.TRACE_BUSINESS_AREA
    (
      ... (400 columns)
    )
    TABLESPACE "trace_OLAPTS"
    PCTUSED 0 PCTFREE 15 INITRANS 1 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
             PCTINCREASE 0 BUFFER_POOL KEEP)
    PARTITION BY RANGE (YEAR)
    (
      PARTITION TRACE_06 VALUES LESS THAN ('2007')
        NOLOGGING NOCOMPRESS TABLESPACE TRACE_2006
        PCTFREE 15 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
                 PCTINCREASE 0 BUFFER_POOL DEFAULT),
      PARTITION TRACE_07 VALUES LESS THAN ('2008')
        NOLOGGING NOCOMPRESS TABLESPACE TRACE_2007
        PCTFREE 15 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
                 PCTINCREASE 0 BUFFER_POOL DEFAULT),
      PARTITION TRACE_08 VALUES LESS THAN ('2009')
        NOLOGGING NOCOMPRESS TABLESPACE TRACE_2008
        PCTFREE 15 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
                 PCTINCREASE 0 BUFFER_POOL DEFAULT),
      PARTITION TRACE_09 VALUES LESS THAN ('2010')
        NOLOGGING NOCOMPRESS TABLESPACE TRACE_2009
        PCTFREE 15 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
                 PCTINCREASE 0 BUFFER_POOL DEFAULT),
      PARTITION TRACE_10 VALUES LESS THAN ('2011')
        NOLOGGING NOCOMPRESS TABLESPACE TRACE_2010
        PCTFREE 15 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
                 PCTINCREASE 0 BUFFER_POOL DEFAULT),
      PARTITION TRACE_11 VALUES LESS THAN (MAXVALUE)
        NOLOGGING NOCOMPRESS TABLESPACE TRACE_2011
        PCTFREE 15 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED
                 PCTINCREASE 0 BUFFER_POOL DEFAULT)
    )
    NOCOMPRESS CACHE
    PARALLEL (DEGREE DEFAULT INSTANCES DEFAULT)
    MONITORING;
    (index statements, constraints, triggers and security omitted)
    Table caching is on and running in parallel degree 4 instances 1.

  • Transaction execution time and block size

    Hi,
    I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    2K Oracle block database had 3 datafiles, each 7GB.
    4K Oracle block database had 2 datafiles, each 10GB.
    8K Oracle block database had 1 datafile of 20GB.
    The best transaction execution time was on the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction time.
    I identified a SQL query (when using the 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was slowly executed (the number of executions was low compared to the 8K numbers).
    Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about being a DBA, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking inside the AWR statistics.
    Thanks to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    "My system has ONE hard drive."
    A single drive does make it a little too easy for apparently random variation in performance.
    "Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up."
    Did you do anything to ensure that the physical location of the data files was a very close match across databases - inner tracks vs. outer tracks could make a difference.
    "(each time the SGA and PGA parameters were identical)."
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    "2K: 3 datafiles of 7GB; 4K: 2 datafiles of 10GB; 8K: 1 datafile of 20GB."
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    "The best transaction execution time was on the 8K block ... the 2K block definitely had the worst transaction time."
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation - and then it has to be consistent and reproducible.
    "I identified a SQL query that was creating hot segments on the E_TRANSACTION table..."
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table - if not then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
    "Is it possible that multiple datafiles are the reason for these slow transaction times?"
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
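    For reference, a minimal sketch of the extended tracing suggested above (the sid/serial# values and the username are placeholders; look them up in v$session first):

    -- identify the test session (username is a placeholder)
    SELECT sid, serial# FROM v$session WHERE username = 'TPCE_USER';

    -- enable 10046-style tracing with wait events for that session
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

    -- ... run the slow transactions, then:
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);

    -- summarize the resulting trace file from user_dump_dest, e.g.:
    -- tkprof <tracefile>.trc out.txt sys=no sort=exeela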

  • Help in TKPROF Output: Row Source Operation v.s Execution plan confusing

    Hello,
    Working with Oracle 10g on Windows, and trying to understand from TKPROF what the purpose of the "Row Source Operation" section is.
    From the "Row Source Operation" section, the PMS_ROOM table shows 16 rows selected, accessed by a FULL scan, yet the following script gives another value.
    select count(*) from pms_folio gives the following:
    COUNT(*)
    148184
    But in the execution plan section, the PMS_FOLIO table is accessed by ROWID after an index scan on the JIE_BITMAP_CONVERSION index.
    What does "Row Source Operation" really mean compared to the execution plan, and how should the two be read together to know whether the optimizer is making a wrong estimation?
    Furthermore, reading 13,594 buffers to fetch 2 rows shows the SQL script itself is not efficient; the elapsed time is roughly 0.7 seconds, but shrinking the number of buffers read should probably shrink the response time.
    The following TKPROF output.
    Thanks very much for your help
    SELECT NVL(SUM(NVL(t1.TOTAL_GUESTS, 0)), 0)
    FROM DEV.PMS_FOLIO t1
    WHERE (t1.FOLIO_STATUS <> 'CANCEL'
    AND t1.ARRIVAL_DATE <= TO_DATE(:1, 'SYYYY/MMDDHH24MISS')
    AND t1.DEPARTURE_DATE > TO_DATE(:1, 'SYYYY/MMDDHH24MISS')
    AND t1.PRIMARY_OR_SHARE = 'P' AND t1.IS_HOUSE = 'N')
    call     count    cpu  elapsed  disk  query  current  rows
    Parse        1   0.00     0.00     0      0        0     0
    Execute      2   0.00     0.00     0      0        0     0
    Fetch        2   0.12     0.12     0  13594        0     2
    total        5   0.12     0.12     0  13594        0     2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 82 (PMS5000)
    Rows Row Source Operation
    2 SORT AGGREGATE (cr=13594 pr=0 pw=0 time=120165 us)
    16 TABLE ACCESS FULL PMS_FOLIO (cr=13594 pr=0 pw=0 time=121338 us)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    2 SORT (AGGREGATE)
    16 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PMS_FOLIO'
    (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF
    'JIE_BITMAP_CONVERSION' (INDEX)

    Your query is using bind variables. Explain Plan doesn't work exactly the same way -- it can't handle bind variables.
    See http://www.oracle.com/technology/oramag/oracle/08-jan/o18asktom.html
    In your output, the row source operations listing is the real execution.
    The explain plan listing may well be misleading as Oracle uses cardinality estimates when trying to explain with bind variables.
    Also, it seems that your plan table may be a 9i version, not the 10g PLAN_TABLE created by catplan.sql. There are additional columns in the 10g PLAN_TABLE that explain plan uses well.
    BTW, you start off with a mention of "PMS_ROOM" showing 16 rows, but it doesn't appear in the data you have presented.
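    For reference, a hedged sketch of pulling the actual plan from memory with DBMS_XPLAN.DISPLAY_CURSOR (10g+); run it in the same session immediately after the statement, and note that row-level statistics need STATISTICS_LEVEL=ALL or the GATHER_PLAN_STATISTICS hint:

    -- re-run the posted statement with the hint (binds supplied as in the application)
    SELECT /*+ GATHER_PLAN_STATISTICS */ NVL(SUM(NVL(t1.TOTAL_GUESTS, 0)), 0)
    FROM DEV.PMS_FOLIO t1
    WHERE t1.FOLIO_STATUS <> 'CANCEL'
      AND t1.ARRIVAL_DATE <= TO_DATE(:1, 'SYYYY/MMDDHH24MISS')
      AND t1.DEPARTURE_DATE > TO_DATE(:1, 'SYYYY/MMDDHH24MISS')
      AND t1.PRIMARY_OR_SHARE = 'P' AND t1.IS_HOUSE = 'N';

    -- then, in the same session:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));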

  • Two different HASH GROUP BY in execution plan

    Hi ALL;
    Oracle version
    select *From v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL:
    select company_code, account_number, transaction_id,
    decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
    (last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date) ) age,sum(z.amount)
    from
    (
         select /*+ PARALLEL(use, 2) */    company_code,substr(account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,use.amount,use.accounting_date
         from financials.unbalanced_subledger_entries use
         where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
         and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
    UNION ALL
         select /*+ PARALLEL(se, 2) */  company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
         from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
         where se.account_number = an.account_number
         and an.subledger_type in ('C', 'AC')
    ) z
    group by company_code,account_number,transaction_id,decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
    having abs(sum(z.amount)) >= 0.01

    Explain plan:
    Plan hash value: 1993777817
    | Id  | Operation                      | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT               |                              |       |       | 76718 (100)|          |        |      |            |
    |   1 |  PX COORDINATOR                |                              |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)          | :TQ10002                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | P->S | QC (RAND)  |
    |*  3 |    FILTER                      |                              |       |       |            |          |  Q1,02 | PCWC |            |
    |   4 |     HASH GROUP BY              |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH             | :TQ10001                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | P->P | HASH       |
    |   7 |        HASH GROUP BY           |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | PCWP |            |
    |   8 |         VIEW                   |                              |    15M|  2055M| 76116   (1)| 00:15:14 |  Q1,01 | PCWP |            |
    |   9 |          UNION-ALL             |                              |       |       |            |          |  Q1,01 | PCWP |            |
    |  10 |           PX BLOCK ITERATOR    |                              |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWC |            |
    |* 11 |            TABLE ACCESS FULL   | UNBALANCED_SUBLEDGER_ENTRIES |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWP |            |
    |* 12 |           HASH JOIN            |                              |    15M|   928M| 74270   (1)| 00:14:52 |  Q1,01 | PCWP |            |
    |  13 |            BUFFER SORT         |                              |       |       |            |          |  Q1,01 | PCWC |            |
    |  14 |             PX RECEIVE         |                              |    21 |   210 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  15 |              PX SEND BROADCAST | :TQ10000                     |    21 |   210 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |* 16 |               TABLE ACCESS FULL| ACCOUNT_NUMBERS              |    21 |   210 |     2   (0)| 00:00:01 |        |      |            |
    |  17 |            PX BLOCK ITERATOR   |                              |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWC |            |
    |* 18 |             TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES    |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=.01)
      11 - access(:Z>=:Z AND :Z<=:Z)
           filter(("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "USE"."ACCOUNTING_DATE">=TO_DATE(' 2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      12 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
      16 - filter(("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C'))
    18 - access(:Z>=:Z AND :Z<=:Z)

    I have a few doubts regarding this execution plan and I am sure my questions will get answered here.
    Q-1: Why am I getting two different HASH GROUP BY operations (operation ids 4 & 7) even though there is only a single GROUP BY clause? Is that due to the UNION ALL operator merging two different row sources, with HASH GROUP BY applied to each of them individually?
    Q-2: What does 'BUFFER SORT' (operation id 13) indicate? Sometimes I get this operation and sometimes I don't. For some other queries, I have observed around 10GB of TEMP space and high cost against this operation. So I am just curious whether it is really helpful; if not, how do I avoid it?
    Q-3: Under the PREDICATE section, what does step 18 suggest? I am not using any filter like access(:Z>=:Z AND :Z<=:Z).

    aychin wrote:
    Hi,
    About BUFFER SORT, first of all it is not specific to Parallel Executions. This step in the plan indicates that internal sorting takes place. It doesn't mean that rows will be returned sorted; in other words it doesn't guarantee that rows will be sorted in the resulting row set, because that is not the main purpose of this operation.

    I've previously suggested that the "buffer sort" should really simply say "buffering", but that it hijacks the buffering mechanism of sorting and therefore gets reported completely spuriously as a sort (see http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/ ).
    In this case, I think the buffer sort may be a consequence of the broadcast distribution - and tells us that the entire broadcast is being buffered before the hash join starts. It's interesting to note that in the more recent of the two plans with a buffer sort, the second (probe) table in the hash join seems to be accessed first and broadcast before the first table is scanned to allow the join to occur.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
