Disabled execution plan still being picked up

I have already disabled the bad execution plan in the SQL plan baselines (ENABLED = NO), but it is still being picked up by the SQL statement. Why is that so?

Hi Ramin,
These are the results of show parameter optimizer
SQL> show parameter optimizer;
NAME                                 TYPE        VALUE
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_dynamic_sampling           integer     4
optimizer_features_enable            string      11.2.0.2
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
optimizer_use_invisible_indexes      boolean     FALSE
optimizer_use_pending_statistics     boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE
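
For reference, the state the optimizer sees for a plan baseline can be checked, and a plan disabled or re-enabled, along these lines (a sketch only; the sql_handle and plan_name values are placeholders to be taken from DBA_SQL_PLAN_BASELINES):

select sql_handle, plan_name, enabled, accepted, fixed
from   dba_sql_plan_baselines
where  sql_text like '%<distinctive text from the statement>%';

declare
  n pls_integer;
begin
  -- Flip the ENABLED attribute of one named plan; the function returns the number of plans altered.
  n := dbms_spm.alter_sql_plan_baseline(
         sql_handle      => 'SQL_0123456789abcdef',        -- placeholder
         plan_name       => 'SQL_PLAN_0123456789abcdef',   -- placeholder
         attribute_name  => 'enabled',
         attribute_value => 'NO');
end;
/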

Similar Messages

  • Reinstalled OS, but disk space is still being used by old files?

    I tried reinstalling the OS after my MacBook would freeze when I tried to log in. I first tried to archive and install, and then it said it couldn't complete that. I then put in the 2nd OS disc and it created a new user profile, and my computer is working fine. The only thing is that a few aliases are still being picked up from the old users, and there is only 60 GB of space left on the hard drive. None of the information in the old user files is really important, so I can do without it, but I would like all my hard drive space back.

    If you don't need the old user, go to System Preferences -> Accounts and delete the old user. Also, check to see if you have a Previous Systems folder at the top level of the hard drive. Delete that too and empty the Trash.

  • CBO not picking the right execution plan

    Database: Oracle 9.2.0.6 EE
    OS:Solaris 9
    I am trying to tune a query that is generated via Siebel Analytics. I am seeing a behaviour which is puzzling me but hopefully would be 'elementary' for someone like JPL.
    The query is based on a total of 7 tables. If I comment out any 2 dimension tables, the query picks up the right index on the fact table. However, the moment I add another table to the query, the plan goes awry.
    The query with 5 tables is as below:
    select count(distinct decode( T30256.HEADER_FLG , 'N' , T30256.ROW_WID ) ) as c1,
    T352305.DAY_DT as c2,
    case  when T44643.PRODUCT_CLASS_NAME = 'MobileSubscription' then T40081.ATTR15_CHAR_VAL else 'Unspecified' end  as c3,
    T352305.ROW_WID as c5
    from
                   W_PRODUCT_D T30955,
                   W_PRDATTRNM_D T44643,                         
                   W_DAY_D T352305,                 
                   W_ORDERITEM_F T30256,              
                   W_PRDATTR_D T40081                         
    where  ( T30955.ROW_WID = T44643.ROW_WID
    and T30256.LAST_UPD_DT_WID = T352305.ROW_WID
    and T30256.PROD_ATTRIB_WID = T40081.ROW_WID 
    and T30256.PROD_WID = T30955.ROW_WID
    and T30955.PROD_NAME = 'Mobile Subscription'
    and (case  when T44643.PRODUCT_CLASS_NAME = 'MobileSubscription' then T40081.ATTR15_CHAR_VAL else 'Unspecified' end  in ('BT150BB-18M', 'BT250BB-18M', 'BT50BB-18M', 'BT600BB-18M'))
    and T352305.DAY_DT between TO_DATE('2008-09-27' , 'YYYY-MM-DD') - 7 and TO_DATE('2008-09-27' , 'YYYY-MM-DD') - 1 )
    group by
    T352305.ROW_WID, T352305.DAY_DT,
    case  when T44643.PRODUCT_CLASS_NAME = 'MobileSubscription' then T40081.ATTR15_CHAR_VAL else 'Unspecified' end
    ;
    And the execution plan is as below:
    | Id  | Operation                        |  Name                | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                 |                      |   269 | 25824 | 18660   (3)|
    |   1 |  SORT GROUP BY                   |                      |   269 | 25824 | 18660   (3)|
    |   2 |   NESTED LOOPS                   |                      |   269 | 25824 | 18658   (3)|
    |   3 |    NESTED LOOPS                  |                      |  6826 |   579K|  4734   (3)|
    |   4 |     MERGE JOIN CARTESIAN         |                      |     8 |   544 |     6  (17)|
    |   5 |      NESTED LOOPS                |                      |     1 |    54 |     4  (25)|
    |   6 |       TABLE ACCESS BY INDEX ROWID| W_PRODUCT_D          |     1 |    37 |     3  (34)|
    |*  7 |        INDEX RANGE SCAN          | W_PRODUCT_D_M2       |     1 |       |     2  (50)|
    |   8 |       TABLE ACCESS BY INDEX ROWID| W_PRDATTRNM_D        |     1 |    17 |     2  (50)|
    |*  9 |        INDEX UNIQUE SCAN         | W_PRDATTRNM_D_P1     |     1 |       |            |
    |  10 |      BUFFER SORT                 |                      |     8 |   112 |     4   (0)|
    |  11 |       TABLE ACCESS BY INDEX ROWID| W_DAY_D              |     8 |   112 |     3  (34)|
    |* 12 |        INDEX RANGE SCAN          | W_DAY_D_M39          |     8 |       |     2  (50)|
    |  13 |     TABLE ACCESS BY INDEX ROWID  | W_ORDERITEM_F        |   849 | 16131 |   592   (3)|
    |* 14 |      INDEX RANGE SCAN            | W_ORDERITEM_F_INDX9  |   852 |       |     4  (25)|
    |* 15 |    INDEX RANGE SCAN              | W_PRDATTR_D_M29_T1   |     1 |     9 |     3  (34)|
    ----------------------------------------------------------------------------------------------
    Note how the dimension tables W_PRODUCT_D & W_DAY_D are joined using cartesian join before joining to the fact table W_ORDERITEM_F using the composite index 'W_ORDERITEM_F_INDX9'. This index consists of LAST_UPD_DT_WID, PROD_WID and ACTION_TYPE_WID, which are foreign keys to the dimension tables.
    Now if I add one more table to the query:
    select count(distinct decode( T30256.HEADER_FLG , 'N' , T30256.ROW_WID ) ) as c1,
                  T352305.DAY_DT as c2,
                   case  when T44643.PRODUCT_CLASS_NAME = 'MobileSubscription' then T40081.ATTR15_CHAR_VAL else 'Unspecified' end  as c3,
                   T30371.X_BT_DLR_GROUP as c4,
                   T352305.ROW_WID as c5
              from                W_PRODUCT_D T30955,
                   W_PRDATTRNM_D T44643,                         
                   W_DAY_D T352305,                 
                   W_ORDERITEM_F T30256,              
                   W_ORDER_D T30371,                                            
                   W_PRDATTR_D T40081                         
              where  ( T30955.ROW_WID = T44643.ROW_WID
              and T30256.LAST_UPD_DT_WID = T352305.ROW_WID
              and T30256.PROD_ATTRIB_WID = T40081.ROW_WID
              and T30256.PROD_WID = T30955.ROW_WID
              and T30256.ORDER_WID = T30371.ROW_WID
              and T30955.PROD_NAME = 'Mobile Subscription'
              and T30371.STATUS_CD = 'Complete'
              and T30371.ORDER_TYPE = 'Sales Order' 
              and (case  when T44643.PRODUCT_CLASS_NAME = 'MobileSubscription' then T40081.ATTR15_CHAR_VAL else 'Unspecified' end  in ('BT150BB-18M', 'BT250BB-18M', 'BT50BB-18M', 'BT600BB-18M'))
          and T352305.DAY_DT between TO_DATE('2008-09-27' , 'YYYY-MM-DD') - 7 and TO_DATE('2008-09-27' , 'YYYY-MM-DD') - 1 )
              group by T30371.X_BT_DLR_GROUP, T352305.ROW_WID, T352305.DAY_DT,
          case  when T44643.PRODUCT_CLASS_NAME = 'MobileSubscription' then T40081.ATTR15_CHAR_VAL else 'Unspecified' end;
    I have added a single table W_ORDER_D to the query, and the execution plan is:
    | Id  | Operation                          |  Name               | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                   |                     |    44 |  6336 | 78695   (3)|
    |   1 |  SORT GROUP BY                     |                     |    44 |  6336 | 78695   (3)|
    |   2 |   NESTED LOOPS                     |                     |    44 |  6336 | 78694   (3)|
    |   3 |    NESTED LOOPS                    |                     |   269 | 27707 | 78145   (3)|
    |*  4 |     HASH JOIN                      |                     |  6826 |   626K| 64221   (3)|
    |   5 |      TABLE ACCESS BY INDEX ROWID   | W_DAY_D             |     8 |   112 |     4  (25)|
    |*  6 |       INDEX RANGE SCAN             | W_DAY_D_M39         |     1 |       |     3  (34)|
    |   7 |      TABLE ACCESS BY INDEX ROWID   | W_ORDERITEM_F       | 86886 |  2206K| 64197   (3)|
    |   8 |       NESTED LOOPS                 |                     | 87004 |  6797K| 64200   (3)|
    |   9 |        NESTED LOOPS                |                     |     1 |    54 |     4  (25)|
    |  10 |         TABLE ACCESS BY INDEX ROWID| W_PRODUCT_D         |     1 |    37 |     3  (34)|
    |* 11 |          INDEX RANGE SCAN          | W_PRODUCT_D_M2      |     1 |       |     2  (50)|
    |  12 |         TABLE ACCESS BY INDEX ROWID| W_PRDATTRNM_D       |     1 |    17 |     2  (50)|
    |* 13 |          INDEX UNIQUE SCAN         | W_PRDATTRNM_D_P1    |     1 |       |            |
    |* 14 |        INDEX RANGE SCAN            | W_ORDERITEM_F_N6    | 86886 |       |   212  (18)|
    |* 15 |     INDEX RANGE SCAN               | W_PRDATTR_D_M29_T1  |     1 |     9 |     3  (34)|
    |* 16 |    INDEX RANGE SCAN                | W_ORDER_D_N6        |     1 |    41 |     3  (34)|
    -----------------------------------------------------------------------------------------------
    Now CBO doesn't choose the composite index and the cost also has increased to 78695. But if I simply add an /*+ORDERED*/ hint to the above query, so that it should join the dimension tables before joining to fact table, then the cost drops to 20913. This means that CBO is not choosing the plan with the lowest cost. I tried increasing the optimizer_max_permutations to 80000, setting session level optimizer_dynamic_sampling to 8 (just to see if it works), but no success.
    Could you please advise how to overcome this problem?
    Many thanks.

    joshic wrote:
    Database: Oracle 9.2.0.6 EE
    OS:Solaris 9
    I am trying to tune a query that is generated via Siebel Analytics. I am seeing a behaviour which is puzzling me but hopefully would be 'elementary' for someone like JPL.
    The query is based on a total of 7 tables. If I comment out any 2 dimension tables, the query picks up the right index on the fact table. However, the moment I add another table to the query, the plan goes awry.
    I have added a single table W_ORDER_D to the query, and the execution plan is:
    Now CBO doesn't choose the composite index and the cost also has increased to 78695. But if I simply add an /*+ORDERED*/ hint to the above query, so that it should join the dimension tables before joining to fact table, then the cost drops to 20913. This means that CBO is not choosing the plan with the lowest cost. I tried increasing the optimizer_max_permutations to 80000, setting session level optimizer_dynamic_sampling to 8 (just to see if it works), but no success.
    Back to the original question:
    * Can you force the index usage of the composite index on W_ORDERITEM_F in the second query using an INDEX hint (instead of the ORDERED hint)? If yes, what does the plan look like, particularly what cost is reported?
    * Could you post the plans including the "Predicate Information" section below the plan output?
    * What is the definition of the index W_ORDERITEM_F_N6 on W_ORDERITEM_F?
    * Are the cardinalities reported in the execution plans close to reality or way off? The best way to verify this would be to run your query with SQL tracing enabled and generate a tkprof output. If you do so please post the tkprof output here as well.
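    As an illustration of the first question above, an INDEX hint on the fact table might be tried along these lines (a sketch only; the alias T30256 and the index name W_ORDERITEM_F_INDX9 come from the posted query and the first plan, and everything else in the six-table query stays unchanged):
    select /*+ index(T30256 W_ORDERITEM_F_INDX9) */
           count(distinct decode( T30256.HEADER_FLG , 'N' , T30256.ROW_WID ) ) as c1,
           -- ... remaining select list, FROM, WHERE and GROUP BY exactly as in the second query above ...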
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Disabled pop-up blocker but pop-ups are still being blocked. How can I remedy this?

    I have disabled the pop-up blocker, I've added the site to the exceptions list but pop-ups are still being blocked. I have an email program that requires pop-ups in order to compose an email.

    Are you sure that you don't have another popup blocker installed?
    Like maybe as a feature of a security suite?

  • Wrong Execution Plan getting picked on some occasions

    Hi,
    The following query sometimes runs efficiently: it gives quick results, and CPU and IO utilization are up to the mark. But sometimes the query takes a very long time to execute.
    At those times I do not observe any blocking either, yet the CPU and IO consumed by this query are huge. It seems the engine sometimes picks the wrong execution plan.
    This query is fired by a product, so there is no scope to change the query. Any suggestions for getting the engine to pick the right execution plan?
    select i.bpd_instance_id as instanceId, i.instance_name as instanceName, bpd.name as bpdName,
           istatus.name as instanceStatus, t.subject as taskSubject, tpriority.name as taskPriority,
           t.due_date as taskDueDate, t.attached_form_ref as taskAttachedInfoPathFormRef,
           t.attached_ext_activity_ref as taskAttachedExtActivityRef, t.task_id as taskId,
           tstatus.name as taskStatus, tuser.user_name as assignedToUser, tpriority.ranking as taskPriorityRanking
    from msadmin.lsw_task t with ( nolock )
    inner join msadmin.lsw_bpd_instance i with ( nolock ) on t.bpd_instance_id = i.bpd_instance_id
    left join msadmin.lsw_task_status_codes tstatus on t.status = tstatus.status_value
    left join msadmin.lsw_bpd_status_codes istatus on i.execution_status = istatus.status_id
    left join msadmin.lsw_priority tpriority on t.priority_id = tpriority.priority_id
    left join msadmin.lsw_bpd bpd on i.cached_bpd_version_id = bpd.version_id
    left join msadmin.lsw_usr_xref tuser on t.user_id = tuser.user_id
    where ( t.status in ( '11','12' )
            and ( t.user_id = 5909
                  or t.task_id in (
                       select t.task_id
                       from msadmin.lsw_task t with ( NOLOCK )
                       inner join msadmin.lsw_usr_grp_mem_xref m with ( NOLOCK )
                               on t.group_id = m.group_id
                       where m.user_id = 5909
                         and t.user_id = -1 )
                  or t.task_id in (
                       select t.task_id
                       from msadmin.lsw_task t with ( NOLOCK )
                       inner join msadmin.lsw_grp_grp_mem_exploded_xref x with ( NOLOCK )
                               on t.group_id = x.container_group_id
                       inner join msadmin.lsw_usr_grp_mem_xref m with ( NOLOCK )
                               on m.group_id = x.group_id
                       where m.user_id = 5909
                         and t.user_id = -1 ) ) )
    order by taskDueDate, taskPriorityRanking, instanceId, taskId

    Is it a good practice to update statistics nightly on all the tables with FULLSCAN? Will it help to boost performance?
    Maybe. But it may also be overkill, particularly if you do not have the hours. A somewhat better option may be to run with FULLSCAN, INDEX to only update the statistics for indexes. But note that if you rebuild an index, that updates the statistics for that index with FULLSCAN anyway.
    But overall, there is rarely any magic button for performance tuning. In the end, you will need to do the hard work to find the painful queries that need better indexes or need to be rewritten.
    Erland Sommarskog, SQL Server MVP, [email protected]
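    For reference, the index-only statistics update mentioned above would look something like this in T-SQL (a sketch; msadmin.lsw_task is just one of the tables from the posted query):
    UPDATE STATISTICS msadmin.lsw_task WITH FULLSCAN, INDEX;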

  • How to force an execution plan

    Hello,
    I am working on Oracle 11g R2 on AIX.
    One query was performing well, at around 20 seconds, but suddenly it started taking more than 15 minutes.
    We checked and found that the SQL execution plan has changed: the order of operations is different, i.e. the order in which the indexes are used has changed.
    The new plan is not good, and we want to force the old plan to be used in future.
    I read about SQL Plan Management; it describes a manual method to create a baseline and evolve the plans. In one example, the first execution plan was created without an index and the second with an index, so the second plan was good and was accepted.
    But in this case we do not need to change anything; the query is performing badly, maybe because of the changed order of operations.
    Another way is to use a hint, but for that we would need to change the SQL, which is not possible in production now.
    The issue is: for this we would need to run the SQL again, and Oracle may not create a plan like the old one, so we would not have the old good plan to accept.
    Both execution plans are already in the cache.
    I am looking for a way to set the SQL plan hash value (of the good plan), or some other identifier of that plan, to force only that plan to be used.
    Any idea how to do it?

    Stored Outlines are deprecated.
    OP:
    To fix a specific plan you have two choices:
    1. SQL Plan Baselines - assuming the "good" plan is still in AWR, the steps are along the lines of: load the old plan from AWR into a SQL tuning set using DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY and DBMS_SQLTUNE.LOAD_SQLSET, then load the plans from that SQL tuning set into a SQL plan baseline using DBMS_SPM.LOAD_PLANS_FROM_SQLSET.
    2. Using SQL Profiles to fix the outline hints - so similar to a stored outline but using the sql profile mechanism - using the coe_xfr_sql_profile.sql script, part of an approach in Oracle support doc id 215187.1
    But +1 for Nikolay's recommendation of understanding whether there is a root cause to this plan instability (plan instability being "normal", but flip-flopping between "good" and "bad" being a problem). Cardinality feedback is an obvious possible influence, different peeked binds another, stats changes, etc.
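    As an illustration of option 1, the AWR-to-baseline steps might look roughly like this (a sketch only; the snapshot range, sql_id, plan_hash_value and the tuning set name STS_GOOD_PLAN are placeholders to be replaced with the real values):
    declare
      l_cur   dbms_sqltune.sqlset_cursor;
      l_plans pls_integer;
    begin
      dbms_sqltune.create_sqlset(sqlset_name => 'STS_GOOD_PLAN');
      -- Pull the statement (and its plans) out of AWR into the SQL tuning set.
      open l_cur for
        select value(p)
        from   table(dbms_sqltune.select_workload_repository(
                       begin_snap     => 1234,                          -- placeholder
                       end_snap       => 1240,                          -- placeholder
                       basic_filter   => 'sql_id = ''abcd1234wxyz''',   -- placeholder
                       attribute_list => 'ALL')) p;
      dbms_sqltune.load_sqlset(sqlset_name => 'STS_GOOD_PLAN', populate_cursor => l_cur);
      -- Load only the good plan from the tuning set into a SQL plan baseline.
      l_plans := dbms_spm.load_plans_from_sqlset(
                   sqlset_name  => 'STS_GOOD_PLAN',
                   basic_filter => 'plan_hash_value = 1111111111');     -- placeholder
    end;
    /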

  • Execution Plan Failed for Full Load

    Hi Team,
    When I run the full load, the execution plan fails. I verified the log and found the information below, from SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable:
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [SEBL_VERT_8_1_1_FLATFILE.DATAWAREHOUSE.SIL_Vert.SIL_InsertRowInRunTable.log] for session parameter:[$PMSessionLogFile].
    DIRECTOR> VAR_27028 Use override value [1] for mapping parameter:[mplt_SIL_InsertRowInRunTable.$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value [21950495] for mapping parameter:[MPLT_GET_ETL_PROC_WID.$$ETL_PROC_WID].
    DIRECTOR> TM_6014 Initializing session [SIL_InsertRowInRunTable] at [Mon Sep 26 15:53:45 2011].
    DIRECTOR> TM_6683 Repository Name: [infa_rep]
    DIRECTOR> TM_6684 Server Name: [infa_service]
    DIRECTOR> TM_6686 Folder: [SIL_Vert]
    DIRECTOR> TM_6685 Workflow: [SIL_InsertRowInRunTable] Run Instance Name: [] Run Id: [8]
    DIRECTOR> TM_6101 Mapping name: SIL_InsertRowInRunTable [version 1].
    DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
    DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
    DIRECTOR> TM_6827 [H:\Informatica901\server\infa_shared\Storage] will be used as storage directory for session [SIL_InsertRowInRunTable].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SIL_InsertRowInRunTable] is run by 32-bit Integration Service [node01_eblnhif-czc80685], version [9.0.1 HotFix2], build [1111].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6156 Using low precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6187 Session target-based commit interval is [10000].
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [Connect_to_OLAP], user [OBAW]
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> CMN_1022 Database driver error...
    CMN_1022 [
    Database driver error...
    Function Name : Logon
    ORA-12154: TNS:could not resolve service name
    Database driver error...
    Function Name : Connect
    Database Error: Failed to connect to database using user [OBAW] and connection string [Connect_to_OLAP].]
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> CMN_1076 ERROR creating database connection.
    MAPPING> DBG_21520 Transform : LKP_W_PARAM_G_Get_ETL_PROC_WID, connect string : Relational:DataWarehouse
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> TE_7017 Internal error. Failed to initialize transformation [MPLT_GET_ETL_PROC_WID.LKP_ETL_PROC_WID]. Contact Informatica Global Customer Support.
    MAPPING> CMN_1761 Timestamp Event: [Mon Sep 26 15:53:45 2011]
    MAPPING> TM_6006 Error initializing DTM for session [SIL_InsertRowInRunTable].
    MANAGER> PETL_24005 Starting post-session tasks. : (Mon Sep 26 15:53:45 2011)
    MANAGER> PETL_24029 Post-session task completed successfully. : (Mon Sep 26 15:53:45 2011)
    MAPPING> TM_6018 The session completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [SQ_FILE_DUAL] (Instance Name: [SQ_FILE_DUAL])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SIL_InsertRowInRunTable] completed at [Mon Sep 26 15:53:46 2011].
    I checked the physical data source connection in DAC and Workflow Manager, tested and verified it, but I am still facing the same issue. I connected through the Oracle Merant ODBC driver.
    Please let me know the solution for this error.
    Regards,
    VSR

    Hi,
    Did you try using the Oracle 10g/11g drivers as the data source? If not, I think you should try that, but before that ensure that from your DAC server box you are able to tnsping the OLAP database. Hope this helps.
    If this is helpful, please mark it as helpful.
    Regards,
    BI Learner

  • Execution plan with Concatenation

    Hi All,
    Could anyone help in finding why concatenation is being used by the optimizer and how I can avoid it?
    Oracle Version : 10.2.0.4
    select * from
      (
       select distinct EntityType, EntityID, DateModified, DateCreated, IsDeleted
       from ife.EntityIDs i
       join (select orgid from equifaxnormalize.org_relationships
             where orgid is not null and related_orgid is not null
               and ((Date_Modified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and Date_Modified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
                    OR (Date_Created >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and Date_Created < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
            ) r on (r.orgid = i.entityid)
       where EntityType = 1
         and ((DateModified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and DateModified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
              OR (DateCreated >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and DateCreated < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
         and ( IsDeleted = 0)
         and IsDistributable = 1
         and EntityID >= 0
       order by EntityID
       --order by NLSSORT(EntityID,'NLS_SORT=BINARY')
      )
    where rownum <= 10;
    Execution Plan
    Plan hash value: 227906424
    | Id  | Operation                                 | Name                          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT                          |                               |    10 |   570 |    39   (6)| 00:00:01 |       |       |
    |*  1 |  COUNT STOPKEY                            |                               |       |       |            |          |       |       |
    |   2 |   VIEW                                    |                               |    56 |  3192 |    39   (6)| 00:00:01 |       |       |
    |*  3 |    SORT ORDER BY STOPKEY                  |                               |    56 |  3416 |    39   (6)| 00:00:01 |       |       |
    |   4 |     HASH UNIQUE                           |                               |    56 |  3416 |    38   (3)| 00:00:01 |       |       |
    |   5 |      CONCATENATION                        |                               |       |       |            |          |       |       |
    |*  6 |       TABLE ACCESS BY INDEX ROWID         | ORG_RELATIONSHIPS             |     1 |    29 |     1   (0)| 00:00:01 |       |       |
    |   7 |        NESTED LOOPS                       |                               |    27 |  1647 |    17   (0)| 00:00:01 |       |       |
    |   8 |         TABLE ACCESS BY GLOBAL INDEX ROWID| ENTITYIDS                     |    27 |   864 |     4   (0)| 00:00:01 | ROWID | ROWID |
    |*  9 |          INDEX RANGE SCAN                 | UX_TYPE_MOD_DIST_DEL_ENTITYID |    27 |       |     2   (0)| 00:00:01 |       |       |
    |* 10 |         INDEX RANGE SCAN                  | IX_EFX_ORGRELATION_ORGID      |     1 |       |     1   (0)| 00:00:01 |       |       |
    |* 11 |       TABLE ACCESS BY INDEX ROWID         | ORG_RELATIONSHIPS             |     1 |    29 |     1   (0)| 00:00:01 |       |       |
    |  12 |        NESTED LOOPS                       |                               |    29 |  1769 |    20   (0)| 00:00:01 |       |       |
    |  13 |         PARTITION RANGE ALL               |                               |    29 |   928 |     5   (0)| 00:00:01 |     1 |     3 |
    |* 14 |          TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS                     |    29 |   928 |     5   (0)| 00:00:01 |     1 |     3 |
    |* 15 |           INDEX RANGE SCAN                | IDX_ENTITYIDS_ETYPE_DC        |    29 |       |     4   (0)| 00:00:01 |     1 |     3 |
    |* 16 |         INDEX RANGE SCAN                  | IX_EFX_ORGRELATION_ORGID      |     1 |       |     1   (0)| 00:00:01 |       |       |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=10)
       3 - filter(ROWNUM<=10)
       6 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
                  14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
       9 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00',
                  'syyyy-mm-dd hh24:mi:ss'))
           filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
      10 - access("ORGID"="I"."ENTITYID")
           filter("ORGID" IS NOT NULL AND "ORGID">=0)
      11 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
                  14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
      14 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
                  'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
                  "I"."ENTITYID">=0)
      15 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "I"."DATECREATED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      16 - access("ORGID"="I"."ENTITYID")
            filter("ORGID" IS NOT NULL AND "ORGID">=0)
    The ife.entityids table has been range-partitioned on the data_provider column.
    Is there any better way to rewrite this SQL, or is there any way to eliminate the concatenation?
    Thanks

    We can't use data_provider in the given query. We need to pull data irrespective of data_provider, and it should be based on ENTITYID.
    Yes, the table has only three partitions.
    I am not sure the issue is due to concatenation, but we are in the process of creating the desired indexes, which should help this SQL.
    In development we have created a multicolumn index, and below is the execution plan. In development it takes just 4-5 seconds to execute, but in production it takes more than 8-9 minutes.
    Below is the execution plan from Dev which seems to perform fast:
    Execution Plan
    Plan hash value: 3121857971
    | Id  | Operation                                 | Name                          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT                          |                               |     1 |    57 |   353   (1)| 00:00:05 |       |       |
    |*  1 |  COUNT STOPKEY                            |                               |       |       |            |          |       |       |
    |   2 |   VIEW                                    |                               |     1 |    57 |   353   (1)| 00:00:05 |       |       |
    |*  3 |    SORT ORDER BY STOPKEY                  |                               |     1 |    58 |   353   (1)| 00:00:05 |       |       |
    |   4 |     HASH UNIQUE                           |                               |     1 |    58 |   352   (1)| 00:00:05 |       |       |
    |   5 |      CONCATENATION                        |                               |       |       |            |          |       |       |
    |*  6 |       TABLE ACCESS BY INDEX ROWID         | ORG_RELATIONSHIPS             |     1 |    26 |     3   (0)| 00:00:01 |       |       |
    |   7 |        NESTED LOOPS                       |                               |     1 |    58 |   170   (1)| 00:00:03 |       |       |
    |   8 |         PARTITION RANGE ALL               |                               |    56 |  1792 |    16   (0)| 00:00:01 |     1 |     3 |
    |*  9 |          TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS                     |    56 |  1792 |    16   (0)| 00:00:01 |     1 |     3 |
    |* 10 |           INDEX RANGE SCAN                | IDX_ENTITYIDS_ETYPE_DC        |    56 |       |     7   (0)| 00:00:01 |     1 |     3 |
    |* 11 |         INDEX RANGE SCAN                  | EFX_ORGID                     |     2 |       |     2   (0)| 00:00:01 |       |       |
    |* 12 |       TABLE ACCESS BY INDEX ROWID         | ORG_RELATIONSHIPS             |     1 |    26 |     3   (0)| 00:00:01 |       |       |
    |  13 |        NESTED LOOPS                       |                               |     1 |    58 |   181   (0)| 00:00:03 |       |       |
    |  14 |         PARTITION RANGE ALL               |                               |    57 |  1824 |    10   (0)| 00:00:01 |     1 |     3 |
    |* 15 |          INDEX RANGE SCAN                 | UX_TYPE_MOD_DIST_DEL_ENTITYID |    57 |  1824 |    10   (0)| 00:00:01 |     1 |     3 |
    |* 16 |         INDEX RANGE SCAN                  | EFX_ORGID                     |     2 |       |     2   (0)| 00:00:01 |       |       |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=10)
       3 - filter(ROWNUM<=10)
       6 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
                  'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
       9 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
      10 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      11 - access("ORGID"="I"."ENTITYID")
           filter("ORGID" IS NOT NULL AND "ORGID">=0)
      12 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
                  'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      15 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<TO_DATE(' 2011-06-13 14:00:00',
                  'syyyy-mm-dd hh24:mi:ss'))
           filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00',
                  'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
                  "I"."ENTITYID">=0)
      16 - access("ORGID"="I"."ENTITYID")
            filter("ORGID" IS NOT NULL AND "ORGID">=0)
    Thanks
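    For what it's worth, the CONCATENATION step is the optimizer's OR-expansion of the DateModified/DateCreated predicates, and whether that transformation is actually the problem can be tested with the NO_EXPAND hint in the inner query block (a sketch; the query's meaning is unchanged, only the OR-expansion is suppressed):
    select /*+ NO_EXPAND */ distinct EntityType, EntityID, DateModified, DateCreated, IsDeleted
    from ife.EntityIDs i
    -- ... join, where clauses, and order by exactly as in the query posted above ...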

  • Effect of RLS policy (VPD) on execution plan of a query

    Hi
    I have been working on tuning a few queries. An RLS policy is defined on most of the tables, which appends an extra where condition (something like AREA_CODE=1). I am not able to understand the effect of this extra where clause on the execution plan of the query. In the execution plan there is no mention of the clause added by VPD. A 10046 trace does show the policy function being executed, but nothing after that.
    Can someone shed some light on whether VPD has any effect on the execution plan of the query? Also, would it matter whether the column on which VPD is applied is indexed or not?
    Regards,
    Amardeep Sidhu

    Amardeep Sidhu wrote:
    I have been working on tuning a few queries. An RLS policy is defined on most of the tables, which appends an extra where condition (something like AREA_CODE=1). I am not able to understand the effect of this extra where clause on the execution plan of the query. In the execution plan there is no mention of the clause added by VPD. A 10046 trace does show the policy function being executed, but nothing after that.
    VPD is supposed to be invisible - which is why you get minimal information about security predicates in the standard trace file. However, if you reference a table with a security predicate in your query, the table is effectively replaced by an inline view of the form: "select * from original_table where {security_predicate}", and the result is then optimised. So the effects of the security predicate are just the same as if you had written the predicate into the query yourself.
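    As a small illustration (the orders table, its columns and the bind variable are just stand-ins; the AREA_CODE = 1 predicate comes from the example in the question):
    -- Query as written by the application:
    select order_id, amount
    from   orders
    where  customer_id = :b1;
    -- What the optimizer effectively optimises once the RLS policy attaches its predicate:
    select order_id, amount
    from   (select * from orders where AREA_CODE = 1)
    where  customer_id = :b1;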
    Apart from your use of v$sql_plan to show the change in plan and the new predicates, you can see the effects of the predicates by setting event 10730 together with 10046. In current versions of Oracle this causes the substitute view to be printed in the trace file.
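    A sketch of setting those events in the session that runs the query (the levels shown are typical choices, not prescribed values):
    alter session set events '10730 trace name context forever, level 1';
    alter session set events '10046 trace name context forever, level 12';
    -- run the query under investigation, then:
    alter session set events '10046 trace name context off';
    alter session set events '10730 trace name context off';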
    Bear in mind that security predicates can be very complex - including subqueries - so the effect isn't just that of including the selectivity of "another simple predicate".
    Can someone shed some light on whether VPD has any effect on the execution plan of the query? Also, would it matter whether the column on which VPD is applied is indexed or not?
    Think of the effect of changing the SQL by hand - and how you would need to optimise the resultant query. Sometimes you do need to modify your indexing to help the security predicates, sometimes it won't make enough difference to matter.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

  • DAC Execution plan for financial analytics keeps running (OBIA 7.9.4)

    Hi Gurus,
    I have problem when tried to run full load on financial analytics execution plan. I used OBIA 7.9.4, Informatica 8.6.1 and DAC 10.1.3.4.1.patch.20100901.0520. This runs on Windows Server 2008 Service Pack 2 with 6 GB of memory. I set $$INITIAL_EXTRACT_DATE='01/01/2011' so it should pull data greater than $$INITIAL_EXTRACT_DATE, which I believed not more than 3 years data and should be not more than 2 days for loading this data. But execution plan keeps running and not yet completed even after more than 2 days. I have already checked for DAC execution plan log, But nothing I can found from the log.
    Here is the log file:
    12  INFO  Thu Jun 27 13:52:12 SGT 2013  DAC Version: Dac Build AN 10.1.3.4.1.patch.20100901.0520
    13  INFO  Thu Jun 27 13:52:12 SGT 2013  The System properties are: java.runtime.name Java(TM) SE Runtime Environment
    sun.boot.library.path C:\orahome\10gR3_1\jdk\jre\bin
    java.vm.version 10.0-b23
    java.vm.vendor Sun Microsystems Inc.
    java.vendor.url http://java.sun.com/
    path.separator ;
    ETL_EXECUTION_MODE ETL_SERVER
    java.vm.name Java HotSpot(TM) Server VM
    file.encoding.pkg sun.io
    sun.java.launcher SUN_STANDARD
    user.country US
    sun.os.patch.level Service Pack 2
    java.vm.specification.name Java Virtual Machine Specification
    user.dir C:\orahome\10gR3_1\bifoundation\dac
    java.runtime.version 1.6.0_07-b06
    java.awt.graphicsenv sun.awt.Win32GraphicsEnvironment
    java.endorsed.dirs C:\orahome\10gR3_1\jdk\jre\lib\endorsed
    os.arch x86
    java.io.tmpdir C:\Users\ADMINI~1\AppData\Local\Temp\2\
    line.separator
    java.vm.specification.vendor Sun Microsystems Inc.
    user.variant
    os.name Windows Server 2008
    REPOSITORY_STAMP 4E8A758D24E553DBB74B325E89718660
    sun.jnu.encoding Cp1252
    java.library.path C:\orahome\10gR3_1\jdk\bin;.;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Informatica\PowerCenter8.6.1\server\bin;C:\OracleBI\server\Bin;C:\OracleBI\web\bin;C:\OracleBI\web\catalogmanager;C:\OracleBI\SQLAnywhere;C:\Program Files (x86)\Java\jdk1.5.0_22\bin;C:\app\oracle\product\11.2.0\dbhome_1\bin;C:\Program Files\HP\NCU;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Program Files\OmniBack\bin\;C:\Windows\System32\WindowsPowerShell\v1.0\
    java.specification.name Java Platform API Specification
    java.class.version 50.0
    sun.management.compiler HotSpot Tiered Compilers
    os.version 6.0
    user.home C:\Users\Administrator
    user.timezone Asia/Singapore
    java.awt.printerjob sun.awt.windows.WPrinterJob
    file.encoding Cp1252
    java.specification.version 1.6
    java.class.path .\lib\msbase.jar;.\lib\mssqlserver.jar;.\lib\msutil.jar;.\lib\sqljdbc.jar;.\lib\ojdbc6.jar;.\lib\ojdbc5.jar;.\lib\ojdbc14.jar;.\lib\db2java.zip;.\lib\terajdbc4.jar;.\lib\log4j.jar;.\lib\teradata.jar;.\lib\tdgssjava.jar;.\lib\tdgssconfig.jar;.\DAWSystem.jar;.;
    user.name Administrator
    java.vm.specification.version 1.0
    java.home C:\orahome\10gR3_1\jdk\jre
    sun.arch.data.model 32
    user.language en
    java.specification.vendor Sun Microsystems Inc.
    awt.toolkit sun.awt.windows.WToolkit
    java.vm.info mixed mode
    java.version 1.6.0_07
    java.ext.dirs C:\orahome\10gR3_1\jdk\jre\lib\ext;C:\Windows\Sun\Java\lib\ext
    sun.boot.class.path C:\orahome\10gR3_1\jdk\jre\lib\resources.jar;C:\orahome\10gR3_1\jdk\jre\lib\rt.jar;C:\orahome\10gR3_1\jdk\jre\lib\sunrsasign.jar;C:\orahome\10gR3_1\jdk\jre\lib\jsse.jar;C:\orahome\10gR3_1\jdk\jre\lib\jce.jar;C:\orahome\10gR3_1\jdk\jre\lib\charsets.jar;C:\orahome\10gR3_1\jdk\jre\classes
    java.vendor Sun Microsystems Inc.
    file.separator \
    java.vendor.url.bug http://java.sun.com/cgi-bin/bugreport.cgi
    sun.io.unicode.encoding UnicodeLittle
    sun.cpu.endian little
    sun.desktop windows
    sun.cpu.isalist pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
    14  SEVERE  Thu Jun 27 13:52:12 SGT 2013
    START OF ETL
    15  SEVERE  Thu Jun 27 13:52:36 SGT 2013  Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.InformaticaTask
    16  SEVERE  Thu Jun 27 13:52:36 SGT 2013  Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.InformaticaTask
    17  SEVERE  Thu Jun 27 13:52:36 SGT 2013  Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.PauseTask
    18  SEVERE  Thu Jun 27 13:52:36 SGT 2013  Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.PauseTask
    19  SEVERE  Thu Jun 27 13:52:40 SGT 2013  Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.TaskPrecedingActionScriptTask
    20  SEVERE  Thu Jun 27 13:52:40 SGT 2013  Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.TaskPrecedingActionScriptTask
    21  SEVERE  Thu Jun 27 13:52:46 SGT 2013  Starting ETL Process.
    22  SEVERE  Thu Jun 27 13:52:47 SGT 2013  Informatica Status Poll Interval new value : 20000(milli-seconds)
    24  SEVERE  Thu Jun 27 13:52:52 SGT 2013  C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\FLAT FILE specified is not a currently existing directory
    25  SEVERE  Thu Jun 27 13:52:52 SGT 2013  Request to start workflow : 'SILOS:SIL_InsertRowInRunTable' has completed with error code 0
    26  SEVERE  Thu Jun 27 13:53:12 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    27  SEVERE  Thu Jun 27 13:53:13 SGT 2013  Number of running sessions : 13
    28  SEVERE  Thu Jun 27 13:53:13 SGT 2013  Number of running sessions : 12
    29  SEVERE  Thu Jun 27 13:53:14 SGT 2013  Number of running sessions : 11
    30  SEVERE  Thu Jun 27 13:53:15 SGT 2013  Number of running sessions : 10
    32  SEVERE  Thu Jun 27 13:53:18 SGT 2013  C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\ORACLE specified is not a currently existing directory
    33  SEVERE  Thu Jun 27 13:53:18 SGT 2013  C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\Oracle specified is not a currently existing directory
    34  SEVERE  Thu Jun 27 13:53:18 SGT 2013  C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\oracle specified is not a currently existing directory
    35  SEVERE  Thu Jun 27 13:53:18 SGT 2013  C:\orahome\10gR3_1\bifoundation\dac\Informatica\parameters\input\ORACLE (THIN) specified is not a currently existing directory
    36  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SILOS:SIL_CurrencyTypes' has completed with error code 0
    37  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSetHier_Extract_Full' has completed with error code 0
    38  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_GLAccount_SegmentConfig_Extract' has completed with error code 0
    39  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_LocalCurrency_Temporary' has completed with error code 0
    40  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SILOS:SIL_Parameters_Update' has completed with error code 0
    41  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_GLAccountDimension_FinSubCodes' has completed with error code 0
    42  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSet_Extract' has completed with error code 0
    43  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GLSegmentDimension_Full' has completed with error code 0
    44  SEVERE  Thu Jun 27 13:53:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_InternalOrganizationDimension_BalanceSegmentValue_LegalEntity' has completed with error code 0
    45  SEVERE  Thu Jun 27 13:53:39 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_CurrencyTypes.txt SIL_CurrencyTypes
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    46  SEVERE  Thu Jun 27 13:53:39 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSetHier_Extract_Full.txt SDE_ORA_Stage_ValueSetHier_Extract_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    47  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_GLAccount_SegmentConfig_Extract.txt SDE_ORA_Stage_GLAccount_SegmentConfig_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    48  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_LocalCurrency_Temporary.txt SDE_ORA_LocalCurrency_Temporary
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    49  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SILOS.SIL_Parameters_Update.txt SIL_Parameters_Update
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    50  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_GLAccountDimension_FinSubCodes.txt SDE_ORA_Stage_GLAccountDimension_FinSubCodes
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    51  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSet_Extract.txt SDE_ORA_Stage_ValueSet_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    52  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GLSegmentDimension_Full.txt SDE_ORA_GLSegmentDimension_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    53  SEVERE  Thu Jun 27 13:53:40 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_InternalOrganizationDimension_BalanceSegmentValue_LegalEntity.txt SDE_ORA_InternalOrganizationDimension_BalanceSegmentValue_LegalEntity
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    54  SEVERE  Thu Jun 27 13:53:40 SGT 2013  Number of running sessions : 10
    55  SEVERE  Thu Jun 27 13:53:40 SGT 2013  Number of running sessions : 10
    56  SEVERE  Thu Jun 27 13:53:40 SGT 2013  Number of running sessions : 11
    57  SEVERE  Thu Jun 27 13:53:40 SGT 2013  Number of running sessions : 11
    58  SEVERE  Thu Jun 27 13:53:40 SGT 2013  Number of running sessions : 11
    59  SEVERE  Thu Jun 27 13:53:41 SGT 2013  Number of running sessions : 11
    60  SEVERE  Thu Jun 27 13:53:41 SGT 2013  Number of running sessions : 11
    61  SEVERE  Thu Jun 27 13:53:41 SGT 2013  Number of running sessions : 11
    62  SEVERE  Thu Jun 27 13:53:41 SGT 2013  Number of running sessions : 11
    63  SEVERE  Thu Jun 27 13:53:42 SGT 2013  Number of running sessions : 10
    64  SEVERE  Thu Jun 27 13:53:42 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GLBalanceFact_Full' has completed with error code 0
    65  SEVERE  Thu Jun 27 13:53:46 SGT 2013  Request to start workflow : 'SILOS:SIL_Stage_GroupAccountNumberDimension_FinStatementItem' has completed with error code 0
    66  SEVERE  Thu Jun 27 13:53:46 SGT 2013  Request to start workflow : 'SILOS:SIL_GlobalCurrencyGeneral_Update' has completed with error code 0
    67  SEVERE  Thu Jun 27 13:53:46 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_GroupAccountNumberDimension' has completed with error code 0
    68  SEVERE  Thu Jun 27 13:53:46 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_ARSubType_Extract' has completed with error code 0
    69  SEVERE  Thu Jun 27 13:53:46 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLRevenueTypeExtract' has completed with error code 0
    70  SEVERE  Thu Jun 27 13:53:46 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSetHier_Flatten' has completed with error code 0
    71  SEVERE  Thu Jun 27 13:53:47 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_ExchangeRateGeneral_Full' has completed with error code 0
    72  SEVERE  Thu Jun 27 13:53:47 SGT 2013  Request to start workflow : 'SILOS:SIL_ListOfValuesGeneral_Unspecified' has completed with error code 0
    73  SEVERE  Thu Jun 27 13:54:06 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_Stage_GroupAccountNumberDimension_FinStatementItem.txt SIL_Stage_GroupAccountNumberDimension_FinStatementItem
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    74  SEVERE  Thu Jun 27 13:54:07 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SILOS.SIL_GlobalCurrencyGeneral_Update.txt SIL_GlobalCurrencyGeneral_Update
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    75  SEVERE  Thu Jun 27 13:54:07 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_GroupAccountNumberDimension.txt SDE_ORA_Stage_GroupAccountNumberDimension
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    76  SEVERE  Thu Jun 27 13:54:07 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLRevenueTypeExtract.txt SDE_ORA_Stage_TransactionTypeDimension_GLRevenueTypeExtract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    77  SEVERE  Thu Jun 27 13:54:07 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_ARSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_ARSubType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    78  SEVERE  Thu Jun 27 13:54:07 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\DataWarehouse.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSetHier_Flatten.txt SDE_ORA_Stage_ValueSetHier_Flatten
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    79  SEVERE  Thu Jun 27 13:54:07 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_ExchangeRateGeneral_Full.txt SDE_ORA_ExchangeRateGeneral_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    80  SEVERE  Thu Jun 27 13:54:07 SGT 2013  Number of running sessions : 11
    81  SEVERE  Thu Jun 27 13:54:07 SGT 2013  Number of running sessions : 10
    82  SEVERE  Thu Jun 27 13:54:07 SGT 2013  Number of running sessions : 10
    83  SEVERE  Thu Jun 27 13:54:07 SGT 2013  Number of running sessions : 10
    84  SEVERE  Thu Jun 27 13:54:08 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_ListOfValuesGeneral_Unspecified.txt SIL_ListOfValuesGeneral_Unspecified
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    85  SEVERE  Thu Jun 27 13:54:08 SGT 2013  Number of running sessions : 10
    86  SEVERE  Thu Jun 27 13:54:08 SGT 2013  Number of running sessions : 10
    87  SEVERE  Thu Jun 27 13:54:08 SGT 2013  Number of running sessions : 10
    88  SEVERE  Thu Jun 27 13:54:09 SGT 2013  Number of running sessions : 10
    89  SEVERE  Thu Jun 27 13:54:12 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GLJournals_Full' has completed with error code 0
    90  SEVERE  Thu Jun 27 13:54:13 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTransactionFact_Payment_Full' has completed with error code 0
    91  SEVERE  Thu Jun 27 13:54:13 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_APType_Extract' has completed with error code 0
    92  SEVERE  Thu Jun 27 13:54:13 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTransactionFact_PaymentSchedule_Full' has completed with error code 0
    93  SEVERE  Thu Jun 27 13:54:13 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTransactionFact_Distributions_Full' has completed with error code 0
    94  SEVERE  Thu Jun 27 13:54:13 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLRevenueSubType_Extract' has completed with error code 0
    95  SEVERE  Thu Jun 27 13:54:13 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_APSubType_Extract' has completed with error code 0
    96  SEVERE  Thu Jun 27 13:54:14 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLCOGSSubType_Extract' has completed with error code 0
    97  SEVERE  Thu Jun 27 13:54:15 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_ValueSetHier_DeriveRange' has completed with error code 0
    98  SEVERE  Thu Jun 27 13:54:33 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTransactionFact_Payment_Full.txt SDE_ORA_APTransactionFact_Payment_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    99  SEVERE  Thu Jun 27 13:54:33 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_APType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_APType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    100  SEVERE  Thu Jun 27 13:54:33 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTransactionFact_PaymentSchedule_Full.txt SDE_ORA_APTransactionFact_PaymentSchedule_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    101  SEVERE  Thu Jun 27 13:54:33 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTransactionFact_Distributions_Full.txt SDE_ORA_APTransactionFact_Distributions_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    102  SEVERE  Thu Jun 27 13:54:34 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLRevenueSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_GLRevenueSubType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    103  SEVERE  Thu Jun 27 13:54:34 SGT 2013  Number of running sessions : 12
    104  SEVERE  Thu Jun 27 13:54:34 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_APSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_APSubType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    105  SEVERE  Thu Jun 27 13:54:34 SGT 2013  Number of running sessions : 11
    106  SEVERE  Thu Jun 27 13:54:34 SGT 2013  Number of running sessions : 10
    107  SEVERE  Thu Jun 27 13:54:34 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLCOGSSubType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_GLCOGSSubType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    108  SEVERE  Thu Jun 27 13:54:34 SGT 2013  Number of running sessions : 11
    109  SEVERE  Thu Jun 27 13:54:34 SGT 2013  Number of running sessions : 10
    110  SEVERE  Thu Jun 27 13:54:34 SGT 2013  Number of running sessions : 10
    111  SEVERE  Thu Jun 27 13:54:35 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\DataWarehouse.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_ValueSetHier_DeriveRange.txt SDE_ORA_Stage_ValueSetHier_DeriveRange
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    112  SEVERE  Thu Jun 27 13:54:36 SGT 2013  Number of running sessions : 10
    113  SEVERE  Thu Jun 27 13:54:36 SGT 2013  Number of running sessions : 9
    114  SEVERE  Thu Jun 27 13:54:36 SGT 2013  Number of running sessions : 8
    115  SEVERE  Thu Jun 27 13:54:40 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_GLCOGSType_Extract' has completed with error code 0
    116  SEVERE  Thu Jun 27 13:54:40 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_ExpenditureCategory' has completed with error code 0
    117  SEVERE  Thu Jun 27 13:54:40 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_InternalOrganizationDimensionHierarchy_HROrgsTemporary_LatestVersion' has completed with error code 0
    118  SEVERE  Thu Jun 27 13:54:40 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_ARType_Extract' has completed with error code 0
    119  SEVERE  Thu Jun 27 13:54:40 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_AP_LinkageInformation_Extract_Full' has completed with error code 0
    120  SEVERE  Thu Jun 27 13:54:41 SGT 2013  Request to start workflow : 'SILOS:SIL_TimeOfDayDimension' has completed with error code 0
    121  SEVERE  Thu Jun 27 13:55:00 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_GLCOGSType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_GLCOGSType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    122  SEVERE  Thu Jun 27 13:55:00 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_TransactionTypeDimension_ExpenditureCategory.txt SDE_ORA_TransactionTypeDimension_ExpenditureCategory
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    123  SEVERE  Thu Jun 27 13:55:00 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_ARType_Extract.txt SDE_ORA_Stage_TransactionTypeDimension_ARType_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    124  SEVERE  Thu Jun 27 13:55:00 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_InternalOrganizationDimensionHierarchy_HROrgsTemporary_LatestVersion.txt SDE_ORA_InternalOrganizationDimensionHierarchy_HROrgsTemporary_LatestVersion
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    125  SEVERE  Thu Jun 27 13:55:00 SGT 2013  Number of running sessions : 7
    126  SEVERE  Thu Jun 27 13:55:00 SGT 2013  Number of running sessions : 6
    127  SEVERE  Thu Jun 27 13:55:01 SGT 2013  Number of running sessions : 6
    128  SEVERE  Thu Jun 27 13:55:01 SGT 2013  Number of running sessions : 10
    129  SEVERE  Thu Jun 27 13:55:01 SGT 2013  Number of running sessions : 9
    130  SEVERE  Thu Jun 27 13:55:01 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SILOS.SIL_TimeOfDayDimension.txt SIL_TimeOfDayDimension
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    131  SEVERE  Thu Jun 27 13:55:02 SGT 2013  Number of running sessions : 9
    132  SEVERE  Thu Jun 27 13:55:06 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_GLRevenueDerive' has completed with error code 0
    133  SEVERE  Thu Jun 27 13:55:06 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_APDerive' has completed with error code 0
    134  SEVERE  Thu Jun 27 13:55:06 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_GLOther' has completed with error code 0
    135  SEVERE  Thu Jun 27 13:55:06 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_Stage_TransactionTypeDimension_ARSubType_ExtractApplication' has completed with error code 0
    136  SEVERE  Thu Jun 27 13:55:06 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionTypeDimension_GLCOGSDerive' has completed with error code 0
    137  SEVERE  Thu Jun 27 13:55:08 SGT 2013  Request to start workflow : 'SILOS:SIL_HourOfDayDimension' has completed with error code 0
    138  SEVERE  Thu Jun 27 13:55:21 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_AP_LinkageInformation_Extract_Full.txt SDE_ORA_GL_AP_LinkageInformation_Extract_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    139  SEVERE  Thu Jun 27 13:55:21 SGT 2013  Number of running sessions : 9
    140  SEVERE  Thu Jun 27 13:55:27 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_PO_LinkageInformation_Extract_Full' has completed with error code 0
    141  SEVERE  Thu Jun 27 13:55:27 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_Stage_TransactionTypeDimension_ARSubType_ExtractApplication.txt SDE_ORA_Stage_TransactionTypeDimension_ARSubType_ExtractApplication
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    142  SEVERE  Thu Jun 27 13:55:27 SGT 2013  Number of running sessions : 8
    143  SEVERE  Thu Jun 27 13:55:28 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SILOS  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\DataWarehouse.DataWarehouse.SILOS.SIL_HourOfDayDimension.txt SIL_HourOfDayDimension
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    144  SEVERE  Thu Jun 27 13:55:29 SGT 2013  Number of running sessions : 7
    145  SEVERE  Thu Jun 27 13:55:47 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_PO_LinkageInformation_Extract_Full.txt SDE_ORA_GL_PO_LinkageInformation_Extract_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    146  SEVERE  Thu Jun 27 13:55:47 SGT 2013  Number of running sessions : 7
    147  SEVERE  Thu Jun 27 13:55:53 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_AR_REV_LinkageInformation_Extract_Full' has completed with error code 0
    148  SEVERE  Thu Jun 27 13:55:53 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GLJournals_Full.txt SDE_ORA_GLJournals_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    149  SEVERE  Thu Jun 27 13:56:00 SGT 2013
    ANOMALY INFO::: DataWarehouse:SELECT COUNT(*) FROM W_ORA_GLRF_DERV_F_TMP
    ORA-00942: table or view does not exist
    MESSAGE:::ORA-00942: table or view does not exist
    EXCEPTION CLASS::: java.sql.SQLSyntaxErrorException
    oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
    oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
    oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
    oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
    oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
    oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
    oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:194)
    oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:853)
    oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
    oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
    oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1469)
    oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:389)
    com.siebel.etl.database.cancellable.CancellableStatement.executeQuery(CancellableStatement.java:69)
    com.siebel.etl.database.DBUtils.executeQuery(DBUtils.java:150)
    com.siebel.etl.database.WeakDBUtils.executeQuery(WeakDBUtils.java:242)
    com.siebel.analytics.etl.etltask.CountTableTask.doExecute(CountTableTask.java:99)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:411)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
    com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
    com.siebel.etl.engine.core.Session.run(Session.java:3251)
    java.lang.Thread.run(Thread.java:619)
    150  SEVERE  Thu Jun 27 13:56:00 SGT 2013
    ANOMALY INFO::: Error while executing : COUNT TABLE:W_ORA_GLRF_DERV_F_TMP
    MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:SELECT COUNT(*) FROM W_ORA_GLRF_DERV_F_TMP
    ORA-00942: table or view does not exist
    EXCEPTION CLASS::: java.lang.Exception
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:450)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
    com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
    com.siebel.etl.engine.core.Session.run(Session.java:3251)
    java.lang.Thread.run(Thread.java:619)
    ::: CAUSE :::
    MESSAGE:::DataWarehouse:SELECT COUNT(*) FROM W_ORA_GLRF_DERV_F_TMP
    ORA-00942: table or view does not exist
    EXCEPTION CLASS::: com.siebel.etl.database.IllegalSQLQueryException
    com.siebel.etl.database.DBUtils.executeQuery(DBUtils.java:160)
    com.siebel.etl.database.WeakDBUtils.executeQuery(WeakDBUtils.java:242)
    com.siebel.analytics.etl.etltask.CountTableTask.doExecute(CountTableTask.java:99)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:411)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
    com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
    com.siebel.etl.engine.core.Session.run(Session.java:3251)
    java.lang.Thread.run(Thread.java:619)
    ::: CAUSE :::
    MESSAGE:::ORA-00942: table or view does not exist
    EXCEPTION CLASS::: java.sql.SQLSyntaxErrorException
    oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
    oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
    oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
    oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
    oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
    oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
    oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:194)
    oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:853)
    oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
    oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
    oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1469)
    oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:389)
    com.siebel.etl.database.cancellable.CancellableStatement.executeQuery(CancellableStatement.java:69)
    com.siebel.etl.database.DBUtils.executeQuery(DBUtils.java:150)
    com.siebel.etl.database.WeakDBUtils.executeQuery(WeakDBUtils.java:242)
    com.siebel.analytics.etl.etltask.CountTableTask.doExecute(CountTableTask.java:99)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:411)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:307)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:214)
    com.siebel.etl.engine.core.Session.getTargetTableRowCounts(Session.java:3375)
    com.siebel.etl.engine.core.Session.run(Session.java:3251)
    java.lang.Thread.run(Thread.java:619)
    151  SEVERE  Thu Jun 27 13:56:12 SGT 2013  Number of running sessions : 10
    152  SEVERE  Thu Jun 27 13:56:12 SGT 2013  Number of running sessions : 10
    153  SEVERE  Thu Jun 27 13:56:13 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_AR_REV_LinkageInformation_Extract_Full.txt SDE_ORA_GL_AR_REV_LinkageInformation_Extract_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    154  SEVERE  Thu Jun 27 13:56:13 SGT 2013  Number of running sessions : 10
    155  SEVERE  Thu Jun 27 13:56:17 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_APTermsDimension_Full' has completed with error code 0
    156  SEVERE  Thu Jun 27 13:56:18 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_PartyContactStaging_Full' has completed with error code 0
    157  SEVERE  Thu Jun 27 13:56:18 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_MovementTypeDimension' has completed with error code 0
    158  SEVERE  Thu Jun 27 13:56:18 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionSourceDimension_AP_LKP_Extract_Full' has completed with error code 0
    159  SEVERE  Thu Jun 27 13:56:19 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_TransactionSourceDimension_AP_CC_Extract' has completed with error code 0
    160  SEVERE  Thu Jun 27 13:56:38 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_APTermsDimension_Full.txt SDE_ORA_APTermsDimension_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    161  SEVERE  Thu Jun 27 13:56:38 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_PartyContactStaging_Full.txt SDE_ORA_PartyContactStaging_Full
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    162  SEVERE  Thu Jun 27 13:56:38 SGT 2013  Number of running sessions : 10
    163  SEVERE  Thu Jun 27 13:56:38 SGT 2013  Number of running sessions : 9
    164  SEVERE  Thu Jun 27 13:56:39 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1212_Flatfile.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_TransactionSourceDimension_AP_CC_Extract.txt SDE_ORA_TransactionSourceDimension_AP_CC_Extract
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    165  SEVERE  Thu Jun 27 13:56:39 SGT 2013  Number of running sessions : 8
    166  SEVERE  Thu Jun 27 13:56:44 SGT 2013  Request to start workflow : 'SDE_ORAR1212_Adaptor:SDE_ORA_GL_COGS_LinkageInformation_Extract_Full' has completed with error code 0
    167  SEVERE  Thu Jun 27 13:57:04 SGT 2013  pmcmd startworkflow -sv INT_AFDROBI -d Domain_AFDRBI -u Administrator -p **** -f SDE_ORAR1212_Adaptor  -lpf C:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\ORA_R1211.DataWarehouse.SDE_ORAR1212_Adaptor.SDE_ORA_GL_COGS_LinkageInformation_Extract_Full.txt SDE_ORA_GL_COGS_LinkageInformation_Extract_Full
    Status Desc : Succeeded
After a certain number of tasks I found that the Informatica workflows did not run. In my case this happened after 190 of 403 tasks, but sometimes after 183 of 403 tasks, so it is intermittent.
These are sample tasks that are still shown as running:
    SDE_ORA_EmployeeDimension
    SDE_ORA_GeoCountryDimension
    SDE_ORA_MovementTypeDimension
    SDE_ORA_Project
    SDE_ORA_StatusDimension_ProjectStatus
    SDE_ORA_TransactionSourceDimension_AP_LKP_Extract
    SDE_ORA_TransactionTypeDimension_APDerive
    SDE_ORA_TransactionTypeDimension_GLCOGSDerive
In the Informatica Monitor I found this:
    The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
    The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
    The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
    The Integration Service INT_AFDROBI is alive. Ping time is 0 ms.
Does it mean something?
I have tried bouncing the server, bouncing the DAC server and starting the ETL again, but the problem still happens.
Did I miss something in the DAC or Informatica configuration?
    Thanks
    Joni

Hi Naga,
(Quoting NagarajuCool's reply of Jul 8, 2013 6:49 AM, in response to Joni, with my results inline.)
1) Check the relational connections in Workflow Manager once.
Same as the TNS name.
2) The name should be the same in Workflow Manager, ODBC and the DAC client.
In DAC I set it the same as the SID --> connection test successful.
3) Have you set the custom properties in the Informatica Integration Service?
overrideMpltVarwithMapVar  Yes
4) Stop the Informatica services, copy the pmcmd file from the server to the client and start the services.
Done.
5) And check all of the DAC configuration once.
Done.
After applying the above settings, the DAC execution plan for Financial Analytics still takes a long time. Anything else I should try?
    Thanks,
    Joni
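For what it is worth, the ORA-00942 in the log above is raised when DAC's row-count step queries W_ORA_GLRF_DERV_F_TMP and the table does not exist in the warehouse schema. A minimal check, assuming you can connect to the DataWarehouse connection from the log as a user who can see the table, is to confirm whether the temporary table was actually created before the count ran:
SELECT owner, table_name
  FROM all_tables
 WHERE table_name = 'W_ORA_GLRF_DERV_F_TMP';
If this returns no rows, the task that is supposed to create the _TMP table either failed or had not yet run when the count was attempted.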

  • Another tax determination code being picked up along with right tax determination code during creation of BBP

    Hi Experts,
There is an issue I am facing while creating a BBP: the slices in table DFKKMOP are picking up another tax determination code along with the right tax determination code.
E.g. in EK02 for the BBP I have configured tax determination code ZA for main 0051 and sub 0050 for all the company codes, but it is also picking up ZX, which I have configured for main 100 - sub 202/203, main 200 - sub 202/203 and main 300 - sub 202/203.
Strangely, in the table DFKKMOP the tax determination code ZX is coming in the line which has main 0051 and sub 0050.
I don't understand why ZX is being shown in the field BB grouping - Budget Billing: Grouping Key for Tax Determination Code.
I have double-checked the customizing in EK02 and I don't find this tax determination code defined for main 0051/sub 0050, so why is it being picked up for the BBP line item in table DFKKMOP?
    Any suggestions are appreciated.

Well, the issue was that during creation of the BBP items in DFKKMOP the VAT corresponding to main/sub 100/3xx was being picked up; when I adapted the tax determination code there, the items were created without any split.
For the second one, the PA issue: in this case the standard tax determination during creation of the PA invoice via EA27 happens on the date of execution and not on the selection date or posting date. This is because in SPRO the invoicing configuration <<Define Basic Settings for Invoicing>>
has <<Date type for tax determination in Invoicing for Part. Bills.>> set to <<tax determination in invoicing based on entry date>>, which is why it takes the execution date into consideration. So even when we post with a SOLLDAT in the future so that another tax code would apply, it does not take that SOLLDAT; instead it takes today's execution date, for which the new tax code is not valid.
To overcome this issue we either need to make the new tax code valid from a date in the past, i.e. before the date of execution, or we need to change this parameter in customizing <<Define Basic Settings for Invoicing>> to <<tax determination in invoicing based on Posting date>>.
    Voilà!

  • What to look for in Execution plans ?

    Hi Pals,
Assuming the query execution is slow, I have collected the XML plan. What should I look for in the actual execution plan? What are the top 10 things I need to watch out for?
    I know this is a broad and generic question but I am looking for anyone who is experienced in query tuning to answer this question.
    Thanks in Advance.

Reading execution plans is a bit of an art. But a couple of things:
1) Download and install SQL Sentry Plan Explorer (www.sqlsentry.net). This is a free tool that in many cases gives a better experience for looking at execution plans, particularly big and ugly ones.
2) I usually look for the thickest arrows, as thickness indicates the number of rows being processed.
3) Look for deviations between estimates and actual values, as these deviations are often the source of bad performance. In Plan Explorer you can quickly flip between the two. In SSMS you need to look at the popup for every operator. (But again, it is the operators with fat arrows that are of most interest - and those before them.)
4) The way to read the plan is that the left-most operator asks the operators it is connected to for data. The net effect is that data flows from right to left, and the right-hand side is often more interesting.
5) Don't pay much attention to the percentages for operator cost. These percentages are estimates only, not actual values. They are only reliable if the estimates are correct all the way through - and when you have a bad plan that is rarely the case.
This was the overall advice. Then there is more specific advice: are indexes being used when expected? Note that scans are not necessarily bad. Sometimes your problem is that you have a loop join + index seek, when you should have had two scans and a hash join.
Try to get a grip on how you would process the query if you had to do it manually. Does the plan match that idea?
    Erland Sommarskog, SQL Server MVP, [email protected]
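As a concrete way of comparing estimates against actuals (point 3 above), one option is to capture the actual plan from the query window. A minimal sketch, assuming you can re-run the slow statement yourself (the procedure name below is made up for illustration):
SET STATISTICS XML ON;
EXEC dbo.YourSlowProcedure;   -- substitute the actual statement or procedure call
SET STATISTICS XML OFF;
The showplan XML that comes back carries both the estimated and the actual row counts per operator, which is exactly the deviation you want to look for.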

  • Question for SQL execution plan, strange..

Hi all,
We have met a strange problem with an execution plan. First, we have a stored proc that normally runs in less than 20 secs. One day it ran for more than 5 mins (Execution plan 1). I checked all the indexes (fragmentation is low and scan density is high) and re-ran the update statistics, but the performance still showed no improvement.
This is very strange to me: why did the query suddenly become slow on that one day? (We learn about the performance immediately because it is a daily function and users report to us if the performance is low; we only got the complaint today.)
Finally, we tested it in two ways.
1. We ran the below update statistics command with one index - "update statistics dbo.PCMS_CARD_TRANS MGM_IDX_PCMS_TRANS_TIME_PROCESSING"
Result: It runs under 20 secs.
2. We created a new index - "MGM_IDX_Test_TRANS_TIME"
Result: It runs in around 1 second, but I checked the execution plan (Execution plan 2): it didn't change and the new index is not used by it... we find that strange again.
All information is included below:
    Table - PCMS_CARD_TRANS, PCMS_CARD_PROFILE.
    VIEW - PCMS_V_CR_CARD_STOCK
    StoreProc:  PCMS_CardRoom_GetProcessedCardStock
    Index: All zipped.
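For reference, the single-index statistics refresh from test 1 written out as a complete statement; a minimal sketch only - the FULLSCAN option is an assumption, since the original command does not specify a sample rate:
UPDATE STATISTICS dbo.PCMS_CARD_TRANS MGM_IDX_PCMS_TRANS_TIME_PROCESSING WITH FULLSCAN;
Comparing the plan before and after such a targeted refresh usually shows whether a stale statistic on that index was what pushed the optimizer onto the slow plan.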

    USE [PCMS]
    GO
    /****** Object: View [dbo].[PCMS_V_CR_CARD_STOCK] Script Date: 4/9/2014 4:30:54 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE VIEW [dbo].[PCMS_V_CR_CARD_STOCK]
    AS
    SELECT dbo.PCMS_CARD_PROFILE.CARD_ID, dbo.PCMS_CARD_PROFILE.TYPE_CODE, dbo.PCMS_CARD_PROFILE.COLOR_SET_CODE,
    dbo.PCMS_CARD_PROFILE.VENDOR_ID, dbo.PCMS_CARD_PROFILE.CARD_STATUS, dbo.PCMS_CARD_PROFILE.LOC, dbo.PCMS_CARD_PROFILE.LOC_ID,
    dbo.PCMS_CARD_PROFILE.LAST_TRANS_NO, dbo.PCMS_CARD_COLOR_SET_MASTER.COLOR_SET_DESC, dbo.PCMS_CARD_TYPE.TYPE_DESC,
    dbo.PCMS_CARD_PROFILE.DECK_AMOUNT, dbo.PCMS_CARD_PROFILE.USAGE_TYPE, dbo.PCMS_CARD_PROFILE.STORAGE_ID,
    dbo.PCMS_CARD_PROFILE.STORAGE, dbo.PCMS_CARD_TYPE.CARD_TYPE_GROUP
    FROM dbo.PCMS_CARD_PROFILE INNER JOIN
    dbo.PCMS_CARD_COLOR_SET_MASTER ON
    dbo.PCMS_CARD_PROFILE.COLOR_SET_CODE = dbo.PCMS_CARD_COLOR_SET_MASTER.COLOR_SET_CODE INNER JOIN
    dbo.PCMS_CARD_TYPE ON dbo.PCMS_CARD_PROFILE.TYPE_CODE = dbo.PCMS_CARD_TYPE.TYPE_CODE
    WHERE (dbo.PCMS_CARD_COLOR_SET_MASTER.ENABLED = 1) AND (dbo.PCMS_CARD_TYPE.ENABLED = 1)
    GO
    EXEC sys.sp_addextendedproperty @name=N'MS_DiagramPane1', @value=N'[0E232FF0-B466-11cf-A24F-00AA00A3EFFF, 1.00]
    Begin DesignProperties =
    Begin PaneConfigurations =
    Begin PaneConfiguration = 0
    NumPanes = 4
    Configuration = "(H (1[40] 4[20] 2[20] 3) )"
    End
    Begin PaneConfiguration = 1
    NumPanes = 3
    Configuration = "(H (1 [50] 4 [25] 3))"
    End
    Begin PaneConfiguration = 2
    NumPanes = 3
    Configuration = "(H (1 [50] 2 [25] 3))"
    End
    Begin PaneConfiguration = 3
    NumPanes = 3
    Configuration = "(H (4 [30] 2 [40] 3))"
    End
    Begin PaneConfiguration = 4
    NumPanes = 2
    Configuration = "(H (1 [56] 3))"
    End
    Begin PaneConfiguration = 5
    NumPanes = 2
    Configuration = "(H (2 [66] 3))"
    End
    Begin PaneConfiguration = 6
    NumPanes = 2
    Configuration = "(H (4 [50] 3))"
    End
    Begin PaneConfiguration = 7
    NumPanes = 1
    Configuration = "(V (3))"
    End
    Begin PaneConfiguration = 8
    NumPanes = 3
    Configuration = "(H (1[56] 4[18] 2) )"
    End
    Begin PaneConfiguration = 9
    NumPanes = 2
    Configuration = "(H (1 [75] 4))"
    End
    Begin PaneConfiguration = 10
    NumPanes = 2
    Configuration = "(H (1[66] 2) )"
    End
    Begin PaneConfiguration = 11
    NumPanes = 2
    Configuration = "(H (4 [60] 2))"
    End
    Begin PaneConfiguration = 12
    NumPanes = 1
    Configuration = "(H (1) )"
    End
    Begin PaneConfiguration = 13
    NumPanes = 1
    Configuration = "(V (4))"
    End
    Begin PaneConfiguration = 14
    NumPanes = 1
    Configuration = "(V (2))"
    End
    ActivePaneConfig = 0
    End
    Begin DiagramPane =
    Begin Origin =
    Top = 0
    Left = 0
    End
    Begin Tables =
    Begin Table = "PCMS_CARD_PROFILE"
    Begin Extent =
    Top = 6
    Left = 38
    Bottom = 265
    Right = 212
    End
    DisplayFlags = 280
    TopColumn = 0
    End
    Begin Table = "PCMS_CARD_COLOR_SET_MASTER"
    Begin Extent =
    Top = 6
    Left = 250
    Bottom = 121
    Right = 426
    End
    DisplayFlags = 280
    TopColumn = 0
    End
    Begin Table = "PCMS_CARD_TYPE"
    Begin Extent =
    Top = 6
    Left = 464
    Bottom = 121
    Right = 637
    End
    DisplayFlags = 280
    TopColumn = 0
    End
    End
    End
    Begin SQLPane =
    End
    Begin DataPane =
    Begin ParameterDefaults = ""
    End
    Begin ColumnWidths = 13
    Width = 284
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    Width = 1500
    End
    End
    Begin CriteriaPane =
    Begin ColumnWidths = 11
    Column = 1440
    Alias = 900
    Table = 1170
    Output = 720
    Append = 1400
    NewValue = 1170
    SortType = 1350
    SortOrder = 1410
    GroupBy = 1350
    Filter = 1350
    Or = 1350
    Or = 1350
    Or = 1350
    End
    End
    End
    ' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'VIEW',@level1name=N'PCMS_V_CR_CARD_STOCK'
    GO
    EXEC sys.sp_addextendedproperty @name=N'MS_DiagramPaneCount', @value=1 , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'VIEW',@level1name=N'PCMS_V_CR_CARD_STOCK'
    GO
    USE [PCMS]
    GO
    /****** Object: Index [IDX_PCMS_CARD_PROFILE_LOC] Script Date: 4/9/2014 4:39:51 PM ******/
CREATE NONCLUSTERED INDEX [IDX_PCMS_CARD_PROFILE_LOC] ON [dbo].[PCMS_CARD_PROFILE]
(
    [LOC] ASC,
    [LOC_ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
    GO
    USE [PCMS]
    GO
    /****** Object: Index [IDX_PCMS_CARD_PROFILE_LOC_USAGE] Script Date: 4/9/2014 4:39:56 PM ******/
CREATE NONCLUSTERED INDEX [IDX_PCMS_CARD_PROFILE_LOC_USAGE] ON [dbo].[PCMS_CARD_PROFILE]
(
    [LOC] ASC,
    [USAGE_TYPE] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
    GO
    USE [PCMS]
    GO
    /****** Object: Index [IDX_PCMS_CARD_PROFILE_STORAGE] Script Date: 4/9/2014 4:40:01 PM ******/
CREATE NONCLUSTERED INDEX [IDX_PCMS_CARD_PROFILE_STORAGE] ON [dbo].[PCMS_CARD_PROFILE]
(
    [STORAGE] ASC,
    [STORAGE_ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
    GO
    USE [PCMS]
    GO
    /****** Object: Index [MGM_IDX_PCMS_CARD_PROFILE_CARD_ID] Script Date: 4/9/2014 4:40:07 PM ******/
CREATE NONCLUSTERED INDEX [MGM_IDX_PCMS_CARD_PROFILE_CARD_ID] ON [dbo].[PCMS_CARD_PROFILE]
(
    [CARD_ID] ASC,
    [STORAGE] ASC,
    [CARD_STATUS] ASC,
    [LOC] ASC
)
INCLUDE ( [TYPE_CODE],
    [COLOR_SET_CODE],
    [LOC_ID]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    USE [PCMS]
    GO
    /****** Object: Index [MGM_IDX_PCMS_CARD_PROFILE_CARD_STATUS] Script Date: 4/9/2014 4:40:12 PM ******/
CREATE NONCLUSTERED INDEX [MGM_IDX_PCMS_CARD_PROFILE_CARD_STATUS] ON [dbo].[PCMS_CARD_PROFILE]
(
    [CARD_STATUS] ASC,
    [VENDOR_ID] ASC,
    [LOC] ASC
)
INCLUDE ( [CARD_ID],
    [TYPE_CODE],
    [COLOR_SET_CODE],
    [USAGE_TYPE],
    [DECK_AMOUNT],
    [LAST_TRANS_NO]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    USE [PCMS]
    GO
    /****** Object: Index [PK_PCMS_CARD_PROFILE] Script Date: 4/9/2014 4:40:18 PM ******/
ALTER TABLE [dbo].[PCMS_CARD_PROFILE] ADD CONSTRAINT [PK_PCMS_CARD_PROFILE] PRIMARY KEY CLUSTERED
(
    [CARD_ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
    GO

  • Bad execution plans when using parameters, subquery and partition by

    We have an issue with the following...
    We have this table:
    CREATE TABLE dbo.Test (
    Dt date NOT NULL,
    Nm int NOT NULL,
CONSTRAINT PK_Test PRIMARY KEY CLUSTERED (Dt)
);
    This table can have data but it will not matter for this topic.
    Consider this query (thanks Tom Phillips for simplifying my question):
    declare @date as date = '2013-01-01'
    select *
    from (
    select Dt,
    row_number() over (partition by Dt order by Nm desc) a
    from Test
    where Dt = @date
    ) x
    go
    This query generates an execution plan with a clustered seek.
    Our queries however needs the parameter outside of the sub-query:
    declare @date as date = '2013-01-01'
    select *
    from (
    select Dt,
    row_number() over (partition by Dt order by Nm desc) a
    from Test
    ) x
    where Dt = @date
    go
    The query plan now does a scan followed by a filter on the date.
    This is extremely inefficient.
    This query plan is the same if you change the subquery into a CTE or a view (which is what we use).
    We have this kind of setup all over the place and the only way to generate a good plan is if we use dynamic sql to select data from our views.
    Is there any kind of solution for this?
    We have tested this (with the same result) on SQL 2008R2, 2012 SP1, 2014 CU1.
    Any help is appreciated.

    Hi Tom,
Parameter sniffing is a different problem. A query plan is made based on a value which may be really unrepresentative of the data distribution. We have this problem as well, e.g. when searching for today's data while the statistics think the table only has yesterday's data (a problem we hit a lot involves the lack of a sufficient number of buckets in statistics objects).
    This problem is different.
    - It doesn't matter what the parameter value is.
    - You can reproduce this example with a fresh table. I.e. without statistics.
    - No matter what the distribution of data (or even if the statistics are correct), there is absolutely no case possible (in my example) where doing a scan followed by a filter should have better results than doing a seek.
    - You can't change the behavior with hints like "option(recompile)" or "optimize for".
    This problem has to do with the specific combination of "partition by" and the filter being done outside of the immediate query. Try it out, play with the code, e.g. move the where clause inside the subquery. You'll see what I mean.
    Thanks for responding.
    Michel
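One workaround, besides the dynamic SQL mentioned above, is to pass the parameter into an inline table-valued function so that the filter sits inside the ranking query. A minimal sketch against the Test table from the original post (the function name is made up for illustration):
CREATE FUNCTION dbo.TestByDate (@date date)
RETURNS TABLE
AS RETURN
(
    SELECT Dt,
           ROW_NUMBER() OVER (PARTITION BY Dt ORDER BY Nm DESC) AS a
    FROM dbo.Test
    WHERE Dt = @date
);
GO
DECLARE @date date = '2013-01-01';
SELECT * FROM dbo.TestByDate(@date);
Because the inline function is expanded with the predicate already inside the derived table, the plan should get the same clustered seek as the first query in the post; whether this is practical depends on how many views would need wrapping.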

  • Subquery execution plan issue

    Hi All,
    Oracle v11.2.0.2
I have a SELECT query which executes in less than a second and selects a few records.
Now, if I put this SELECT query in the IN clause of a DELETE command, it takes ages (even though the DELETE is done using the primary key).
    See below query and execution plan.
    Here is the SELECT query
    SQL> SELECT   ITEM_ID
      2                         FROM   APP_OWNER.TABLE1
      3                        WHERE   COLUMN1 = 'SomeValue1234'
      4                                OR (COLUMN1 LIKE 'SomeValue1234%'
      5                                    AND REGEXP_LIKE (
      6                                          COLUMN1,
      7                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
      8  ));
       ITEM_ID
      74206192
    1 row selected.
    Elapsed: 00:00:40.87
    Execution Plan
    Plan hash value: 3153606419
    | Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |             |     2 |    38 |     7   (0)| 00:00:01 |
    |   1 |  CONCATENATION     |             |       |       |            |          |
    |*  2 |   INDEX RANGE SCAN | PK_TABLE1   |     1 |    19 |     4   (0)| 00:00:01 |
    |*  3 |   INDEX UNIQUE SCAN| PK_TABLE1   |     1 |    19 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("COLUMN1" LIKE 'SomeValue1234%')
           filter("COLUMN1" LIKE 'SomeValue1234%' AND  REGEXP_LIKE
                  ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
       3 - access("COLUMN1"='SomeValue1234')
           filter(LNNVL("COLUMN1" LIKE 'SomeValue1234%') OR LNNVL(
                  REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$')))
    Statistics
              0  recursive calls
              0  db block gets
              8  consistent gets
              0  physical reads
              0  redo size
            348  bytes sent via SQL*Net to client
            360  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
1  rows processed
Now see the DELETE command. ITEM_ID is the primary key for TABLE2.
    SQL> delete from TABLE2 where ITEM_ID in (
      2  SELECT   ITEM_ID
      3                         FROM   APP_OWNER.TABLE1
      4                        WHERE   COLUMN1 = 'SomeValue1234'
      5                                OR (COLUMN1 LIKE 'SomeValue1234%'
      6                                    AND REGEXP_LIKE (
      7                                          COLUMN1,
      8                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
      9  ))
    10  );
    1 row deleted.
    Elapsed: 00:02:12.98
    Execution Plan
    Plan hash value: 173781921
    | Id  | Operation               | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT        |                             |     4 |   228 | 63490   (2)| 00:12:42 |
    |   1 |  DELETE                 | TABLE2                      |       |       |            |          |
    |   2 |   NESTED LOOPS          |                             |     4 |   228 | 63490   (2)| 00:12:42 |
    |   3 |    SORT UNIQUE          |                             |     1 |    19 | 63487   (2)| 00:12:42 |
    |*  4 |     INDEX FAST FULL SCAN| I_TABLE1_3                  |     1 |    19 | 63487   (2)| 00:12:42 |
    |*  5 |    INDEX RANGE SCAN     | PK_TABLE2                   |     7 |   266 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("COLUMN1"='SomeValue1234' OR "COLUMN1" LIKE 'SomeValue1234%' AND
                  REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
       5 - access("ITEM_ID"="ITEM_ID")
    Statistics
              1  recursive calls
              5  db block gets
         227145  consistent gets
         167023  physical reads
            752  redo size
            765  bytes sent via SQL*Net to client
           1255  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
1  rows processed
What can be the issue here?
I tried the NO_UNNEST hint, which made a difference, but the DELETE was still taking around a minute (instead of 2 minutes), which is still far more than the sub-second response of the plain SELECT.
    Thanks in advance

    rahulras wrote:
    SQL> delete from TABLE2 where ITEM_ID in (
    2  SELECT   ITEM_ID
    3                         FROM   APP_OWNER.TABLE1
    4                        WHERE   COLUMN1 = 'SomeValue1234'
    5                                OR (COLUMN1 LIKE 'SomeValue1234%'
    6                                    AND REGEXP_LIKE (
    7                                          COLUMN1,
    8                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
    9  ))
    10  );
    The optimizer will transform this delete statement into something like:
    delete from table2 where rowid in (
        select t2.rowid
        from
            table2 t2,
            table1 t1
        where
                t1.itemid = t2.itemid  
        and     (t1.column1 =  etc.... )
)
With the standalone subquery against t1 the optimizer has been a little clever with the concatenation operation, but it looks as if there is something about this transformed join that makes it impossible for the concatenation mechanism to be used. I'd also have to guess that something about the way the transformation has happened has made Oracle "lose" the PK index. As I said in another thread a few minutes ago, I don't usually look at 10053 trace files to solve optimizer problems - but this is the second one today where I'd start looking at the trace if it were my problem.
    You could try rewriting the query in this explicit join and select rowid form - that way you could always force the optimizer into the right path through table1. It's probably also possible to hint the original to make the expected path appear, but since the thing you hint and the thing that Oracle optimises are so different it might turn out to be a little difficult. I'd suggest raising an SR with Oracle.
    Regards
    Jonathan Lewis
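For reference, a minimal sketch of the explicit-join, rowid-based rewrite Jonathan describes (table and column names are taken from the thread; the index hint is only an illustration of where you could force the path through TABLE1, not a tested fix):
DELETE FROM table2
 WHERE rowid IN (
       SELECT /*+ index(t1 pk_table1) */
              t2.rowid
         FROM app_owner.table1 t1,
              table2           t2
        WHERE t2.item_id = t1.item_id
          AND (t1.column1 = 'SomeValue1234'
               OR (t1.column1 LIKE 'SomeValue1234%'
                   AND REGEXP_LIKE (t1.column1, '^SomeValue1234[A-Z]{3}[0-9]{5}$')))
       );
Written this way the join and the predicates on TABLE1 are explicit, so you can hint (or let the optimizer choose) the same access path that the standalone SELECT already gets.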
