Does Gather Schema Statistics improve the performance of the R12 application?

Hi All,
If we run the “Gather Schema Statistics” program, will it improve the performance of the R12 application?
Platform Linux and DB version 10.2.0.4.
Thanks & Regards,
Tharun

Hi Tharun,
If we run the “Gather Schema Statistics” program, will it improve the performance of the R12 application?
Yes, it should help, as it ensures the optimizer has up-to-date statistics.
Please refer to these notes:
Concurrent Processing - How To Gather Statistics On Oracle Applications Release 11i and/or Release 12 - Concurrent Process,Temp Tables, Manually [ID 419728.1]
How Often Should Gather Schema Statistics Program be Run? [ID 168136.1]
Why gather stats?
Stats gathering should be set up as a routine, scheduled job. Even though this program is submitted from the front end as a concurrent program, it essentially performs a database-level task: it ensures that you have up-to-date optimizer statistics. Because the objects in a database are constantly changing, statistics must be regularly updated so that they accurately describe these database objects.
For an in-depth understanding of why it should be run, please refer to the doc:
Managing Optimizer Statistics
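If you want to run the same gather outside the concurrent manager, here is a minimal sketch; it assumes you are connected as APPS, and the two positional arguments (schema name, then estimate percent) are taken from the FND_STATS wrapper quoted later on this page, so treat it as illustrative rather than definitive:

-- Minimal sketch: gather stats for one EBS schema via FND_STATS
-- (assumption: run as APPS; remaining parameters left at their defaults).
begin
  fnd_stats.gather_schema_stats('AP', 10);  -- schema name, estimate percent
end;
/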
Thanks & Best Regards,

Similar Messages

  • Can we track the progress of Gather Schema Statistics?

    Is there a way that we can track the progress of gather schema statistics other than querying dba_tables?
    I am trying to find a better option. Please suggest.
    Thanks,
    Suneel

    SRV008 wrote:
    Is there a way that we can track the progress of gather schema statistics other than querying dba_tables?
    I am trying to find a better option. Please suggest.
    Thanks,
    Suneel
    No, it's either the request log file or dba_tables (the last_analyzed column).
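    For the dba_tables route, a minimal sketch (the owner 'AP' is just a placeholder):

    select table_name, last_analyzed
      from dba_tables
     where owner = 'AP'
     order by last_analyzed desc nulls last;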
    Thanks,
    Hussein

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL update, used the ORDERED hint, and created a temp table and updated the target table from it. None of these methods improved performance. The update touches 2 million rows in the target table, which has 15 million rows in total.
    Any suggestions or solutions for improving performance are appreciated
    SQL query:
    update targettable tt
    set mnop = 'G'
    where (x, y, z) in
          (select a.x, a.y, a.z
             from table1 a
            where (a.x, a.y, a.z) not in
                  (select b.x, b.y, b.z
                     from table2 b
                    where 'O' = b.defg
                      and mnop = 'P'
                      and hijkl = 'UVW'));

    987981 wrote:
    I was trying to improve the performance of the below query. I tried the following methods used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and upadated the target table using the same. The methods which I used did not improve any performance. And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
    The data count which is updated in the target table is 2 million records and the target table has 15 million records.Tables have rows btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method with the approaches you tried, points to that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
    From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
    That is not a small workload. Simple example: say the 2 million row find costs 1 ms/row and the 2 million row write also costs 1 ms/row. That is 2,000,000 rows x 2 ms = 4,000 seconds, or roughly a 66-minute workload. Due to the number of rows, any increase in time per row, either way, has a potential 2-million-fold impact.
    So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both?
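    A minimal first step toward answering that is to look at the execution plan; a sketch, assuming SQL*Plus and the corrected statement above:

    -- Hedged sketch: inspect the plan before rewriting anything.
    explain plan for
    update targettable tt
       set mnop = 'G'
     where (x, y, z) in
           (select a.x, a.y, a.z
              from table1 a
             where (a.x, a.y, a.z) not in
                   (select b.x, b.y, b.z
                      from table2 b
                     where 'O' = b.defg
                       and mnop = 'P'
                       and hijkl = 'UVW'));

    select * from table(dbms_xplan.display);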

  • How to improve the performance of the query

    Hi,
    Help me with tips on how to improve the performance of the query. Can I post the query?
    Suresh

    Below is the formatted query, and no wonder it is taking a lot of time. I will give you a list of issues soon after analyzing it further. Until then, try to spot the pitfalls yourself in this formatted query.
    SELECT rt.awb_number,
           ar.activity_id as task_id,
           t.assignee_org_unit_id,
           t.task_type_code,
           ar.request_id
    FROM activity_task ar,
         request_task rt,
         task t
    WHERE ar.activity_id = t.task_id
    AND ar.request_id = rt.request_id
    AND ar.complete_status != 'act.stat.closed'
    AND t.assignee_org_unit_id in (SELECT org_unit_id
                                   FROM org_unit
                                   WHERE (org_unit_id in (SELECT oo.org_unit_id
                                                          FROM org_unit oo
                                                          WHERE oo.org_unit_id = '3'
                                                          OR oo.parent_id = '3')
                                          OR parent_id in (SELECT oo.org_unit_id
                                                           FROM org_unit oo
                                                           WHERE oo.org_unit_id = '3'
                                                           OR oo.parent_id = '3'))
                                   AND has_queue = 1)
    AND ar.parent_task_id not in (SELECT tt.task_id
                                  FROM task tt
                                  WHERE tt.assignee_org_unit_id in (SELECT org_unit_id
                                                                    FROM org_unit
                                                                    WHERE (org_unit_id in (SELECT oo.org_unit_id
                                                                                           FROM org_unit oo
                                                                                           WHERE oo.org_unit_id = '3'
                                                                                           OR oo.parent_id = '3')
                                                                           OR parent_id in (SELECT oo.org_unit_id
                                                                                            FROM org_unit oo
                                                                                            WHERE oo.org_unit_id = '3'
                                                                                            OR oo.parent_id = '3'))
                                                                    AND has_queue = 1))
    AND rt.awb_number is not null
    ORDER BY rt.awb_number;
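    The most glaring pitfall is that the same org_unit subquery tree is repeated four times. As a sketch, assuming the OR/AND grouping reconstructed above, subquery factoring evaluates it once:

    WITH unit_tree AS (
      -- org unit '3', its children, and its grandchildren, queues only
      SELECT org_unit_id
        FROM org_unit
       WHERE (org_unit_id = '3'
              OR parent_id = '3'
              OR parent_id IN (SELECT oo.org_unit_id
                                 FROM org_unit oo
                                WHERE oo.org_unit_id = '3'
                                   OR oo.parent_id = '3'))
         AND has_queue = 1
    )
    SELECT rt.awb_number,
           ar.activity_id AS task_id,
           t.assignee_org_unit_id,
           t.task_type_code,
           ar.request_id
      FROM activity_task ar, request_task rt, task t
     WHERE ar.activity_id = t.task_id
       AND ar.request_id = rt.request_id
       AND ar.complete_status != 'act.stat.closed'
       AND t.assignee_org_unit_id IN (SELECT org_unit_id FROM unit_tree)
       AND ar.parent_task_id NOT IN (SELECT tt.task_id
                                       FROM task tt
                                      WHERE tt.assignee_org_unit_id IN
                                            (SELECT org_unit_id FROM unit_tree))
       AND rt.awb_number IS NOT NULL
     ORDER BY rt.awb_number;
    Cheers,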
    Sarma.

  • Improve the performance of the code

    Hi All,
    Just check the report below and give some hints to improve the performance of the report.
      LOOP AT WBS_TAB.
        CLEAR: HOLD_PO_NBR,                                    
               HOLD_PO_LINE.                                   
        SELECT * FROM EKKN
                 WHERE PS_PSP_PNR = WBS_TAB-INTNO
                 AND EBELN IN S_PO
                 AND KOKRS = P_KOKRS.
          CHECK EKKN-LOEKZ <> 'X'.
          CLEAR GC_PROFL.
          WRITE WBS_TAB-PSPHI TO LC_PSPID.
    *select direct projects only if p_direct = 'X'.
          IF P_DIRECT = 'X'.                        
            SELECT SINGLE PROFL FROM PROJ INTO GC_PROFL        
                   WHERE PSPID = LC_PSPID.          
            IF GC_PROFL+4(3) = '002'                
            OR GC_PROFL+4(3) = '007'.               
              CONTINUE.                            
            ENDIF.                                  
          ENDIF.
    *select indirect projects and cost centers only when p_indir = 'X'.
          IF P_INDIR = 'X'.                         
            SELECT SINGLE PROFL FROM PROJ INTO GC_PROFL        
                   WHERE PSPID = LC_PSPID.          
            IF GC_PROFL+4(3) NE '002'               
            AND GC_PROFL+4(3) NE '007'.             
              CONTINUE.                             
            ENDIF.                                  
          ENDIF.                                   
          CLEAR: GC_EINDT,                          
                 GC_PRCTR.                          
          GC_PRCTR = WBS_TAB-PRCTR.                 
          PERFORM PROCESS_MAIN2.
        ENDSELECT.
      ENDLOOP.
    FORM PROCESS_MAIN2.
      SELECT SINGLE * FROM EKKO WHERE EBELN = EKKN-EBELN.
      CHECK EKKO-LIFNR IN VEND_NO.
      CHECK EKKO-AEDAT IN S_POCDAT.                            
      SELECT SINGLE * FROM EKPO WHERE EBELN = EKKN-EBELN AND
                                      EBELP = EKKN-EBELP.
      CHECK EKPO-LOEKZ <> 'L' AND
            EKPO-LOEKZ <> 'S'.                                 
      SELECT SINGLE EINDT FROM EKET INTO GC_EINDT   
        WHERE EBELN = EKPO-EBELN                    
        AND EBELP   = EKPO-EBELP                    
        AND EINDT IN S_EINDT.                       
      IF NOT S_EINDT IS INITIAL.                   
        CHECK GC_EINDT IN S_EINDT.                  
      ENDIF.                                        
      CLEAR ITAB.
      ITAB-PO_CREATE = EKKO-AEDAT.                             
      ITAB-DEL_DAT   = GC_EINDT.                   
      ITAB-PO_NO     = EKKO-EBELN.
      ITAB-CURR_PO   = EKKO-WAERS.
      ITAB-CURR_CO   = TKA01-WAERS.
      ITAB-LINE      = EKPO-EBELP.
      ITAB-GR_SW     = EKPO-WEPOS.
      ITAB-GR_NON_VAL_SW = EKPO-WEUNB.
      IF EKPO-WEUNB = 'X'.                 "Goods Receipt, Non-Value
        ITAB-GR_SW = ''.        "switch evaluation of commitment to IR
      ENDIF.
      ITAB-TEXT       = EKPO-TXZ01.
      ITAB-QTY_ORD    = EKPO-MENGE.
      ITAB-UOM        = EKPO-MEINS.
      ITAB-AMOUNT_DOC = EKPO-EFFWR.
      ITAB-WBS = EKKN-PS_PSP_PNR.
      ITAB-COST_ELEM = EKKN-SAKTO.                             
      ITAB-KOSTL = EKKN-KOSTL.                                 
      ITAB-PRCTR = GC_PRCTR.
      ITAB-ANLN1 = EKKN-ANLN1.                                 
      ITAB-AUFNR = EKKN-AUFNR.                                
      ITAB-ZEKKN = EKKN-ZEKKN.                                 
      IF EKKN-VPROZ <> 0.
        PERCENT = EKKN-MENGE / EKPO-MENGE.
        ITAB-AMOUNT_DOC = ITAB-AMOUNT_DOC * PERCENT .
        ITAB-QTY_ORD = ITAB-QTY_ORD * PERCENT .
      ENDIF.
      IF ITAB-CURR_CO NE ITAB-CURR_PO.
    *   itab-amount_co = itab-amount_doc * ekko-wkurs.
    *   Convert from PO curr to CO currency
        GS_CONV_AMT = ITAB-AMOUNT_DOC / GS_CORATE.
        PERFORM GET_CURR_RATE USING ITAB-CURR_PO EKKO-AEDAT
                              CHANGING GS_RATE.
        ITAB-AMOUNT_CO =  GS_CONV_AMT * GS_RATE.
      ELSE.
        ITAB-AMOUNT_CO = ITAB-AMOUNT_DOC.
      ENDIF.
    *   Convert from PO curr to CO Code currency
      CLEAR: ITAB-CURR_LOC, GS_RATE.
      PERFORM GET_COCD_RATE USING EKKO-BUKRS
                            CHANGING ITAB-CURR_LOC GS_RATE.
      IF ITAB-CURR_LOC NE ITAB-CURR_PO.
    *   Get company code/local currency and rate.
        GS_CONV_AMT = ITAB-AMOUNT_DOC / GS_RATE.
        PERFORM GET_CURR_RATE USING ITAB-CURR_PO ITAB-PO_CREATE
                              CHANGING GS_RATE.
        ITAB-AMOUNT_LOC =  GS_CONV_AMT * GS_RATE.
      ELSE.
        ITAB-AMOUNT_LOC = ITAB-AMOUNT_DOC.
      ENDIF.
      ITAB-VEND_NO   = EKKO-LIFNR.
      APPEND ITAB.
      CLEAR ITAB.
    ENDFORM.
    Thanks,
    Subbu.

    Hi,
    In your code, instead of using
    SELECT * FROM EKKN
    WHERE PS_PSP_PNR = WBS_TAB-INTNO
    AND EBELN IN S_PO
    AND KOKRS = P_KOKRS.
    ENDSELECT.
    use:
    TYPES: BEGIN OF t_ekkn.
            INCLUDE STRUCTURE ekkn.
    TYPES: END OF t_ekkn.
    DATA: t_ekkn TYPE TABLE OF t_ekkn,
          t_ekkn_wa TYPE t_ekkn.
    FIELD-SYMBOLS: <t_ekkn> TYPE t_ekkn.
    SELECT * FROM EKKN
    INTO TABLE t_ekkn
    WHERE PS_PSP_PNR = WBS_TAB-INTNO
    AND EBELN IN S_PO
    AND KOKRS = P_KOKRS.
    LOOP AT t_ekkn ASSIGNING <t_ekkn>.
    * Copy your existing logic between SELECT & ENDSELECT of EKKN here.
    ENDLOOP.
    I hope this helps,
    Regards
    Raju Chitale

  • Improve the performance of the data activation on DSO.

    Hi,
    I would like to improve the performance of data activation on a DSO; 1,000 records should be activated within 10 minutes.
    Right now it takes 10 minutes to activate the data in the DSO, and it is strange that it takes a long time (about 9 minutes) before activation even starts.
    ex :
    in the request task log of the activation:
    data activation    15:10:00
    main process     15:10:01
    technical status  15:10:21
    But the activation step was started at 15:01:00 (via a process chain), so about nine minutes elapse before the log shows activity.
    I have no idea why it takes so long or how I can improve it.
    Kind regards,
    Masaaki

    Hi,
    Try this also.
    1. Remove the BEx Reporting check box in the ODS if it is not required.
    2. Go to manage ODS -> activate -> activate in parallel -> increase the number of processes from there. For direct access, try transaction RSODSO_SETTINGS. Not sure this is possible in your case.
    thanks,
    JituK

  • Options to improve the performance of the Job

    Hi Team,
    As part of the CRM Upgrade requirement, we are planning to use Account Life cycle functionality to reflect the status of an account.
    As per the SAP recommendations (Note 1113330), we are currently executing the program CRM_BUPA_USERSTATUS_CONV2ROLE to convert user status master data to BP roles. We have noticed that this program takes a long time even when we run it for a single business partner. We are trying to explore options for improving the performance of the job. In case anyone has done this kind of exercise in a previous assignment or has information on this, please provide your feedback on the points below.
    1) Total Volume of Customer Master Data
    2) How many records did you consider for one execution of the conversion program?
    3) How much time did it take for one execution? Did you do any performance tuning?
    4) When we run the program in background mode, we do not get a spool
    showing the log information. Was a custom report developed to view the log when
    the program is executed in background mode? If so, can you share the technical details?
    5) Any information on how many work processes were available for executing the jobs.
    Appreciate your help.
    Regards,
    Varun

    Hello Udaya ,
    Could you please try providing a range of BPs as per note 1121015? This can help in improving the performance.
    Thanks & regards,
    Krishnen

  • How often to run Gather Schema Statistics in Applications 11i

    Product: AOL
    Date written: 2003-12-02
    How often to run Gather Schema Statistics in Applications 11i
    ================================================
    PURPOSE
    How often should Gather Schema Statistics be run in Applications 11i?
    Explanation
    There is no fixed interval for running Gather Schema Statistics. Some systems may need to run it weekly, while others can run it monthly. The interval depends on the volume and nature of the data and on how frequently it changes.
    To determine the most effective interval, run it on different schedules and monitor the results.
    In general, run it:
    1) after a large amount of data has been added or changed
    2) after a data import
    3) when performance has degraded
    In 11i the ANALYZE command and the DBMS_STATS package are not supported, so FND_STATS must be used.
    Gather Schema Statistics uses FND_STATS.
    Example
    Reference Documents
    Note 168136.1 - How Often Should Gather Schema Statistics Program be Run?

    John,
    you can do these things:
    1. gather schema statistics regularly - weekly, full
    2. gather schema statistics daily - at least 10%
    3. rebuild the fragmented indexes regularly - every 15 days
    4. coalesce the tablespaces once a month
    5. purge the unwanted data once a week
    6. pin the db objects into the SGA with the dbms_shared_pool package
    7. find the objects which have become invalid and then revalidate them
    8. purge workflow runtime data
    and there are still some more that, as system administrator, you should keep a watch on...
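    For items 6 and 7, a minimal sketch (DBA access assumed; the package being pinned is just an example):

    -- pin a frequently used package in the shared pool (item 6)
    exec dbms_shared_pool.keep('APPS.FND_STATS', 'P');

    -- find invalid objects (item 7)...
    select owner, object_name, object_type
      from dba_objects
     where status = 'INVALID';

    -- ...and recompile a schema's invalid objects
    exec utl_recomp.recomp_serial('APPS');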
    But if you do the above, your job is mostly done.
    If you need any help, post here.
    Regards,
    sdsreenivas

  • Does concurrent Gather Schema Statistics generate histograms?

    Hello:
    Does anybody know if the concurrent Gather Schema Statistics program generates histograms?
    Thank you.
    Alex.

    Alex,
    When Gather Schema Statistics is executed, it reads FND_HISTOGRAM_COLS and builds the histograms.
    Performance Tuning the Apps Database Layer
    http://blogs.oracle.com/stevenChan/2007/05/performance_tuning_the_apps_da.html
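    To see which columns it will build histograms for, a quick check (run as APPS; the FND_HISTOGRAM_COLS columns used here match the query quoted later on this page):

    select table_name, column_name, nvl(hsize, 254) hsize
      from fnd_histogram_cols
     order by table_name, column_name;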
    Regards,
    Hussein

  • Gather Schema Statistics Report taking more than 13 hours to complete is it normal?

    I ran the Gather Schema Statistics report at 9 pm and it completed at 11 am the next morning. It took more than 13 hours; is this behavior normal?
    I have used the following parameter.
    Schema name: ALL
    Estimate percent:50
    Backup Flag :NOBACKUP
    History Mode :LASTRUN
    Gather Option:GATHER
    Invalidate Dependent Cursor : Y
    My database size is about 250 GB.
    Please reply

    Gather Schema Statistics errors out when I'm using the GATHER_AUTO option with 10%.
    Here is the log file
    +---------------------------------------------------------------------------+
    Application Object Library: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    FNDGSCST module: Gather Schema Statistics
    +---------------------------------------------------------------------------+
    Current system time is 13-AUG-2013 10:42:12
    +---------------------------------------------------------------------------+
    **Starts**13-AUG-2013 10:42:12
    ORACLE error 20001 in FDPSTP
    Cause: FDPSTP failed due to ORA-20001: SYS_NTGNSVL1S+OCZGRAAHKD9MYG== is an invalid identifier
    ORA-06512: at "APPS.FND_STATS", line 774
    ORA-06512: at line 1
    The SQL statement being executed at the time of the error was: SE
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP
    ORA-20001: SYS_NTGNSVL1S+OCZGRAAHKD9MYG== is an invalid identifier
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    Executing request completion options...
    Finished executing request completion options.
    +---------------------------------------------------------------------------+
    Concurrent request completed
    Current system time is 13-AUG-2013 10:43:29
    +---------------------------------------------------------------------------+
    I have used the following parameters
    Schema name: ALL
    Estimate percent:10
    Backup Flag :NOBACKUP
    History Mode :LASTRUN
    Gather Option:GATHER_AUTO
    Invalidate Dependent Cursor : Y

  • How often do we need to run gather schema statistics etc.?

    HI,
    Am on 11.5.10.2
    RDBMS 9.2.0.6
    How often do we need to run the following requests in Production?
    1.Gather schema statistics
    2.Gather Column statistics
    3.Gather Table statistics
    4.Gather All Column statistics
    Thanks

    Hi;
    We have discussed this issue here before. Please check the threads below, which could be helpful for your issue:
    How often we need to run gather schema statistics
    Re: Gather schema stats run
    How we can collect custom schema information wiht gather statistics
    gather schema stats for EBS 11.5.10
    gather schema stats conc. program taking too long time
    Re: gather schema stats conc. program taking too long time
    How it runs
    Gather Schema Statistics
    http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
    gather statistict collect which informations
    Gather Schema Statistics...
    Regards,
    Helios

  • FNDGSCST module: Gather Schema Statistics - ERROR: While GATHER_TABLE_STATS

    Hi
    I ran Gather Schema Statistics.
    In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 4 internal_flag= NOBACKUP
    the error are:
    stats on table AQ$_WF_CONTROL_P is locked
    stats on table FND_CP_GSM_IPC_AQTBL is locked
    Error #1: ERROR: While GATHER_TABLE_STATS:
    object_name=AP.JE_FR_DAS_010***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
    Error #2: ERROR: While GATHER_TABLE_STATS:
    object_name=AP.JE_FR_DAS_010_NEW***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
    Error #3: ERROR: While GATHER_TABLE_STATS:
    object_name=AP.JG_ZZ_SYS_FORMATS_ALL_B***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
    I ran this a while ago.
    Can anyone help me fix it?
    Thanks

    Please see old threads which discuss the same issue -- http://forums.oracle.com/forums/search.jspa?threadID=&q=Gather+AND+Schema+AND+Statistics+AND+ORA-20001&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
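    A common first check for this flavor of ORA-20001, sketched along the lines of the fix note cited elsewhere on this page (Gather Schema Statistics fails with Ora-20001 errors after 11G database upgrade [ID 781813.1]; run as APPS), is to look for histogram definitions that reference columns which no longer exist:

    select hc.table_name, hc.column_name
      from fnd_histogram_cols hc
     where not exists (select null
                         from all_tab_columns tc
                        where tc.table_name  = hc.table_name
                          and tc.column_name = hc.column_name);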
    Thanks,
    Hussein

  • Gather Schema Statistics

    Hi All,
          When I run Gather Schema Statistics, it completes with status Normal, but it takes only 45 minutes.
              EBS - 12.1.3
                 DB - 11.2.0.3
                 OS - RHEL  64
          As per my understanding, it should take 2-3 hours. After the DB was upgraded to 11.2.0.3, a run completed with error, and I followed the note: Gather Schema Statistics fails with Ora-20001 errors after 11G database upgrade [ID 781813.1].
          After that it completed normally, but each run takes at most 45 minutes, with estimate percentages of both 10 and 40. My DB size is 90 GB.
    Request Log file:
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    In GATHER_SCHEMA_STATS , schema_name= ALL percent= 40 degree = 8 internal_flag= NOBACKUP
    stats on table FND_CP_GSM_IPC_AQTBL is locked
    stats on table FND_SOA_JMS_IN is locked
    stats on table FND_SOA_JMS_OUT is locked
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP
    stats on table FND_CP_GSM_IPC_AQTBL is locked
    stats on table FND_SOA_JMS_IN is locked
    stats on table FND_SOA_JMS_OUT is locked
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    Now I query the below:
    SQL> select column_name, nvl(hsize,254) hsize from FND_HISTOGRAM_COLS where table_name = 'JE_BE_LINE_TYPE_MAP' order by column_name;
    COLUMN_NAME                         HSIZE
    SOURCE                                254
    a)   Is this expected behavior?
    b)   If not, please suggest how to fix it.
    Thanks
    SAL

    Did the concurrent program fail? -- Gather Schema Statistics Fails With Error For APPLSYS Schema (Doc ID 1393184.1)
    Please check the LAST_ANALYZED column of DBA_TABLES and DBA_INDEXES views (Note: 166346.1 - How to Determine When a Table Was Last Analyzed By the Gather Schema Statistics Program).
    Thanks,
    Hussein

  • Gather Schema Statistics issue?

    Hi
    Actually, we have a custom schema in our EBS R12.0.6 instance database, but I have observed that the 'Gather Schema Statistics' program is not picking up this schema. Why? Maybe something is wrong with the database schema registration, but for the past year and a half the interface associated with this schema has been running fine. I do not know how to resolve this issue.
    I can manually run 'Gather Table Statistics' program against all tables.
    Regards

    Hi;
    Actually, we have a custom schema in our EBS R12.0.6 instance database, but I have observed that the 'Gather Schema Statistics' program is not picking up this schema. Why? Maybe something is wrong with the database schema registration, but for the past year and a half the interface associated with this schema has been running fine. I do not know how to resolve this issue.
    To run gather stats for a custom schema, please check:
    gather schema stats for EBS 11.5.10
    gather schema stats for EBS 11.5.10
    I can manually run the 'Gather Table Statistics' program against all tables.
    Please see:
    How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually [ID 419728.1]
    Also see:
    How to work Gather stat
    Gather Schema Statistics
    http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
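    As a quick sanity check on the registration itself, a hedged sketch (the custom schema name XXCUST is hypothetical; as far as I recall, the ALL option only processes schemas it finds registered via FND_ORACLE_USERID):

    select oracle_id, oracle_username
      from fnd_oracle_userid
     where oracle_username = 'XXCUST';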
    Regards,
    Helios

  • Gather Schema Statistics - GATHER AUTO option failing to gather stats

    Hi ,
    We recently upgraded to a 10g DB and version 11.5.10 of Oracle EBS. I want to employ the GATHER AUTO option while running Gather Schema Statistics.
    To test the behavior, I created a test table with 1 million rows. Then stats were gathered for this table alone by using Gather Table Stats. Next, I deleted ~12% of the rows and issued a commit. The view all_tab_statistics shows that the table has stale statistics (stale_stats column = YES). After that I ran Gather Schema Stats for that particular schema, but the request did not pick up the test table.
    What is the criterion by which Oracle chooses which tables to gather statistics for under the Gather Auto option? I am aware of the 10% change in data, but how is this 10% calculated? Is it based only on (inserts + updates + deletes)?
    Also, what is the difference between Gather Auto and Gather Stale ?
    Any help is appreciated.
    Thanks,
    Jithin

    Randalf,
    FYI, this is what happens inside the concurrent program call; there are a few additional parameters for output/error messages:
    procedure GATHER_SCHEMA_STATS(errbuf           out varchar2,
                                  retcode          out varchar2,
                                  schemaname       in varchar2,
                                  estimate_percent in number,
                                  degree           in number,
                                  internal_flag    in varchar2,
                                  request_id       in number,
                                  hmode            in varchar2 default 'LASTRUN',
                                  options          in varchar2 default 'GATHER',
                                  modpercent       in number default 10,
                                  invalidate       in varchar2 default 'Y')
    is
      exist_insufficient exception;
      bad_input          exception;
      pragma exception_init(exist_insufficient, -20000);
      pragma exception_init(bad_input, -20001);
      l_message       varchar2(1000);
      Error_counter   number := 0;
      Errors          Error_Out;
      -- num_request_id number(15);
      conc_request_id number(15);
      degree_parallel number(2);
    begin
      -- Set the package body variable.
      stathist := hmode;
      -- check first if degree is null
      if degree is null then
        degree_parallel := def_degree;
      else
        degree_parallel := degree;
      end if;
      l_message := 'In GATHER_SCHEMA_STATS , schema_name= ' || schemaname
                   || ' percent= ' || to_char(estimate_percent) || ' degree = '
                   || to_char(degree_parallel) || ' internal_flag= ' || internal_flag;
      FND_FILE.put_line(FND_FILE.log, l_message);
      BEGIN
        FND_STATS.GATHER_SCHEMA_STATS(schemaname, estimate_percent,
                                      degree_parallel, internal_flag, Errors, request_id,
                                      stathist, options, modpercent, invalidate);
      exception
        when exist_insufficient then
          errbuf    := sqlerrm;
          retcode   := '2';
          l_message := errbuf;
          FND_FILE.put_line(FND_FILE.log, l_message);
          raise;
        when bad_input then
          errbuf    := sqlerrm;
          retcode   := '2';
          l_message := errbuf;
          FND_FILE.put_line(FND_FILE.log, l_message);
          raise;
        when others then
          errbuf    := sqlerrm;
          retcode   := '2';
          l_message := errbuf;
          FND_FILE.put_line(FND_FILE.log, l_message);
          raise;
      END;
      FOR i in 0 .. MAX_ERRORS_PRINTED LOOP
        exit when Errors(i) is null;
        Error_counter := i + 1;
        FND_FILE.put_line(FND_FILE.log, 'Error #' || Error_counter || ': ' || Errors(i));
        -- added to send back status to concurrent program manager bug 2625022
        errbuf  := sqlerrm;
        retcode := '2';
      END LOOP;
    end;
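    Note the modpercent parameter above, which defaults to 10: that is the 10% threshold you are asking about. The modification counts it is compared against come from table monitoring (enabled by default in 10g) and can be inspected directly; a sketch, where the schema name is a placeholder and, as a caveat, the counts are flushed to the dictionary only periodically or on demand:

    -- force the in-memory modification counters out to the dictionary
    exec dbms_stats.flush_database_monitoring_info;

    -- inspect accumulated changes since the last stats gather
    select table_name, inserts, updates, deletes, timestamp
      from all_tab_modifications
     where table_owner = 'APPS';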
