DBMS Gather Stats using ESTIMATE gives varying number of rows

I am a little intrigued by the gather_table_stats results for number of rows on user_tables.
A table with no maintenance on it has 242115 rows.
When I gather stats COMPUTE, num_rows from user_tables = 242115.
However, when I ESTIMATE, the figure changes for no apparent reason:
10% - 240710
25% - 240564
50% - 242904
99% - 242136
Using ESTIMATE, I would expect the number of rows that are inspected to change, but not the resulting number of rows in user_tables!
I wonder, why is this?
Thanks
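
For reference, this is roughly the experiment being described; a minimal sketch, assuming a placeholder table name MY_TAB:

BEGIN
  -- 10% sample; repeat with 25, 50, 99 or NULL (compute) to compare
  DBMS_STATS.gather_table_stats(
    ownname          => USER,
    tabname          => 'MY_TAB',
    estimate_percent => 10);
END;
/

-- num_rows is the scaled-up estimate; sample_size shows how many rows
-- were actually inspected for that run
SELECT num_rows, sample_size, last_analyzed
  FROM user_tables
 WHERE table_name = 'MY_TAB';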

"Thank you for that amusing analogy! However, it would be interesting to know where it gets this idea from. Why does it decide sometimes more, sometimes less, and on what basis?"
Actually I'm not the person who knows the precise algorithm, and I don't have any links handy to docs that describe how it is done. But one approach would be to enable SQL trace and check what SQL Oracle issues to gather the stats. Of course that won't reveal the whole algorithm, but you'll probably get some insight.
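As a rough sketch of that approach (not the full algorithm, just a way to see the recursive SQL; the 10046 event at level 4 captures binds, and MY_TAB is a placeholder table name):

ALTER SESSION SET tracefile_identifier = 'gather_stats';
ALTER SESSION SET events '10046 trace name context forever, level 4';

BEGIN
  DBMS_STATS.gather_table_stats(
    ownname          => USER,
    tabname          => 'MY_TAB',
    estimate_percent => 10);
END;
/

ALTER SESSION SET events '10046 trace name context off';
-- The trace file (see USER_DUMP_DEST, formatted with tkprof) shows the
-- SAMPLE clause Oracle used and how the sampled counts were gathered.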
"I mean, if I knew I had not removed or added any socks to the wardrobe, or even if I was unsure whether anyone else had, I would use the previous count as my starting point."
AFAIK Oracle doesn't have any previous knowledge, or, to be more precise, Oracle doesn't use it. You as a person probably know something more about how the table was or wasn't changed, but Oracle doesn't know and/or use such information, at least for stats gathering.
Gints Plivna
http://www.gplivna.eu

Similar Messages

  • ADOBE Form Using Table with dynamic number of rows

    Hi All
    First some information about our infrastructure:
    - AdobeDesigner 7.1 in the Developerstudio
    - SAP-Portal 7.0 SP15
    I have a view with tabstrips, and behind the tabs I have defined an event. On one tab I included an Adobe Form with a table. The data for the PDF should only be filled into the context for the form when I jump to this tab. I created the form by using this documentation [https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0859ad1-53aa-2a10-78ae-99e41c407669].
    To fill the table context I use the following code:
    IPrivateAnlegenBANFView.IPositionenElement position = null;
    IPrivateAnlegenBANFView.IPositionenNode posNode = wdContext.nodePositionen();
    int NUM_5_TIMES = 5;
    for (int i = 0; i < NUM_5_TIMES; i++) {
        IPrivateAnlegenBANFView.IPositionenElement posElement = wdContext.createPositionenElement();
        posElement.setMaterial("" + i);
        posElement.setKurztext("Test" + i);
        posElement.setWarengruppe("Warengr" + i);
        posNode.addElement(posElement);
    }
    If I put this code in the wdDoInit method it works fine and shows me 5 rows. But if I put the code in the action of the tabstrip it shows me only one row. I checked the entries of the context and there are 5 entries (I showed them in a WD table).
    Can someone tell me what I'm doing wrong?
    Thanks for an answer and kind regards
    Pascal

    Hi All
    finally i found the solution for the problem.
    When you define the Interactive Form in the view, do not set the property "dataSource" of the UI element InteractiveForm; it seems that otherwise the binding is static and not dynamic.
    Add the following source to the view controller.
    Global Part of the Source:
    private static IWDInteractiveForm form = null;
    Method wdDoModify:
    if (firstTime) {
        form = (IWDInteractiveForm) view.getElement("InteractiveForm");
    }
    When you have an Event where you fill your Contextnode which you want to display in the table of an Adobe Form Use this code:
    Action:
    public void onActionFillTab(com.sap.tc.webdynpro.progmodel.api.IWDCustomEvent wdEvent) {
      //@@begin onActionFillTab(ServerEvent)
      // ... code to fill the node for the Adobe Form table ...
      form.bindDataSource(wdContext.nodeTabelle().getNodeInfo());
      //@@end
    }
    If you want to clear your table and show it directly use in the action the following code:
    wdContext.nodeTabelle().invalidate();
    form.bindDataSource(wdContext.nodeTabelle().getNodeInfo());
    Kind regards
    pascal

  • Gather Stats on Newly Partitioned Table

    I partitioned an existing table containing 92 million rows. The method was dbms_redefinition, whereby I started the redefinition and then added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created, and I did not analyze any of the indexes. Then I loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran DBMS gather stats on this particular partition and it took over 15 hours. Normally it only takes 4 hours to run DBMS gather stats on the individual partitions, so I stopped it after 15 hours. When I monitored it while it was running, it looked like it was taking a really long time gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent it from taking so long when I run gather stats? Oracle version 10.2.0.4.

    -- Gather PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(ownname => upper(v_table_owner), tabname => upper(v_table_name),
    partname =>v_table_partition_name, estimate_percent => 20, cascade=> FALSE,granularity => 'PARTITION');
    -- Gather GLOBAL INDEX Statistics
    for i in (select * from sys.dba_indexes where table_owner = upper(v_table_owner)
    and table_name = upper(v_table_name) and partitioned = 'NO'
    order by index_name)
    loop
    SYS.DBMS_STATS.gather_index_stats(ownname => upper(v_table_owner), indname => i.index_name,
    estimate_percent => 20, degree => NULL);
    end loop;
    -- Gather SUB-PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(ownname => upper(v_table_owner), tabname => upper(v_table_name),
    partname =>v_table_subpartition_name, estimate_percent => 20, cascade=> TRUE,granularity => 'ALL');

  • Count number of rows for TWO QUERIES USING MINUS OPERATOR

    I have the following piece of SQL. I would like to know how I could programmatically, using PL/SQL, count the number of rows returned by the following statement. I know that %ROWCOUNT returns the row count for the last executed INSERT, UPDATE, DELETE, or SELECT INTO statement. Any help is much appreciated, thanks.
    select *
    from admt1m4.usr@tcprod u
    where u.authoriztion = 'Omf99FullUsage'
    and u.obid in (select right from admt1m4.usrtogrp@TCPROD ug where ug.left in
    (select obid from admt1m4.usrgrp@tcprod g 
    where g.participant not in ('super user grp', 't1_WbsPSAnalystGrp')))
    and u.activeuser = '+'
    MINUS
    select *
    from admt1m4.usr@TCPROD u
    where u.authoriztion = 'Omf99FullUsage'
    and u.obid in (select right from admt1m4.usrtogrp@TCPROD ug
    where ug.usrgrpname in ('super user grp', 't1_WbsPSAnalystGrp'))
    and u.activeuser = '+'

    Hi,
    Have you tried
    SELECT COUNT (1) cnt
      FROM (SELECT *
              FROM admt1m4.usr@tcprod u
             WHERE u.authoriztion = 'Omf99FullUsage'
               AND u.obid IN (
                      SELECT RIGHT
                        FROM admt1m4.usrtogrp@tcprod ug
                       WHERE ug.LEFT IN (
                                SELECT obid
                                  FROM admt1m4.usrgrp@tcprod g
                                 WHERE g.participant NOT IN
                                          ('super user grp', 't1_WbsPSAnalystGrp')))
               AND u.activeuser = '+'
            MINUS
            SELECT *
              FROM admt1m4.usr@tcprod u
             WHERE u.authoriztion = 'Omf99FullUsage'
               AND u.obid IN (
                      SELECT RIGHT
                        FROM admt1m4.usrtogrp@tcprod ug
                       WHERE ug.usrgrpname IN
                                         ('super user grp', 't1_WbsPSAnalystGrp'))
               AND u.activeuser = '+')
    or your requirement is something else...
    *009*
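
    If the count is needed inside PL/SQL rather than in a plain query, a minimal sketch is to SELECT COUNT(*) INTO a variable over the same MINUS query (abbreviated to a placeholder here):

    DECLARE
      l_cnt PLS_INTEGER;
    BEGIN
      SELECT COUNT(*)
        INTO l_cnt
        FROM (  -- put the full query with MINUS from above in here
              SELECT 1 AS c FROM dual
              MINUS
              SELECT 2 AS c FROM dual
             );
      DBMS_OUTPUT.put_line('Rows returned: ' || l_cnt);
    END;
    /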

  • Number of rows in a data view

    Hi,
    I have a Data View that is linked to a table in an MS Access database.
    How can I determine the number of rows that are in the data view?
    Thanks,
    Chris

    That is what I feared.  I've tried isEOF in a loop but it just kept on going.
    I then found that you needed to ...setAttribute("stayBOF", "bofAction") and ..setAttribute("stayEOF", "eofAction").  I added this but the code bombed on these lines and failed to set the bofAction and eofAction.  That is the subject of another current discussion but unfortunately I haven't received any responses to that.
    So I've now put together a small pdf (attached) that shows what I'm trying to achieve (I hope) and the problems that I'm having.
    I'm trying to populate a PDF table with records from an external table (of a varying number of rows). To run it you will need to create an ODBC Data Source to the enclosed Access mdb. Mine was called NRSP.
    Hopefully you will have the same problems that I'm having.  If not then the errors could be with my computer or setup.  Unfortunately I don't have access to another computer with LifeCycle Designer (I'm using 8.0) to test it on.
    PLEASE have a look at the pdf and let me know where I'm going wrong.  I'm tearing my hair out trying to get this to work.
    Thanks,
    Chris

  • How to count the number of Rows to be Updated before Update takes place..

    Hi all,
    I have a requirement where I have to count the number of rows to be updated before updating them. SQL%ROWCOUNT gives the number of rows updated (after the update takes place). How do I get the count of the number of rows to be updated/inserted/deleted? I was looking for a simple solution, like SQL%ROWCOUNT above, but I couldn't find any. I can use a function and return the value, which will give me the number of rows to be updated, but is there any simpler logic than that, or any count function? Your help is appreciated. Thanks!

    If you really want to do this (I have no clue why you would need it), then you can piggy back on any existing pessimistic locking you may already have in place.
    However, it would require two loops through the records of which you want to know the count before you update, and a second pass to update them.
    I would really re-think the need for this, though.
    SQL> create table t0304(c number);
    Table created.
    SQL> insert into t0304 select rownum from all_objects where rownum <= 10;
    10 rows created.
    SQL> commit;
    Commit complete.
    SQL> select * from t0304;
             C
             1
             2
             3
             4
             5
             6
             7
             8
             9
            10
    10 rows selected.
    SQL> declare
      2    cursor mycursor is select * from t0304 where mod(c,2) = 0 for update;
      3    i number := 0;
      4  begin
      5    for r in mycursor loop
      6      i := i + 1;
      7    end loop;
      8    dbms_output.put_line(i);
      9    for r in mycursor loop
    10      update t0304 set c = c + 20 where current of mycursor;
    11    end loop;
    12  end;
    13  /
    5
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.
    SQL> select * from t0304;
             C
             1
            22
             3
            24
             5
            26
             7
            28
             9
            30
    10 rows selected.
    SQL>
    Edited by: Steve Howard on Mar 4, 2011 5:57 PM
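
    If the rows don't need to be locked first, a simpler sketch against the same example table is to count with the same predicate before updating; note this is not concurrency-safe, since another session could change the data between the two statements:

    DECLARE
      l_cnt PLS_INTEGER;
    BEGIN
      -- count the rows the UPDATE would touch (same WHERE clause)
      SELECT COUNT(*) INTO l_cnt FROM t0304 WHERE MOD(c, 2) = 0;
      DBMS_OUTPUT.put_line('About to update ' || l_cnt || ' rows');

      UPDATE t0304 SET c = c + 20 WHERE MOD(c, 2) = 0;
      DBMS_OUTPUT.put_line('Actually updated ' || SQL%ROWCOUNT || ' rows');
    END;
    /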

  • Number of rows returned for a report

    I want to create reports on several tables; the number of rows in these tables varies a lot (5, 5000, and another can have 10000+ rows).
    On the Report Attributes page, is there a way to set the max number of rows returned to the number of rows in the table? For example, a table that has 10000 rows now may grow to 20000 rows in the near future. If I specify the "Max Row Count" as 20000, the number may be outgrown yet again soon. If I could specify "Max Row Count" as "current number of rows in the table" then this problem would not happen. Can it be done?

    that "Max Row Count" attribute is used to limit the number of rows returned by a htmldb report region. in your case it sounds as if you want to show all available rows all the time. in that case you'd be fine to just put a very large number into that field like 4million. that way you'd always show all your rows.
    hope this helps,
    raj

  • Interactive report - default number of rows

    I am using APEX 3.1.1 and cannot find a way to change the default of 15 for the "number of rows" returned when initially displaying an interactive report. Have I missed something?
    It is certainly easy enough using "Layout and Pagination - Number of Rows" for a non-interactive report region.
    thanks Peter

    Hi Peter,
    In the report definition, there is a section headed Default Report Settings. This shows the instruction:
    To create default report settings, run the report as a developer, modify the settings (like hiding columns, adding filters, etc.), select Save Report from the Actions Menu and then save As Default Report Settings.
    The default row count is one of those settings - make sure you Run the report before saving.
    Andy

  • Number of rows inserted is different in bulk insert using select statement

    I am facing a problem in bulk insert using SELECT statement.
    My sql statement is like below.
    strQuery :='INSERT INTO TAB3
    (SELECT t1.c1,t2.c2
    FROM TAB1 t1, TAB2 t2
    WHERE t1.c1 = t2.c1
    AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
    EXECUTE IMMEDIATE strQuery ;
    These SQL statements are inside a procedure. And this procedure is called from C#.
    The number of rows returned by the "SELECT" query is 70.
    On the very first call of this procedure, the number of rows inserted using strQuery is 70.
    But on the next call of the procedure (in the same transaction), the number of rows inserted is only 50.
    And if we keep calling this procedure repeatedly, it will sometimes insert 70 rows, sometimes 50, etc. It is showing some inconsistency.
    On my initial analysis I found that the default optimizer mode is "ALL_ROWS". When I changed the optimizer mode to "RULE", the issue does not occur.
    Has anybody faced this kind of issue?
    Can anyone tell me what the reason for this issue might be? Is there any other workaround for it?
    I am using Oracle 10g R2 version.
    Edited by: user13339527 on Jun 29, 2010 3:55 AM
    Edited by: user13339527 on Jun 29, 2010 3:56 AM

    You have very likely concurrent transactions on the database:
    >
    By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
    >
    If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
    Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
    You can try to run your test with:
    set transaction isolation level serializable;
    If the problem is not solved, you need to search possible Oracle bugs on My Oracle Support with keywords like:
    wrong results 10.2
    Edited by: P. Forstmann on Jun 29, 2010 1:46 PM
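
    A minimal sketch of how the procedure could ask for serializable isolation (the INSERT ... SELECT is abbreviated here; SET TRANSACTION must be the first statement of the transaction):

    DECLARE
      strQuery VARCHAR2(4000) :=
        'INSERT INTO TAB3 SELECT t1.c1, t2.c2 FROM TAB1 t1, TAB2 t2 WHERE t1.c1 = t2.c1';
    BEGIN
      COMMIT;  -- end any open transaction so SET TRANSACTION comes first
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

      -- every query in this transaction now sees the data as of the
      -- moment the transaction began
      EXECUTE IMMEDIATE strQuery;
      DBMS_OUTPUT.put_line(SQL%ROWCOUNT || ' rows inserted');
      COMMIT;
    END;
    /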

  • SUBMIT statement using variant

    Hi Gurus,
    I am trying to execute a report using a variant, and that variant has to be submitted in memory (i.e. I have three variants A, B, C; whenever I run the report using variant A, only that variant's fields should be passed to memory). But even when I am using variant B, by default it is taking another variant which contains all the fields. I think my issue is clear. I am using this SUBMIT statement.
    SUBMIT (sy-repid)
             AND RETURN
             EXPORTING LIST TO MEMORY
             WITH SELECTION-TABLE i_selscr.
    Please help me regarding this ASAP.
    Thanks and regards,
    Rajeshwar

    Hi Rajeshwar
    You can submit it by using variant name also:
    Submit <Report> USING SELECTION-SET variant .
    Check this link:
    http://sapdevelopment.co.uk/reporting/rep_submit.htm
    http://help.sap.com/abapdocu/en/ABAPSUBMIT_SELSCREEN_PARAMETERS.htm
    http://help.sap.com/abapdocu/en/ABAPSUBMIT_SELSCREEN_PARAMETERS.htm
    Need information about SUBMIT statement
    Regards
    Neha
    Edited by: Neha Shukla on Nov 30, 2008 12:46 PM
    Edited by: Neha Shukla on Nov 30, 2008 3:22 PM

  • Can we use two costing variant for standard cost estimate

    Hi,
    Can we use two costing variants for the standard cost estimate of two different materials in the same period? E.g. costing variant Z001 for material code 1000 and costing variant Z002 for material code 2000.
    Here the system does not allow changing the costing variant in Marking Allowance (transaction CK24) for marking and release of the material cost of 2000 if the standard cost for material code 1000 has already been marked and released.
    Thanks,
    Bijay

    For a material in a period, only one price can be released. Though you can have two separate costing variants and calculate the standard estimate with each, you can release based on only one variant for a month. Or use MR21 and update the price as per the other variant.
    Thanks and Regards

  • Should (must )i give port number  Multicast / broadcast using JMF ?

    hi,
    I transmit a JMF webcam stream using RTP to unicast and multi-unicast.
    Now I would like to multicast and broadcast.
    Do I have to give a port number for multicast and broadcast?
    I came to multicast to avoid having to give the same port number on the receive and transmit sides.
    Is there any way to just transmit and receive the webcam without giving a port number (multicast or broadcast or any other way)?
    Please give the exact difference between multicast and broadcast.

    Broadcast is link level, so LAN level. You transmit to all other hosts in the subnet using the broadcast address of the subnet (LAN).
    Multicast is over UDP, so over IP. You can transmit over the internet if the routers support multicast. You need to use multicast IP addresses and of course two ports (UDP) for each session.

  • Issues with using SV Time-Varying Loudness.vi

    Hi,
    I am rather new to LabVIEW and am having issues getting my Time-Varying Loudness calculations to come out correctly. Attached are my current VI and a .csv file of a set of my data (calibrated in Pa; sample rate = 50 kHz for 5 seconds). When I run the VI it does not output what I would expect. Here are my questions:
    Time-Varying Loudness:
    1) Why are there so many signals? There are hundreds of lines, but I only expected one.
    2) Why is my x-axis not a range of 1 to 24 like Barks should be?
    1/3 Octave Band Analysis:
    1) Why does it look incorrect? There shouldn't be a linear increase as a function of frequency...
    The data is of a part being squeezed and making a squeak, so it is a transient noise that happens around 2 seconds. Additionally, how would I go about making a waterfall plot (Time (s), Frequency (Hz), Amplitude (sones)) of this signal? Is there an easy way? Or do I have to make all three signals and add them to make one plot?
    Thank you for your help! Let me know if you need more information.
    -Troy
    P.S. I included a picture of the results as well.
    Solved!
    Go to Solution.
    Attachments:
    TimeVaryingIssue.zip 668 KB

    Hi Troy,
    1) The SV Specific Loudness VI "chunks" data into 2 ms blocks, and then returns these as individual Specific Loudness vs. Sone plots.  The colored lines you are seeing on your graph are representative of the 2,500 2 ms time periods within your five-second acquisition.  Each of these plots contains 241 points, however they should occur in ten sample "steps", one for each sone.  The digital filter's buffer takes a bit to fill and kick in properly, however, so you may want to give your first few rows( 0-.02 seconds or so) a closer look before using them- you may see a number of unexpected zeroes around the lowest sones.
    2) Since you are using a chart, the x-axis will increment with every subsequent run (the previous data is retained.)  If you only want to display the most recent data, I would recommend switching to a graph, which can also be found in the graph controls palette.  Also, see above (#1) for why the x-axis is longer than 24 points.
    3) Remember that dB is a unitless measure, and can only be used in reference to another value. A typical reference for sound is 20 uPa, however you will likely need to equalize your input data and determine/set a dB reference.  You may want to take a look at SVL Scale voltage to EU.vi (EU stands for equalized units) and SVL Set dB reference.vi.  In your case, I believe the Octave plot is showing your dB relative to a default value of 1.  The values should be accurate with respect to one another, but you will need to provide a reference value to calibrate the scale.
    4) I think that this VI serves as a good example of how to go about creating a waterfall plot:
    Waterfall Display for Octave (DAQmx)
    http://zone.ni.com/devzone/cda/epd/p/id/5562
    You will need to break up your waveform into chunks, much like the specific loudness VI, but the basic concept is roughly the same (take waveform chunk, take octave measurement of chunk, append octave measurement output cluster to array of clusters, repeat for remainder of waveform, display)
    Phew!  That was quite a bit of information.  Let me know if that makes sense, and don't be intimidated by the detail and/or unfamiliar functions or methods  - you're definitely on the right track, or at the very least asking the right questions.
    Tom L.

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, we have users uploading files, resulting in inserts of records into a table. A file could contain anywhere from 10000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost reduces and the queries execute faster.
    We have two such tables which grow based on user file uploads. I would like to schedule a job to gather stats during a non-peak hour, apart from the nightly automated Oracle job, for these two tables.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
    TYPE ttab
    IS
        TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(ownname => USER, tabname => ltab(i) , estimate_percent => dbms_stats.auto_sample_size,
            method_opt => 'for all indexed columns size auto', degree =>
            dbms_stats.auto_degree ,CASCADE => TRUE );
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
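
    One concrete option along those lines (just a sketch, and locking stats is only one of the possibilities hinted at above) is to lock statistics on the two tables so the default collection job skips them, and let the manual job override the lock:

    BEGIN
      -- the nightly auto job will now leave these two tables alone
      DBMS_STATS.lock_table_stats(ownname => USER, tabname => 'TAB1');
      DBMS_STATS.lock_table_stats(ownname => USER, tabname => 'TAB2');
    END;
    /
    -- and in p_manual_gather_table_stats, add force => TRUE to the
    -- dbms_stats.gather_table_stats call so it can gather past the lock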

  • Find the Column Name which gives Invalid Number Error

    Hi,
    There are about 150 columns in a table, and the data for this table comes from an external source such as a flat file. When this data is loaded into the table, an invalid number error is raised for a particular column. So I need to find for which numeric column a string value is about to be inserted.
    Since we are not sure whether the proper values are coming from the source, in the front end we pass the value within quotes.
    So how do we find the column for which the error is raised? :-)

    If you are using SQL*Loader, the log will tell you which row and column has the error.
    Otherwise you may need to code your own debugging statements.
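
    For the "code your own debugging" route, one hedged sketch (the staging table and column names here are hypothetical) is to load the file into VARCHAR2 staging columns first and test each value with TO_NUMBER:

    CREATE OR REPLACE FUNCTION is_number (p_val IN VARCHAR2) RETURN NUMBER IS
      l_n NUMBER;
    BEGIN
      l_n := TO_NUMBER(p_val);
      RETURN 1;
    EXCEPTION
      WHEN OTHERS THEN
        RETURN 0;  -- the value cannot be converted to a number
    END is_number;
    /

    -- list the rows and columns that would raise ORA-01722 on the real insert
    SELECT ROWID, col_a, col_b
      FROM staging_tab
     WHERE is_number(col_a) = 0
        OR is_number(col_b) = 0;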

    Hi All, I was wondering if anyone had any experience on how to invoke the tracking progress web service for external WBT training in LSO via XI. Currently the client is using multiple external vendors that host external content for training. These ve