Implications of not gathering statistics

Hi,
I have a scenario that is provoking vigorous debate in my workplace and may do so here.
We have a table with about 70 million rows in it. It grows at a rate of about 3-4 million rows a month, the rows being loaded daily rather than in one lot at the end of the month. Gathering statistics takes about 6 hours, consuming a significant part of our 9-hour window for completing various batch jobs, including loading the new rows.
The new rows do have quite different values in certain columns (some indexed) from the existing data. However, as these rows are processed over the course of a week they come to look just like the existing rows.
The action we're considering is to stop gathering statistics after every large data load and instead do it every few months, on the basis that the new data should not significantly skew the balance of the existing data. However, this has divided opinions.
The database is running on Oracle 10g R2.
Any thoughts?
Russell.

Oracle's default collection may or may not be the best for you given the size.
For a table this large you should have the Partitioning option. If you do, then you should write your own stats collection to look only at active partitions and, if possible, set the archival data partitions' tablespaces to READ ONLY.
Then, with respect to collection, do some research into what estimate percentage gives you statistics good enough to support a good plan. In 10gR2 this may be a very high number. With 11g and above the default collection seems sufficiently improved that you can trust it.
SB advises "let Oracle be Oracle" and I agree, right up until doing that hurts performance and interferes with the primary purpose of the database: supporting your organization and its customers.
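If the Partitioning option is available, a partition-aware collection along those lines might look like the sketch below. The table and partition names are hypothetical, and the estimate percentage is just a starting point for the research mentioned above:

```sql
-- Gather statistics only on the partition touched by the daily load,
-- skipping the expensive global pass (names are illustrative).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'BIG_FACT_TABLE',
    partname         => 'P_CURRENT_MONTH',
    granularity      => 'PARTITION',
    estimate_percent => 10,
    cascade          => TRUE,      -- local indexes too
    degree           => 4);
END;
/
```

Global statistics would then be refreshed only occasionally, which fits the "every few months" proposal under debate.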

Similar Messages

  • Best practices for gathering statistics in 10g

    I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
    I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
    Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.

    Hi,
"Is this the right approach to generate object stats for a schema and keep it up to date?"
    The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned ANALYZE TABLE and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
I spoke with Andrew Holdsworth of Oracle's SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, re-analyzing only when a change would make a difference in execution plans (not at the default 20% re-analyze threshold).
    I have my detailed notes here:
    http://www.dba-oracle.com/art_otn_cbo.htm
    As to system stats, oh yes!
By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
    No Workload (NW) stats:
    CPUSPEEDNW - CPU speed
    IOSEEKTIM - The I/O seek time in milliseconds
IOTFRSPEED - The I/O transfer speed in bytes per millisecond
    I have my notes here:
    http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
    Hope this helps. . . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
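A weekly schema-stats job along the lines the poster asked about might look like the following sketch (the schema name and options are illustrative, not taken from the posts):

    ```sql
    -- Weekly object statistics for one schema; cascade => TRUE includes
    -- the index statistics the poster asked about.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname => 'MYAPP',            -- hypothetical schema
        cascade => TRUE,
        options => 'GATHER STALE');    -- refresh only stale statistics
    END;
    /

    -- Workload system statistics, captured over a representative busy period:
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
    -- ... let the normal workload run for a while ...
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');
    ```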

  • Gathering Statistics in 10.7 NCA with 8i Database

    In preparation for the 11i upgrade we are first migrating our database to 8i
    (strongly recommended Category 1 pre-upgrade step 5).
    I have contacted Oracle Support on whether it is recommended to gather
    statistics in a 10.7 NCA environment (optimizer_mode remains RULE). Oracle
    Support says NOT to gather statistics in a 10.7 environment.
    This is contradictory information to several documents I have found.
    Furthermore, Oracle has provided ADXANLYZ.sql and FND_STATS for the 10.7
    environment.
    The following sources recommend gathering statistics in a 10.7 environment:
    1) 10.7 Installation Manual (A47542-1) page A-16
    2) 11i Upgrade Manual (A69411-01) page 2-5
    3) Metalink note 1065584.6
    Can somebody please clarify? Your feedback is much appreciated.
    Thank you,
    Rich

Originally posted by Rich Cisneros:
    "We will be running 10.7 NCA in a server-partitioned mode for 6-8 months using 7.3.4.4 with 8.1.6.2.
    Should I gather statistics as part of the 8i database upgrade (still 10.7) or as part of the 11i Application upgrade?"
    Rich,
    Gather Statistics is only relevant to databases running with optimiser mode = COST or CHOOSE. Apps 10.7 runs with optimiser mode = RULE so you don't need to gather statistics until you start your actual upgrade to 11i, which will run with CBO.
    Hope this makes it clear.
    Steve

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all,
    My DB is 11.1
I find that gathering statistics on partitioned tables is really slow.
TABLE_NAME      NUM_ROWS   BLOCKS  SAMPLE_SIZE  LAST_ANALYZED  PARTITIONED  COMPRESSION
    O_FCT_BP1      112123170   843140     11212317  8/30/2011 3:5  NO           DISABLED
    LEON_123456    112096060   521984     11209606  8/30/2011 4:2  NO           ENABLED
    O_FCT          115170000   486556       115170  8/29/2011 6:3  YES
SQL> SELECT COUNT(*) FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT'
      3  ;
      COUNT(*)
           112
    I used the following script:
BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    Gathering statistics takes 2 minutes for each of the first two tables, but more than 10 minutes for the partitioned table.
    The time of collecting statistics accounts for a large part of total batch time.
    And most jobs of the batch are full load in which case all partitions and subpartitions will be affected and we can't just gather specified partitions.
    Does anyone have some experiences on this subject? Thank you very much.
    Best regards,
    Leon
    Edited by: user12064076 on Aug 30, 2011 1:45 AM

    Hi Leon
Why don't you gather stats at partition level? If your partition data is not going to change after a day (a date range partition, for example), you can simply gather at partition level:
    GRANULARITY => 'PARTITION' for partition level, and
    GRANULARITY => 'SUBPARTITION' for subpartition level.
    You are gathering global stats every time, which you may not require.
    Edited by: user12035575 on 30-Aug-2011 01:50
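Applied to the script above, the partition-level variant might look like this sketch (the partition name is hypothetical):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    partname         => 'P_20110829',  -- hypothetical
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent => 10,
                                    granularity      => 'PARTITION',   -- no global pass
                                    cascade          => false);
    END;
    /
    ```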

  • How important is gathering statistics on SYS objects?

    Hi,
How important is gathering statistics on data dictionary tables and other X$ tables in the SYS schema? Is it bad to keep these statistics? Recently our Sr. DBA deleted all the SYS schema stats, saying that they would adversely affect DB performance. Is that true?
    Regards
    Satish

    Hi Satish,
    *10g:*
    A new DBA task in Oracle Database 10g is to generate statistics on data dictionary objects which are contained in the SYS schema. The stored procedures dbms_stats.gather_database_stats and dbms_stats.gather_schema_stats can be used to gather the SYS schema stats. Here is an example of using dbms_stats.gather_schema_stats to gather data dictionary statistics:
EXEC dbms_stats.gather_schema_stats('SYS');
    *9i*
While it is supported in 9.2 to gather statistics on the data dictionary and fixed views, doing so isn't the norm.
    There is a bug, fixed only in 10gR2 (not expected to be back-ported to 9.2), that can cause this error. The fix: don't generate statistics against SYS, and especially not against the fixed tables.
    For this query, let's see if we can get a better plan by removing statistics or by getting better statistics, or if we need to do something else to tune it. Take the SYS statistics as before, but with gather_fixed => false.
    I would like for you to test first by deleting the statistics on these two X$ tables and see how the query runs (elapsed time, plan).
exec dbms_stats.delete_table_stats('SYS','X$KQLFXPL');
    exec dbms_stats.delete_table_stats('SYS','X$KGLOB');
    Then you can gather statistics on them using dbms_stats.gather_table_stats and check again (elapsed time, plan).
    exec dbms_stats.gather_table_stats('SYS','X$KQLFXPL');
    exec dbms_stats.gather_table_stats('SYS','X$KGLOB');
    The issue with this is that the contents of these fixed views, particularly x$kqlfxpl, can change dramatically. Gathering fixed object statistics may help now and cause problems later as the contents change.
Warning: this is a bit dangerous due to latch contention; see the following note. I've supported a couple of very busy systems that were completely halted for a time by latch contention on x$kglob caused by monitoring queries (particularly on v$open_cursor).
    Note.4339128.8 Ext/Pub Bug 4339128 - Heavy latch contention from queries against library cache views.
    Hope this answers your question . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Gathering statistics on interMedia indexes and tables

    Has anyone found any differences (like which one is better or worse) between using the ANALYZE sql command, dbms_utility package, or dbms_stats package to compute or estimate statistics for interMedia text indexes and tables for 8.1.6? I've read the documentation on the subject, but it is still unclear as to which method should be used. The interMedia text docs say the ANALYZE command should be used, and the dbms_stats docs say that dbms_stats should be used.
Any help or past experience would be greatly appreciated.
    Thanks,
    jj

    According to the Support Document "Using statistics with Oracle Text" (Doc ID 139979.1), no:
    Q. Should we gather statistics on the underlying DR$/DR# tables? If yes/no, why?
    A. The recommendation is NO. All internal recursive queries have hints to fix the plans that are deemed most optimal. We have seen in the past that statistics on the underlying DR$ tables may cause query plan changes leading to serious query performance problems.
    Q. Should we gather statistics on Text domain indexes ( in our example above, BOOKS_INDEX)? Does it have any effect?
    A: As documented in the reference manual, gathering statistics on Text domain index will help CBO to estimate selectivity and costs for processing a CONTAINS() predicate. If the Text index does not have statistics collected, default selectivity and cost will be used.
    So 'No' on the DR$ tables and indexes, 'yes' on the user table being indexed.
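Following that advice with the names from the quoted note, the collection would be limited to the user table and the domain index; a sketch (the table name is assumed):

    ```sql
    -- Statistics on the indexed user table and on the Text domain index,
    -- but NOT on the internal DR$/DR# tables.
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BOOKS');        -- table name assumed
    EXEC DBMS_STATS.GATHER_INDEX_STATS(USER, 'BOOKS_INDEX');  -- from the quoted note
    ```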

  • Adobe flash player does not load statistics on Google Public Data

Adobe Flash Player does not load statistics on Google Public Data and displays a white page. However, if I put the same URL into another browser the page loads seamlessly. Is it a Firefox bug or an Adobe Flash Player bug?

    I looked on the Google Public Data site, but couldn't find any Flash videos.
    Can you provide a link where you say the problem is occurring?

  • Help,why brconnect do not collect statistics for mseg table?

I found that the "MSEG" table's statistics are too old,
    so I checked the logs in DB13, and the scheduled job does not collect statistics for "MSEG".
    Then I executed manually: brconnect -c -u system/system -f stats -t mseg -p 4
    but this command still does not collect statistics for mseg.
    KS1DSDB1:oraprd 2> brconnect -c -u system/system -f stats -t mseg u2013f collect -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0154E Unexpected option value 'u2013f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    BR0806I End of BRCONNECT processing: ceenwjre.log 2010-11-12 08.41.38
    BR0280I BRCONNECT time stamp: 2010-11-12 08.41.38
    BR0804I BRCONNECT terminated with errors
    KS1DSDB1:oraprd 3> brconnect -c -u system/system -f stats -t mseg -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0805I Start of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.04
    BR0484I BRCONNECT log file: /oracle/PRD/sapcheck/ceenwjse.sta
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.11
    BR0813I Schema owners found in database PRD: SAPPRD*, SAPPRDSHD+
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.12
    BR0807I Name of database instance: PRD
    BR0808I BRCONNECT action ID: ceenwjse
    BR0809I BRCONNECT function ID: sta
    BR0810I BRCONNECT function: stats
    BR0812I Database objects for processing: MSEG
    BR0851I Number of tables with missing statistics: 0
    BR0852I Number of tables to delete statistics: 0
    BR0854I Number of tables to collect statistics without checking: 0
    BR0855I Number of indexes with missing statistics: 0
    BR0856I Number of indexes to delete statistics: 0
    BR0857I Number of indexes to collect statistics: 0
    BR0853I Number of tables to check (and collect if needed) statistics: 1
    Owner SAPPRD: 1
    MSEG     
    BR0846I Number of threads that will be started in parallel to the main thread: 4
    BR0126I Unattended mode active - no operator confirmation required
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0817I Number of monitored/modified tables in schema of owner SAPPRD: 1/1
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0877I Checking and collecting table and index statistics...
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0879I Statistics checked for 1 table
    BR0878I Number of tables selected to collect statistics after check: 0
    BR0880I Statistics collected for 0/0 tables/indexes
    BR0806I End of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.16
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.17
    BR0802I BRCONNECT completed successfully
    the log says:
    Number of tables selected to collect statistics after check: 0
    Could you give some advices?  thanks a lot.

    Hello,
If you would like to force the collection of stats for table MSEG you need to use the -f (force) switch.
    If you leave out the -f switch, the stats_change_threshold parameter is applied, as you correctly observed:
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm]
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm]
    You have tried to do this in your second example :
    ==> brconnect -c -u system/system -f stats -t mseg u2013f collect -p 4
    Therefore you received:
    BR0154E Unexpected option value 'u2013f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
Your statement was almost correct; however, the character in front of the 'f' switch is an en dash ('u2013' in the log) rather than a hyphen, which is why it was rejected.
    Try again with the following statement (-f instead of u2013f) and you will see that it works:
    ==> brconnect -c -u system/system -f stats -t mseg -f collect -p 4
    I hope this can help you.
    Regards.
    Wim

  • Warning ALSB Statistics Manager BEA-473007 Aggregator did not receive statistics from ...

    Hi,
    I am using cluster with osb_server1 and osb_server2. While starting the servers, I am facing below error on Managed Server(osb_server2) but only warning on Managed Server(osb_server1).
    Warning on managed server1(osb_server1)
    <Warning> <ALSB Statistics Manager> <BEA-473007> <Aggregator did not receive statistics from [osb_server2] for the aggregation performed for tick 1855320.>
    Error on managed server2(osb_server2)
    <Nov 24, 2011 11:23:00 AM UTC> <Error> <ALSB Statistics Manager> <BEA-473003> <Aggregation Server Not Available. Failed to get remote aggregator
    java.rmi.UnknownHostException: Could not discover URL for server 'osb_server1'
    at weblogic.protocol.URLManager.findURL(URLManager.java:145)
    at com.bea.alsb.platform.weblogic.topology.WlsRemoteServerImpl.getInitialContext(WlsRemoteServerImpl.java:94)
    at com.bea.alsb.platform.weblogic.topology.WlsRemoteServerImpl.lookupJNDI(WlsRemoteServerImpl.java:54)
    at com.bea.wli.monitoring.statistics.ALSBStatisticsManager.getRemoteAggregator(ALSBStatisticsManager.java:291)
    at com.bea.wli.monitoring.statistics.ALSBStatisticsManager.access$000(ALSBStatisticsManager.java:38)
    Truncated. see log file for complete stacktrace
    Please provide your solutions here.
    Thanks


  • What Are the Security Implications of not Completely Signing Database?

    Hello everyone,
    What are the security implications of not completely signing the database?
    From http://www.archlinux.org/pacman/ ,
    The following quote implies that the database exists merely just in case hand tweaking is necessary:
    maintains a text-based package database (more of a hierarchy), just in case some hand tweaking is necessary.
    However, considering that there are cases that pacman's local database needs to be restored, there are implications that the database is essential for pacman to function properly.
    From https://wiki.archlinux.org/index.php/Ho … l_Database :
    Restore pacman's local database
    Signs that pacman needs a local database restoration:
    - pacman -Q gives absolutely no output, and pacman -Syu erroneously reports that the system is up to date.
    - When trying to install a package using pacman -S package, and it outputs a list of already satisfied dependencies.
    - When testdb (part of pacman) reports database inconsistency.
    Most likely, pacman's database of installed software, /var/lib/pacman/local, has been corrupted or deleted. While this is a serious problem, it can be restored by following the instructions below.
    I know that all official packages (from core, extra, community, etc.) are signed so that all files should be safe, but I'm just paranoid.
    What if the database was hacked?  Will this lead to installation of harmful software?
    Sincerely,
    Cylinder57
    Last edited by Cylinder57 (2012-10-15 03:42:31)

    Cylinder57 wrote:
    From this quote:
    Allan wrote:But, the OP (also?) talks about the local package database on his computer.  That is not signed at all as there is no point.  If someone can modify that, then they can regenerate the signature, or just modify any other piece of software on your computer.
    Is it going to be easy for anyone other than the authorized user to modify the local package database?
Allan basically answered that with the quote above already, as I understand it. Someone with access to the installation, e.g. someone able to chroot into your PC via USB, is not held back by any ACLs. However, modifying the local database only makes limited sense because the packages are already installed. Pacman would only re-check if you re-install a package. The only really relevant attack vector for the package database is
    (1) installing an older package with a vulnerability,
    (2) replacing the up-to-date package sig in the local database with the older one, and
    (3) modifying the system, e.g. via pacman.conf excludes, to not update that package;
    then re-installing would not create a sig error either, and you would be stuck with the bogus old package.
With a signed database this would not be possible. However, as Allan wrote earlier, even with a signed database the attacker can manually install whatever is needed in this scenario (bypassing pacman and the package cache entirely). So, if you are really paranoid about that, you probably want to spend (a lot of configuring) time with something like the "aide" package.
    Cylinder57 wrote:
    And, are the following statements correct:
    If the repository databases are modified, the hacker might be able to modify the packages on the server (Considering that if someone can modify the local package database, that person can modify any other piece of software on that particular computer.)
However, pacman won't let users install the modified packages (due to package signing), unless at least one person with access is bribed (at least, for an individual package).
    I don't know the intricacies of the server infrastructure (I only saw that the servers have great names :-), but I am pretty certain your statements assume that correctly. It is pretty unlikely that someone able to modify the central repository database fails at placing a bogus package for shipping with those access rights at this time. Yet it does no harm not to post any details of such a scenario here, imo. In any case: a compromised mirror would be enough for that, and easier to achieve (hacked anywhere or e.g. in a non-democratic state). Plus, you also answered it yourself. The keys are key to our safety there. Which keeps me hoping that no criminal lawnmower salesmen frequent the Brisbane area.
    As you put up a thread about this, one question you can ask yourself is:
    Have you always checked the new signature keys that pacman asks about during updates? If you ever pressed "accept/enter" without checking them out-of-band (e.g. against the webserver), a compromised mirror database might have just created a "legitimate" key .. a user error, but another attack vector that database signing would catch.
    edit: Re-thinking the last paragraph just after posting, I now believe it would not be as easy as implied, simply because the bogus key is not trusted by one of the master keys. The pacman PGP trust model should catch that without database signing. At least it would if only the official repositories are activated, but that's a pre-requisite to the whole thread.
    Last edited by Strike0 (2012-10-20 23:01:26)

  • SQL2310N the utility could not generate statistics: error "-911"

    Hello,
we have a 4.6c system with DB2 V8.2.2. Every time the runstats job scheduled via DB13 runs, we get an error message.
    Errormessage: Error -2310 in dmdb6upd.c(687):
    SQL2310N the utility could not generate statistics: error "-911"
    Regards,
    Alexander Türk

    Here some more information:
    07.01.2007 13:00:31 Ausführung des logischen Kommandos REORGCHK_ALL auf Rechner b0d0m102
    07.01.2007 13:00:31 Parameter: -t all -n PH0 -z 3600 -m b -l 1800
    07.01.2007 20:01:09 pct_long_lob has been set to 10 percent
    07.01.2007 20:01:09 Checking for old entries in db6treorg/db6ireorg ...
    07.01.2007 20:01:09 Reading table names for runstats ...
    07.01.2007 20:01:09 Tables to process: 26350 ...
    07.01.2007 20:01:09 ERRORMESSAGE: Error -2310 in dmdb6upd.c(687):
    07.01.2007 20:01:09 SQL2310N  The utility could not generate statistics.  Error "-911"
    07.01.2007 20:01:09 was returned.
    07.01.2007 20:01:09
    07.01.2007 20:01:09 table: SAPR3.DB6PMHT
    07.01.2007 20:01:09 ERRORMESSAGE: Error -2310 in dmdb6upd.c(687):
    07.01.2007 20:01:09 SQL2310N  The utility could not generate statistics.  Error "-911"
    07.01.2007 20:01:09 was returned.
    07.01.2007 20:01:09
    07.01.2007 20:01:09 table: SAPR3.DB6PMHT_HD

  • Relation between computing and gathering statistics

    Hi gurus,
What is the relation between computing and gathering statistics for a database object? Are they mutually dependent, or does one have nothing to do with the other? How do they affect the performance of a database?
    Please don't redirect; just the bottom line is expected (be specific).
    Thanks in advance
    anirban

COMPUTE is the term used for collecting 100% statistics with the ANALYZE command.
    DBMS_STATS is the newer package provided by Oracle, and it is recommended over ANALYZE. You can also collect 100% statistics with DBMS_STATS, like COMPUTE does with ANALYZE, by passing estimate_percent => NULL:
    exec dbms_stats.gather_table_stats('SCHEMA','TABLE',cascade=>true,estimate_percent=>NULL);
    Jaffar
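Side by side, the two methods look like this (schema and table names are illustrative):

    ```sql
    -- Old method: ANALYZE with COMPUTE reads 100% of the rows.
    ANALYZE TABLE scott.emp COMPUTE STATISTICS;

    -- Recommended method: DBMS_STATS; estimate_percent => NULL likewise
    -- reads 100% of the rows, and cascade => TRUE covers the indexes.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'SCOTT',
                                    tabname          => 'EMP',
                                    cascade          => TRUE,
                                    estimate_percent => NULL);
    END;
    /
    ```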

  • Aggregator did not receive statistics from [sb03] for the current view

    I'm receiving a warning message in a BEA Aqualogic Service Bus console "Aggregator did not receive statistics from [sb03, sb05, sb06, sb04, sb02, sb01] for the current view".
I don't have any idea what the root cause of this issue is.
    Thanks,
    Norberto Enomoto

    Norberto,
    I've found this link with a list of SB Runtime Messages.
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/messages/alsb/kernel/l10n/ServiceDomainMBean.html
    One of this refer to the problem you're having:
    BEA-473051
    Error: Aggregator did not receive statistics from servers for the current view
    Description
         There is a communication failure between Aggregator server and other managed servers. One of the managed servers failed to send statistics to the managed server on which Aggregator is running.
    Cause
         Communication failure between managed servers.
    Action
         Wait for some time and try again.
    Att.
    José Compadre Junior
    Edited by: user10076953 on 22/03/2010 11:09
    Edited by: user10076953 on 22/03/2010 11:09

  • Gathering statistics in 8.1.7.4

    Hi
In 10g there is a default job to collect statistics.
    Can somebody refer me to code that can do the same on 8.1.7.4?
    Also, is the "bytes" column in DBA_TABLES updated automatically when rows are inserted and deleted,
    or do we need dbms_stats.gather_table_stats(), same as in 10g?
    Thanks

thanks very much
    BYTES in DBA_TABLES looks like it needs the statistics to be gathered,
    but BYTES in DBA_SEGMENTS does NOT need the statistics to be gathered.
    I tested it, so I think I can rely on that.
    Also:
SQL> sho parameter opt
    NAME                           TYPE        VALUE
    ------------------------------ ----------- ----------
    object_cache_optimal_size      integer     102400
    optimizer_features_enable      string      8.1.7
    optimizer_index_caching        integer     0
    optimizer_index_cost_adj       integer     100
    optimizer_max_permutations     integer     80000
    optimizer_mode                 string      CHOOSE
    optimizer_percent_parallel     integer     0
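On 8.1.7 there is no automatic stats job, but a nightly one can be scheduled with DBMS_JOB; a sketch (the schema name and the 02:00 schedule are illustrative):

    ```sql
    -- Emulate the 10g automatic stats job on 8.1.7 with DBMS_JOB.
    DECLARE
      l_job NUMBER;
    BEGIN
      DBMS_JOB.SUBMIT(
        job       => l_job,
        what      => 'DBMS_STATS.GATHER_SCHEMA_STATS(''MYAPP'', cascade => TRUE);',
        next_date => TRUNC(SYSDATE) + 1 + 2/24,    -- tomorrow at 02:00
        interval  => 'TRUNC(SYSDATE) + 1 + 2/24'); -- re-run nightly
      COMMIT;
    END;
    /
    ```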

  • Need info abt gathering statistics

    Hi,
We are using an 8.0.6 database and have around 90 GB of data in it. Since we are adding a lot of data daily, I wanted to know: do we need to gather/update statistics on all the tables in our database?
    1) How will gathering/updating statistics help improve my database?
    2) Do I need DBA privileges to gather these stats?
    3) Do I need to update the stats daily?
    4) Do I need to update stats for SYS? I heard it slows down the system.
    5) What are the methods in 8.0.6 to gather statistics?
    Regards,
    Ateeq

ateeqrahman wrote:
    We are using an 8.0.6 database and have around 90 GB of data. Since we are adding a lot of data daily, do we need to gather/update statistics on all the tables in our database?
    1) How will gathering/updating statistics help improve my database?
    Yes ...
    2) Do I need DBA privileges to gather these stats?
    Not necessarily. You need the ANALYZE ANY privilege or execute permission on DBMS_STATS.
    3) Do I need to update the stats daily?
    That depends on the rate of change in the data.
    4) Do I need to update stats for SYS? I heard it slows down the system.
    5) What are the methods in 8.0.6 to gather statistics?
    Please read:
    http://www.oracle-base.com/articles/8i/RefreshingStaleStatistics8i.php
    http://www.oracle-base.com/articles/8i/CostBasedOptimizerAndDatabaseStatistics.php
    Regards
    Rajesh
