Partition-Level Statistics Gathering

Hi,
This is with regard to the GRANULARITY option of the DBMS_STATS.GATHER_TABLE_STATS procedure.
If I pass 'PARTITION' as the GRANULARITY, will it gather statistics for all the partitions of a table, or can I gather statistics for only a particular partition?
For example: if I have a table T1 partitioned by month, i.e. JAN08, FEB08, ..., DEC08, JAN09, ..., and I apply DML only to, say, DEC08, can I gather statistics for the DEC08 partition only?
Please advise.
Thanks in advance.
Regards
Pat

I do not think it is just because of the misspelled word PARTITION:
SQL> CREATE TABLE "Patza"
  2      ( prod_id        NUMBER(6)
  3      , cust_id        NUMBER
  4      , time_id        DATE
  5      , channel_id     CHAR(1)
  6      , promo_id       NUMBER(6)
  7      , quantity_sold  NUMBER(3)
  8      , amount_sold         NUMBER(10,2)
  9      )
10  PARTITION BY RANGE (time_id)
11    (PARTITION "Sales_Q1_1998" VALUES LESS THAN (TO_DATE('01-APR-1998','DD-MON-YYYY')),
12     PARTITION "Sales_Q2_1998" VALUES LESS THAN (TO_DATE('01-JUL-1998','DD-MON-YYYY')),
13     PARTITION "Sales_Q3_1998" VALUES LESS THAN (TO_DATE('01-OCT-1998','DD-MON-YYYY')),
14     PARTITION "Sales_Q4_1998" VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')),
15     PARTITION "Sales_Q1_1999" VALUES LESS THAN (TO_DATE('01-APR-1999','DD-MON-YYYY')),
16     PARTITION "Sales_Q2_1999" VALUES LESS THAN (TO_DATE('01-JUL-1999','DD-MON-YYYY')),
17     PARTITION "Sales_Q3_1999" VALUES LESS THAN (TO_DATE('01-OCT-1999','DD-MON-YYYY')),
18     PARTITION "Sales_Q4_1999" VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')),
19     PARTITION "Sales_Q1_2000" VALUES LESS THAN (TO_DATE('01-APR-2000','DD-MON-YYYY')),
20     PARTITION "Sales_Q2_2000" VALUES LESS THAN (TO_DATE('01-JUL-2000','DD-MON-YYYY')),
21     PARTITION "Sales_Q3_2000" VALUES LESS THAN (TO_DATE('01-OCT-2000','DD-MON-YYYY')),
22     PARTITION "Sales_Q4_2000" VALUES LESS THAN (MAXVALUE))
23  ;
Table created.
SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS
  3  (
  4  OWNNAME => USER
  5  , TABNAME => 'Patza'
  6  , ESTIMATE_PERCENT => 25
  7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
  8  , GRANULARITY => 'PATRITION'
  9  , PARTNAME => 'Sales_Q3_2000'
10  , CASCADE => TRUE
11  );
12  END;
13  /
BEGIN
ERROR at line 1:
ORA-20000: Unable to analyze TABLE "SCOTT"."PATZA" SALES_Q3_2000, insufficient
privileges or does not exist
ORA-06512: at "SYS.DBMS_STATS", line 13046
ORA-06512: at "SYS.DBMS_STATS", line 13076
ORA-06512: at line 2
SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS
  3  (
  4  OWNNAME => USER
  5  , TABNAME => '"Patza"'
  6  , ESTIMATE_PERCENT => 25
  7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
  8  , GRANULARITY => 'PATRITION'
  9  , PARTNAME => '"Sales_Q3_2000"'
10  , CASCADE => TRUE
11  );
12  END;
13  /
BEGIN
ERROR at line 1:
ORA-20001: Illegal granularity PATRITION: must be AUTO | ALL | GLOBAL |
PARTITION | SUBPARTITION | GLOBAL AND PARTITION
ORA-06512: at "SYS.DBMS_STATS", line 13056
ORA-06512: at "SYS.DBMS_STATS", line 13076
ORA-06512: at line 2
SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS
  3  (
  4  OWNNAME => USER
  5  , TABNAME => 'Patza'
  6  , ESTIMATE_PERCENT => 25
  7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
  8  , GRANULARITY => 'PARTITION'
  9  , PARTNAME => 'Sales_Q3_2000'
10  , CASCADE => TRUE
11  );
12  END;
13  /
BEGIN
ERROR at line 1:
ORA-20000: Unable to analyze TABLE "SCOTT"."PATZA" SALES_Q3_2000, insufficient
privileges or does not exist
ORA-06512: at "SYS.DBMS_STATS", line 13046
ORA-06512: at "SYS.DBMS_STATS", line 13076
ORA-06512: at line 2
SQL>
SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS
  3  (
  4  OWNNAME => USER
  5  , TABNAME => '"Patza"'
  6  , ESTIMATE_PERCENT => 25
  7  , METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
  8  , GRANULARITY => 'PARTITION'
  9  , PARTNAME => '"Sales_Q3_2000"'
10  , CASCADE => TRUE
11  );
12  END;
13  /
PL/SQL procedure successfully completed.
SQL>
As you can see, the table/partition existence check is done first. So the OP either does not have privileges on the table or, as I mentioned, the table/partition names are case-sensitive.
SY.

Similar Messages

  • Delete global stats, leave partition level only in 10.2.0.3.

    Hi,
    there is a 10.2.0.3 four-node RAC data warehouse; most queries carry a predicate on the partition key,
    so a lot of partition pruning is involved.
    Currently the data load is done via exchange partition, and then only that partition's stats are calculated.
    We don't have global stats (GLOBAL_STATS = NO in DBA_TABLES).
    Is that the right way to deal with statistics in 10.2.0.3?
    I know that 10.2.0.4 brings us the copy-stats solution, but what about 10.2.0.3?
    How should we deal with the statistics update related to an exchanged partition?
    As far as I know there is no way to incrementally update global statistics, so GRANULARITY => 'PARTITION' seems
    the only way.
    If I understand correctly, Oracle calculates global statistics from partition-level statistics if there are no 'true' global stats.
    The only issue I know of is related to NDV estimation, but I think we can live with that.
    Please advise.
    Regards.
    GregG

    You really should collect both partition and global stats. If a query spans more than one partition, global stats are generally used. Partition stats are used when only a single partition is accessed.
    There is no way in 10g to get incremental global stats; that was introduced in 11g. Global stats are collected by a table scan; they are not aggregated from partition stats on 10g, only on 11g with incremental enabled.
    However, there is a new granularity introduced in 10g called APPROX_GLOBAL AND PARTITION. This is part of patch 6526370.
    I'd recommend that you either gather global stats or use APPROX_GLOBAL AND PARTITION, but it is best to have some global stats.
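    For illustration, a minimal sketch of a load cycle using that granularity (assuming patch 6526370 is installed; the table and partition names here are made up):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => USER,
        tabname     => 'SALES_FACT',       -- hypothetical fact table
        partname    => 'SALES_P201301',    -- the freshly exchanged partition
        granularity => 'APPROX_GLOBAL AND PARTITION');
    END;
    /
    This gathers the partition stats and derives the global stats instead of rescanning the whole table, which is why global figures such as NDV remain approximations.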
    More details on this here:
    http://structureddata.org/2008/07/16/oracle-11g-incremental-global-statistics-on-partitioned-tables/
    Regards,
    Greg Rahn
    http://structureddata.org

  • Cgi statistics gathering under 6.1 and Solaris 9

    Hello all,
    is it possible to log, for CGI requests, the time spent handling each request?
    I see a lot of editable parameters in the 'Performance, Tuning and Scaling Guide' but can't figure out how to do that.
    Once in a thread I read "...enable statistics gathering, then add %duration% to your access log format line".
    I can't find the term %duration% in the guide; which parameter is meant?
    Regards Nick

    Hello elvin,
    thanks for your reply. Now I think I managed to get the web server to log the duration of a CGI request, but I'm unsure how to interpret the value. E.g. in the access log I get
    ..."GET /cgi/beenden.cgi ... Gecko/20040113 MultiZilla/1.6.3.1d" 431710"
    ..."GET /pic.gif ... Gecko/20040113 MultiZilla/1.6.3.1d" 670"
    so the last value corresponds to my %duration% in magnus.conf.
    431710 ... in msec? - makes no sense
    670 ... in msec?
    The complete string in magnus.conf reads as follows:
    Init fn="flex-init" access="$accesslog" format.access="%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%] \"%Req->reqpb.clf-request%\" %Req->srvhdrs.clf-status% %Req->srvhdrs.content-length% \"%Req->headers.user-agent%\" \%duration%\""
    Regards Nick

  • Write-Behind batch behavior in EP partition level transactions

    Hi,
    We use EntryProcessors to perform updates on multiple entities stored in the same cache partition. According to the documentation, Coherence handles all the updates in a "sandbox" and then commits them atomically to the cache backing map.
    The question is, when using write-behind, does Coherence guarantee that all entries updated in the same "partition level transaction" will be present in the same "storeAll" operation?
    Again, according to the documentation, the write-behind thread behavior is the following:
    1. The thread waits for a queued entry to become ripe.
    2. When an entry becomes ripe, the thread dequeues all ripe and soft-ripe entries in the queue.
    3. The thread then writes all ripe and soft-ripe entries, either via store() (if there is only the single ripe entry) or storeAll() (if there are multiple ripe/soft-ripe entries).
    4. The thread then repeats step 1.
    If all entries updated in the same partition-level transaction become ripe or soft-ripe at the same instant, they will all be present in the storeAll operation. If they do not become ripe/soft-ripe at the same instant, they may not all be present.
    So it all depends on the behavior of the commit of the partition-level transaction: if all entries get the same update timestamp, they will all become ripe at the same time.
    Does anyone know what is the behavior we can expect regarding this issue?
    Thanks.

    Hi,
    That comment is still correct for 12.1 and 3.7.1.10.
    I've checked the Coherence APIs and the ReadWriteBackingMap behavior, and although partition-level transactions are atomic, the updated entries are added one by one to the write-behind queue. For each added entry Coherence uses the current time to calculate when that entry will become ripe, so there is no guarantee that all entries in the same partition-level transaction will become ripe at the same time.
    This leads me to another question.
    We have a use case where we want to split a large entity we store in Coherence into several smaller fragments. We use EntryProcessors and partition-level transactions to guarantee atomicity in operations that need to update more than one fragment of the same entity. This guarantees that all fragments of the same entity are fully consistent. The cached fragments are then persisted to the database using write-behind.
    The problem now is how to guarantee that all fragments are fully consistent in the database. If we just rely on the Coherence write-behind mechanism, we will have eventual consistency in the DB, but in case of a multi-server failure the entity may become inconsistent in the database, which is a risk we would not like to take.
    Is there any other option/pattern that would allow us to either store all updates done on the entity or no update at all?
    Probably if, in the EntryProcessor, we identify which entities were updated and place them as a whole in another persistence queue, we will be able to achieve this, but that is a tricky workaround we would rather not use.
    Thanks.

  • Foreign keys at the table partition level

    Anyone know how to create and / or disable a foreign key at the table partition level? I am using Oracle 11.1.0.7.0. Any help is greatly appreciated.

    Hmmm. I was under the impression that Oracle usually ignores indexes on columns with mostly unique and semi-unique values and prefers to do full-table scans instead, on the (questionable) theory that it takes almost as much time to find one or more semi-unique entries in an index with a billion unique values as it does to just scan through three billion fields. I tend to classify that design choice in the same category as Microsoft's decision to start swapping RAM out to virtual memory on a PC with a gig of RAM and 400 MB of unused physical RAM, on the twisted theory that it's better to make the user wait while it needlessly thrashes the swap file now than to risk being unable to do so later (apparently a decision that has its roots in the 4 MB Win 3.1 era and somehow survived all the way to XP).

  • Statistics gathering

    Hello,
    Everyone, I'm a little confused about "statistics gathering" in EBS, so I have some questions in mind.
    Kindly clarify the concept for me; I'd really appreciate it.
    1. What is statistics gathering?
    2. What is the benefit of it?
    3. Can our ERP performance be better after this?
    One question outside this subject: must anyone who wants to become an APPS DBA first be a DBA (Oracle 10g, 9i, etc.), or is it enough to have the concepts of an Oracle DBA, like backup, recovery, cloning, etc.?
    Regards,
    Shahbaz khan

    1. What is statistics gathering?
    Statistics gathering is a process by which Oracle scans some or all of your database objects (such as tables, indexes, etc.) and stores the information in dictionary views such as dba_tables and dba_histograms. Oracle uses this information to determine the best execution path for the statements it has to execute (such as SELECT, UPDATE, etc.).
    2. What is the benefit of it?
    It helps queries become more efficient.
    3. Can our ERP performance be better after this?
    Typically, if you are experiencing performance issues, this is one of the first remedies.
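    For example, a hedged sketch (in EBS you would normally use the seeded FND_STATS wrapper or the "Gather Schema Statistics" concurrent program rather than calling DBMS_STATS directly; the schema name below is just an example):
    SQL> exec fnd_stats.gather_schema_stats('AP');
    Scheduling the "Gather Schema Statistics" concurrent request (often with schema name ALL) is the usual way to keep statistics current.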
    As for the question about becoming an APPS DBA, I will let Hussein or Helios answer it. They can offer a lot of helpful advice. You can also refer to Hussein's recent thread on a similar topic.
    See Re: Time Management and planned prep
    Hope this helps,
    Sandeep Gandhi

  • What is the meaning of partition by statistics

    Hi All,
    What is the meaning of "partition by statistics"?
    I could not grasp its meaning.
    I have a fact table that has about 2.8 million records and around 8 dimensions.
    I currently partition it by time, as each time period has an equal number of records.

    Murtuza:
    Certain processes generate postings above and beyond the entered information, e.g. cash discounts or rounding. These internally generated items are posting-relevant but were not entered by a user/transaction. In this way that flag identifies the cause of the posting.
    regards,
    bill.

  • Setting of Optimizer Statistics Gathering

    I'm checking my DB settings, and the database is analyzed each day. But I notice there are a lot of tables whose information shows the last analysis was about a month ago... Do I have to change some parameters?

    lesak wrote:
    I don't have any data that shows you that my idea is good. I'd like to confirm on this forum whether my idea is good or not. I've planned to make some changes to get better performance from queries that read the most-used tables. If this is a bad solution, that's also important information for me.
    One point of view is that your idea is bad. That point of view would be to figure out what the best access path for your query is and set that as a baseline, or to figure out what statistics get you the correct plans on a single query that has multiple plans that are best with different values sent in through bind variables, and lock the statistics.
    Another point of view would be to gather current plans for currently used queries, then do nothing at all unless the optimizer suddenly decides to switch away from one, then figure out why.
    Also note the default statistics gathering is done in a window; if you have a lot of tables changing, it could happen that you can't get stats in a timely fashion within the window.
    Whether the statistics gathering is appropriate may depend on how far off the histograms are from describing the actual data distribution you see. What may be an appropriate worry for one app may be obsessive tuning disorder for another. 200K rows out of millions may make no difference at all, or may make a huge difference if the newly added data is way off from what the statistics make the optimizer think it is.
    One thing you are probably doing right is to recognize that tuning particular queries may be much more useful than obsessing over statistics.
    Note how much I've used the word "may" here.
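    As a hedged sketch of the "lock the statistics" option mentioned above (the table name is hypothetical; locking only makes sense once the current stats are known to produce the plans you want):
    BEGIN
      DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'ORDERS');
      -- later, to allow gathering again:
      -- DBMS_STATS.UNLOCK_TABLE_STATS(ownname => USER, tabname => 'ORDERS');
    END;
    /
    Once locked, the maintenance-window job skips the table, so refreshing its stats becomes your responsibility.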

  • Low-end RAID - Ugh (Or how to create partition level arrays)

    Ok... I got a new mobo and my system is up and running!
    Now let me tell you how I "want" to configure my drives.
    I have two Hitachi 160 GB SATA drives. I would like to create two RAID partitions, configured like so:
    SATA1        SATA2
    20GB     +    20GB     @ Mirrored    =  20GB C:  (For Windows, etc.)  (Safe)
    140GB   +    140GB   @ Striped    =  280GB  D:  (For everything else)  (Fast)
    The problem is that the stupid Nvidia RAID BIOS only seems to support creating drive-level arrays, not partition-level ones!
    I only have experience with high-end server RAID controllers, where doing what I have laid out is perfectly possible. Is this just something that "low-end" RAID controllers do not support?
    Thanks!

    Unfortunately that is true; this controller does not support partition-level arrays, only disk-level ones.
    Be well....

  • How to check the progress of statistics gathering on a table?

    Hi,
    I have started the statistics gathering on a few big tables in my database.
    How can I check the progress of statistics gathering on a table? Are there any data dictionary views or tables to monitor the progress of stats gathering?
    Regds,
    Kunwar

    Hi all
    you can check with this small script.
    It lists the SID details for a long-running session, such as:
    when it started
    when it was last updated
    how much time is still left
    session status (ACTIVE/INACTIVE), etc.
    -- Author               : Syed Kaleemuddin_
    -- Script_name          : sid_long_ops.sql
    -- Description          : lists the SID details for a long-running session: when it started, when it was last updated, and how much time is still left.
    set lines 200
    col OPNAME for a25
    Select
    a.sid,
    a.serial#,
    b.status,
    a.opname,
    to_char(a.START_TIME,' dd-Mon-YYYY HH24:mi:ss') START_TIME,
    to_char(a.LAST_UPDATE_TIME,' dd-Mon-YYYY HH24:mi:ss') LAST_UPDATE_TIME,
    a.time_remaining as "Time Remaining Sec" ,
    a.time_remaining/60 as "Time Remaining Min",
    a.time_remaining/60/60 as "Time Remaining HR"
    From v$session_longops a, v$session b
    where a.sid = b.sid
    and a.sid =&sid
    And time_remaining > 0;
    Sample output:
    SQL> @sid_long_ops
    Enter value for sid: 474
    old 13: and a.sid =&sid
    new 13: and a.sid =474
    SID SERIAL# STATUS OPNAME START_TIME LAST_UPDATE_TIME Time Remaining Sec Time Remaining Min Time Remaining HR
    474 2033 ACTIVE Gather Schema Statistics 06-Jun-2012 20:10:49 07-Jun-2012 01:35:24 572 9.53333333 .158888889
    Thanks & Regards
    Syed Kaleemuddin.
    Oracle Apps DBA
    Mobile: +91 9966270072
    Email: [email protected]

  • MODIFY RECOVERY at Partition level

    Hello,
    Could you please let me know from which support package MODIFY RECOVERY happens at the partition level?
    From Note 427748 I learned it is fixed for 6.10/20/40 systems, but I could not find information for SAP 7.00 systems.
    Thanks
    Aravinthan

    Hello,
    The note correction has fixed the issue and MODIFY RECOVERY happens at the partition level, but I could not find the job output of MODIFY RECOVERY in DB13.
    I could get the information only from DSNACCMO.dbg.
    Could you please clarify?
    Thanks
    Aravinthan

  • Understand Oracle statistics gathering

    Hi experts,
    I am new to Oracle performance tuning. Can anyone tell me what "Oracle statistics gathering" means, in simple words? I have read about it on the Oracle site: http://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/stats.htm
    But I do not understand it properly. Does it have any role in Oracle performance tuning? Does it improve the performance of an Oracle DB?
    Reg
    Harshit

    Hi,
    You can check this for an easy introduction: ORACLE-BASE - Oracle Cost-Based Optimizer (CBO) And Statistics (DBMS_STATS)
    >> Does it have any role in Oracle performance tuning? Does it improve the performance of an Oracle DB? : Yes
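    For a first experiment, a minimal sketch (the table name is just an example):
    SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'EMP');
    SQL> select num_rows, last_analyzed from user_tables where table_name = 'EMP';
    The optimizer uses figures such as NUM_ROWS to choose between, say, an index scan and a full table scan, which is how fresh statistics translate into better performance.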
    HTH

  • Partition-level Transactions - Entry Processor access "other" cache.

    Hi there,
    Playing around with Coherence 3.7 and like the look of the new Partition-level Transactions. I'm trying to implement an Entry Processor that accesses an "other" cache in order to create an audit record.
    The slides from Brian's SIG presentation show the following code for an example EP doing this:
    public void process(Entry entry)
    {
        // Update an entry in another cache.
        ((BinaryEntry) entry).getBackingMapContext("othercache").getBackingMap().put("othercachekey", value);
    }
    The problem I'm facing is that the API doesn't seem to have an implementation of BinaryEntry.getBackingMapContext(String cacheName); it just has a no-arg version which accesses the cache of the current entry. It's not just an error in the API docs, as the code doesn't compile either.
    Any ideas what to do here?
    Cheers,
    Steve

    Care to expand on that, Charlie?
    Reason I ask is that since I posted my reply to JK, I noticed I was getting ClassCastException errors on my audit-record insert in the server logs. I had to resort to "converters" to get my newly created audit record into binary format before the put into the second (also distributed) cache succeeded without errors:
    BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
    ctx.getBackingMapContext("PositionAudit").getBackingMap().put(
        ctx.getKeyToInternalConverter().convert(pa.getAuditId()),
        ctx.getValueToInternalConverter().convert(pa));
    The "PositionAudit" cache is the one I want to create a new entry in each time an EntryProcessor is invoked against Position objects in the "Position" cache. The object I'm creating for that second cache, "pa" is the newly created audit object based on data in the "position" object the entry processor is actually invoked against.
    So the requirement is pretty simple, at least in my mind: Position objects in the "positions" cache get updated by an EntryProcessor. When the EntryProcessor fires, it must also write an "audit" record/object to a second cache, based upon data in the Position object just manipulated. As all of the caches are distributed caches - which store their data in Binary format, AFAIK - I still have to do the explicit conversion to binary format of my new audit object to get it successfully "put" into the audit cache.
    Seems to still be quite a bit of messing around (including the KeyAssociator stuff to make sure these objects are in the same partitions in the first place for all this to work), but at least I now get "transactionally atomic" operations across both the positions and positions audit caches, something that couldn't be done from an EP prior to 3.7.
    As I say, it works now. Just want to make sure I'm going about it the right way. :)
    Any comments appreciated.
    Cheers,
    Steve

  • Table Statistics Gathering Query

    Hey there,
    I'm currently getting trained in Oracle, and one of the questions posed to me was: create a table, insert a million rows into it, and find the number of rows in it. I tried the following steps to solve this.
    First, table creation:
    SQL> create table t1(id number);
    Table created.
    Data insertion:
    SQL> insert into t1 select level from dual connect by level < 50000000;
    49999999 rows created.
    Gathering statistics:
    SQL> exec dbms_stats.gather_table_stats('HR','T1');
    PL/SQL procedure successfully completed.
    Finally, counting the number of rows:
    SQL> select num_rows from user_tables where table_name='T1';
      NUM_ROWS
      49960410
    SQL> select count(*) from t1;
      COUNT(*)
      49999999
    My database version is:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    I would like to know why there are two different results for the same table when using "num_rows" from the view "user_tables" and the aggregate function "count()" over the same table. Please keep in mind that I'm studying Oracle and this is from a conceptual point of view only. I would like to know how gathering table statistics with the DBMS_STATS package works.
    Thank You,
    Vishal

    vishm8 wrote:
    Gathering statistics
    SQL> exec dbms_stats.gather_table_stats('HR','T1');
    PL/SQL procedure successfully completed.
    I would like to know why there are two different results for the same table when using "num_rows" from the view "user_tables" and the aggregate function "count()" over the same table.
    Because you aren't specifying a value for ESTIMATE_PERCENT in the procedure call (to GATHER_TABLE_STATS), Oracle picks an estimate (sample) value for you, so NUM_ROWS is an estimate rather than an exact count. If you want to sample the entire table, you need to specify that explicitly in your procedure call.
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_stats.htm#ARPLS68582
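    For instance, a hedged sketch that forces a full compute so that NUM_ROWS matches COUNT(*) exactly (passing NULL for ESTIMATE_PERCENT means compute; on 11g, DBMS_STATS.AUTO_SAMPLE_SIZE is usually both fast and accurate):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'HR',
        tabname          => 'T1',
        estimate_percent => NULL);  -- NULL = compute, i.e. read every row
    END;
    /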

  • Statistics gathering error

    Hi all,
    I am running on AIX version 5.3 with an Oracle 10.2.0.1 database.
    Since yesterday I have been encountering errors when gathering statistics on table partitions that already contain data. I was able to gather without errors for years, but then suddenly I got the following errors:
    exec dbms_stats.gather_table_stats('BLP', 'ADJUSTMENT_TRANSACTION', 'ADJUSTMENT_TRANSACTION_P201311', GRANULARITY=>'PARTITION')
    BEGIN dbms_stats.gather_table_stats('BLP', 'ADJUSTMENT_TRANSACTION', 'ADJUSTMENT_TRANSACTION_P201311', GRANULARITY=>'PARTITION'); END;
    ERROR at line 1:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "SYS.DBMS_STATS", line 13044
    ORA-00942: table or view does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 1
    I also got the following errors in the alert log:
    ORA-00600: internal error code, arguments: [KSFD_DECAIOPC], [0x7000004FF189780], [], [], [], [], [], []
    The other day the alert log recorded this error when generating statistics for another table:
    ORA-01114: IO error writing block to file 1001 (block # 4026567)
    ORA-27063: number of bytes read/written is incorrect
    IBM AIX RISC System/6000 Error: 28: No space left on device
    As far as I checked, the server has sufficient space.
    Do you guys have any idea what the problem could be? I can't generate table statistics at the moment due to this problem.
    Regards,
    Tim

    Hi Suntrupth,
    BLP@OLSG3DB  > show parameter filesystemio_options
    NAME                                 TYPE        VALUE
    filesystemio_options                 string      asynch
    BLP@OLSG3DB  > show parameter disk_asynch_io
    NAME                                 TYPE        VALUE
    disk_asynch_io                       boolean     TRUE
    No invalid objects were returned either:
    BLP@OLSG3DB  > select object_name from dba_objects where status='INVALID' and owner='SYS';
    no rows selected
    Regards,
    Tim
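    One avenue worth checking (an assumption based on the ORA-01114 file number and the "No space left on device" message, not something confirmed in this thread): absolute file numbers above the DB_FILES parameter usually denote tempfiles, so the statistics job may be exhausting temp space even though the data filesystems look fine. A quick sketch:
    SQL> show parameter db_files
    SQL> select file#, name, round(bytes/1024/1024) as mb from v$tempfile;
    If file 1001 maps onto a tempfile, check the filesystem that tempfile lives on rather than the data filesystems.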
