Export with consistent=y raises "snapshot too old" error.

Hi,
Oracle version:9204
It raises
EXP-00056: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number 2 with name
"_SYSSMU2$" too small
when I do an export with the consistent=y option.
And I find below information in alert_orcl.log
Wed Apr 20 07:50:01 2005
SELECT /*NESTED_TABLE_GET_REFS*/ "XXX"."TABLENAME".* FROM
"XXX"."TABLENAME"
ORA-01555 caused by SQL statement below (Query Duration=1140060307
sec, SCN: 0x0000.00442609):
The undo parameters:
undo_retention=10800 (default value)
undo_retention (10800 seconds) is larger than the time the export runs (only about 1800 seconds), so I think the default value should be enough.
undo_management=auto (default value)
Maybe the undo tablespace is too small (about 300M)? But I thought Oracle would grow the datafile automatically in this mode. Is that right?
undo_tablespace=undotbs1
undo_suppress_errors=false
I think I must miss something.
Any suggestions will be very appreciated.
Thanks.
wy

UNDO_RETENTION is a request, not a mandate. If your UNDO tablespace is too small, Oracle may have to discard UNDO segments before UNDO_RETENTION is reached.
How much UNDO is your database generating every second?
SELECT stat.undoblks * param.value / 1024 / 1024 / 10 / 60 undo_mb_per_sec
  FROM v$undostat  stat,        -- one row per 10-minute interval, hence / 10 / 60
       v$parameter param
 WHERE param.name = 'db_block_size';
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC
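To turn that rate into a size, here is a rough sizing sketch of my own (an assumption, not from the thread): space needed is roughly peak undo rate times undo_retention, using v$undostat's 10-minute buckets.

```sql
-- Rough sizing sketch: undo space needed ~= peak undo rate * undo_retention.
-- v$undostat rows cover 10-minute intervals, so MAX(undoblks) / 600
-- approximates peak undo blocks generated per second.
SELECT (SELECT MAX(undoblks) / (10 * 60) FROM v$undostat)
       * (SELECT value FROM v$parameter WHERE name = 'undo_retention')
       * (SELECT value FROM v$parameter WHERE name = 'db_block_size')
       / 1024 / 1024 AS undo_needed_mb
  FROM dual;
```

If undo_needed_mb comes out near (or above) the ~300M tablespace mentioned above, that by itself would explain the ORA-01555.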

Similar Messages

  • Snapshot too old error - disambiguation required.

    [http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:275215756923]
    It is known that Oracle provides row-level locks, meaning, two different transactions can update two different rows,
    1. Is this true even if they reside in the same block?
    2. If yes, as per the reasons stated in the above reference, Oracle should not find that the block has been modified for that particular, different, row. I am baffled at this point.
Rescue needed.
    Thanks in adv.
    Aswani V.

If your concern is the "snapshot too old" (ORA-01555) error, you already know the causes: the undo tablespace size and the length of the longest-running query. If you need read-consistent versions of blocks for a long-running query (the one that raises the error), you have two options. 1) Increase the size of your undo tablespace and find the optimum size for your peak workload and longest-running query: the undo tablespace should be large enough to hold the undo generated during the peak workload while still providing read-consistent versions of blocks for the longest-running query. Even then, it does not guarantee that snapshot too old errors cannot happen. 2) Alternatively, switch your undo tablespace to guaranteed retention. In that case, no matter how long your longest query runs, you will not get snapshot too old errors; but it also means that, during heavy workload periods, transactions doing DML can fail if they cannot find enough space for their undo. So I would increase the size of the undo tablespace first and measure whether that is enough. Even with a NOGUARANTEE undo tablespace, Oracle chooses intelligently which undo data to overwrite: it will not overwrite undo blocks still needed for read consistency as long as there are free (or expired) undo blocks available.
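The second option above is a one-statement change on 10g and later (the tablespace name here is only an example):

```sql
-- Keep unexpired undo for the full undo_retention period (10g+).
-- Trade-off noted above: DML can now fail if undo space runs out.
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Revert to the default best-effort behaviour:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
```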

  • Expdp failing with snapshot too old error

    Hi,
    We are getting the below errors while migrating partitioned tables using expdp.
The source and target databases are both running 10.2.0.5, and the main thing is that the source database doesn't have any active sessions. It is a clone of a production database and no one is accessing it.
    ORA-31693: Table data object "DPMMGR"."WHSE_CTNR_EVNT_W":"MSG_PRCS_N"."MSG_PRCS_N_DC556" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-01555: snapshot too old: rollback segment number 31 with name "_SYSSMU31$" too small
    ORA-31693: Table data object "DPMMGR"."RLTM_PRDCT_LOG":"RPL_20120814" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-01555: snapshot too old: rollback segment number 14 with name "_SYSSMU14$" too small
    Undo Tablespace has enough space but still the expdp is failing.
    SQL>/
TABLESPACE  Totalspace(MB)  Used Space(MB)  Freespace(MB)  % Used  % Free
UNDO01      145096          115338          29758          79.49   20.51
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 14400
    undo_tablespace string UNDO01
    Please let me know any workarounds.
    Thanks

> Undo Tablespace has enough space but still the expdp is failing.
Increasing undo_retention will help here.
> undo_retention integer 14400
That is currently 4 hours; increase it to 6 hours and try expdp again.
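Spelled out as a command (21600 seconds = 6 hours; SCOPE=BOTH assumes an spfile is in use):

```sql
ALTER SYSTEM SET undo_retention = 21600 SCOPE = BOTH;

-- Confirm the new value (SQL*Plus):
SHOW PARAMETER undo_retention
```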

  • Snapshot too old error during drop tablespace

    Hi Experts
When we are doing a BW reorg, the steps followed are:
1. Create a new tablespace with the source tablespace's TABART class reference.
2. Export the source tablespace to the filesystem.
3. Drop the source tablespace.
4. Rename the new tablespace to the source tablespace name.
5. Import.
In the third step I received the snapshot too old error.
    BR0301E SQL error -604 at location BrSqlExecute-1, SQL statement:
    '/* BRSPACE */ drop tablespace PSAPADSOLD including contents and datafiles cascade constraints'
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01555: snapshot too old: rollback segment number 0 with name "SYSTEM" too small
    BR1017E Execution of SQL statement 'drop tablespace PSAPADSOLD including contents and datafiles cascade constraints' failed
So I tried to rename the tablespace, set it offline, and tried the import; only 240 tables were imported out of 24057. The source tablespace PSAPADS still shows 65000 elements.
My queries:
1. After exporting the tablespace, how come the source tablespace still retains the tables?
2. Why could I not drop the tablespace?
I had increased UNDO_RETENTION to 86400; my Oracle version is 10.2.0.4.
The source tablespace PSAPADS is 80 GB and holds only 30 GB of data; the rest is free.
PSAPUNDO was 17 GB in size.
Kindly suggest.
    Regards
    Bala

    Hi Stefa
    Thanks for your reply.
How do I find out whether the SYSTEM undo is still active? How can I track which undo segments are active, and what are the steps?
As per the Metalink note, our PSAPUNDO is a locally managed tablespace, hence the second workaround is applicable.
How can I validate that this will not cause a problem again while I am doing the reorganization?
When I checked SAP note 1039060, it is applicable to Windows; I don't know how it can be implemented here.
Would any merge fix help with this?
    Regards
    Bala

What is the snapshot too old error?

    Hi,
Could you please explain the snapshot too old error in detail?
And how can I solve the problem?
    Thanks,
    MuthyaM

    Hi,
When a transaction occurs, the original data is copied to an undo segment in order to provide a read-consistent image for other users, as well as to make rollback possible. Undo works in a round-robin fashion, so old, expired data in the undo segments gets replaced by new data.
If you start a query and it finds that some blocks were changed AFTER the query started, the query goes to the undo segment for the original data. If that original data has been overwritten, you get the snapshot too old error.
To solve this, you can enable the retention guarantee option and set the retention period equal to the maximum query length.
    Regards,
    Sajeeva.

  • ORA-01555: snapshot too old error

    While i was trying to run the following anonymous block to analyze all the tables and indexes in my schema, it ran for approx. 5 hours and ended up with
    ORA-01555: snapshot too old error
    Can anybody explain me why this happened?
    SQL> DECLARE
    2 CURSOR tab_cur
    3 IS
    4 SELECT table_name
    5 FROM user_tables;
    6
    7 CURSOR indx_cur
    8 IS
    9 SELECT index_name
    10 FROM user_indexes;
    11 BEGIN
    12 FOR rec IN tab_cur
    13 LOOP
    14 EXECUTE IMMEDIATE 'ANALYZE TABLE '
    15 || rec.table_name
    16 || ' COMPUTE STATISTICS';
    17 END LOOP;
    18
    19 FOR rec IN indx_cur
    20 LOOP
    21 EXECUTE IMMEDIATE 'ANALYZE INDEX '
    22 || rec.index_name
    23 || ' COMPUTE STATISTICS';
    24 END LOOP;
    25 END;
    26 /
    DECLARE
    ERROR at line 1:
    ORA-01555: snapshot too old: rollback segment number 13 with name "_SYSSMU13$"
    too small
    ORA-06512: at line 12
    Elapsed: 05:01:26.08
    Thanks and Regards
--DKar

    Your cursor loop uses the database catalog.
    The analyze updates the database catalog -- including some of the same tables required by the cursor loop.
    The undo retention was not sufficient to hold all of the undo necessary to maintain read consistency of the catalog.
    Try using something like this, instead.
-- for 9i
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS (
    ownname          => '<YOUR SCHEMA>',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    block_sample     => TRUE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => 6,
    granularity      => 'ALL',
    cascade          => TRUE,
    options          => 'GATHER');
END;
/

  • Snapshot too old error 1555

    how to over come this problem practically

    I can't speak for the original poster, of course.
    But snapshot too old errors are what you get when, in order to produce the last piece of data at the finish time of some report, you have to roll the data you encounter in the buffer cache back to the state it was in at the start time.
A query which starts at 10.00am, for example, and finishes at 10.03 must see the data at 10.03 in the state it was in at 10.00am. So at the end of the query process, we have to do a '3 minute rollback'.
    But if your query starts at 10.00am and doesn't finish until 10.53am, then you'd better hope 53 minutes of undo still exists in your database, because you're going to need to roll 10.53 data back to the state it was at 10.00.
    The larger the time difference between the start and finish of a query, therefore, the more likely it becomes that the undo needed to generate a read consistent image won't be available, and that's when you get the 1555 signalled.
Most advice on 1555s says to increase the size of your rollback segments or your undo tablespace, or to increase the undo_retention time, so that the necessary undo **is** still available by the time you get to the query's finish time. But it's also a reasonable suggestion (though much harder to implement!) to tune the query so that the gap between start and finish time is much smaller than before. If a query used to take 53 minutes to complete but after tuning only takes 23 minutes to run, you've reduced the age of the undo you need available to produce the report by 30 minutes.
    In other words, the common approach to 1555s is to increase the amount of undo that's available on the system. The approach being suggested here is to **reduce** the need for so much undo in the first place by reducing the length of time queries run for.

  • Snapshot too old error occured, kindly need solution for it...

Dear friends,
I got a snapshot too old error on a heavily used database.
Kindly give me the solution.
My solution was:
Alter rollback segment <rollback segment name>
datafile '<path>/filename.dbf' resize <no>k;
or
alter rollback segment <rollback segment name>
add datafile '<path>/filename.dbf' size <no>k
storage (initial 1k next 1k minextents 2 maxextents unlimited)
Kindly suggest whether my code is correct or not.
Your friend

I don't know what version of the database you are running? I'm only using 8.1.7.4. But where I come from, you add datafiles to the tablespace, not the rollback segment.
alter tablespace rollback
add datafile '<blah, blah>'
size 147m
autoextend on next 100m maxsize 2047m;
Make sure that you have a suitable number of rollback segments with well-sized extents. But mostly, listen to Tom Best, and try to introduce some best practices (no pun intended) to reduce the likelihood of this situation arising.

  • Advanced Queues Snapshot too old error

    I am using the advanced queues to submit work for parallel processes running through the Oracle Job Queue.
I have attempted running anywhere from 1 to 5 simultaneous processes (in addition to the process which submits them to the Oracle job queue and populates the advanced queues) and I am getting sporadic snapshot too old errors when the other processes attempt to dequeue. The advanced queues are populated before the other processes are submitted to the job queue, so I don't see how there could be conflicts between one process enqueuing while another is dequeuing.
    The reason I am attempting this is to try and gain some performance by running processes in parallel.
    Has anyone else had problems like this? Is this a bug in Oracle 8.1.6? Is there a parameter setting I need to adjust? Are there any suggestions for getting around this problem?


  • UNdo error (ora-01555) - Snapshot too old error

    Hi,
If undo gets filled and we get a snapshot too old error, what is the solution? Please give a step-by-step solution.

    You prevent ORA-01555 errors by
    1) Setting UNDO_RETENTION equal to or greater than the length of the longest running query/ serializable transaction in the system
2) Ensuring the UNDO tablespace is large enough to accommodate the amount of UNDO generated over that period of time.
    You would need to determine things like the length of your longest running query, the amount of UNDO getting generated, etc. and adjust your UNDO_RETENTION and UNDO tablespace accordingly.
    Justin
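Both measurements mentioned above are visible in v$undostat; a minimal diagnostic query (my own sketch, using the 9i-and-later dictionary columns):

```sql
-- Each v$undostat row summarises a 10-minute interval of undo activity.
SELECT MAX(maxquerylen) AS longest_query_sec,  -- lower bound for UNDO_RETENTION
       MAX(undoblks)    AS peak_undo_blocks_per_10min,
       SUM(ssolderrcnt) AS ora_1555_errors     -- ORA-01555s actually raised
  FROM v$undostat;
```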

  • Snapshot too old error won't go  away ...

I am on Oracle 10.2.0.3 on HP-UX.
I have an 80GB database. I am doing a full export with consistent=Y. I get the following error towards the end of the export:
    . Exporting referential integrity constraints
    . exporting stored procedures
    . exporting operators
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting triggers
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    EXP-00008: ORACLE error 1555 encountered
    ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2$" too small
    EXP-00000: Export terminated unsuccessfully
    ~
    ~
No other jobs are running, and no users are even accessing the database at the time of the export. My undo tablespace is now 6GB (I started with 2GB; increasing it to 6GB has not helped). Is there a way to consistently get rid of the error? I am assuming the rest of my export is still valid.

    What's the size of the UNDO tablespace? Just increase it
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]
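"Just increase it" could look like either of the following (file names and sizes are examples only, not taken from the thread):

```sql
-- Grow the existing undo datafile...
ALTER DATABASE DATAFILE '/u01/oradata/orcl/undotbs01.dbf' RESIZE 12G;

-- ...or add another datafile to the undo tablespace.
ALTER TABLESPACE undotbs1
  ADD DATAFILE '/u01/oradata/orcl/undotbs02.dbf' SIZE 4G;
```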

  • Snapshot too Old Error - Help needed

One of the batch jobs is failing due to Oracle error "snapshot too old."
Please let me know the possible reasons this problem occurs, and suggest possible solutions to rectify it.
    Thanks in advance

    > this can mean different things:
    1. your rollback / undo is too small for this
    transaction,
Incorrect. A snapshot too old means that a consistent read cannot be maintained. A read is not a transaction - it does not cause any locks on rows.
What it can mean is that rollback is too small to maintain a consistent read for a long enough period for the read to be completed.
    > 2. commits aren't made often enough,
    NOT TRUE!! Not in Oracle. (sorry for the emphatic bold, but this an OWT within Oracle - it is very far from the truth in Oracle and Oracle is not SQL-Server)
    > 3. there are concurrent transactions which act on the
    same tables.
    This is usually the case - more accurately, fetching across commits. This is caused by creating a consistent read on a table (opening a cursor in PL/SQL), reading the rows (fetching from cursor in PL/SQL) and then updating those exact same rows.
    The consistent read deals with version n of the table. At the same time the exact same process updates those very same rows creating new versions of those rows.
    Rollbacks are overwritten (it is a circular buffer) and the version n of the rows cannot be maintained and "goes out of scope" as the rollbacks containing that version of the rows are re-used.

  • Snapshot too old error

    Hi All,
need suggestions for tuning a query failing with the error "Rollback segment too small; snapshot too old". The query is taking a long time to run.

Here is the explain plan (if you can understand it!!!). Unable to generate the tkprof as the query takes an exceptionally long time.
    EXECUTION_PLAN
    0 SELECT STATEMENT Cost = 18675
    1 NESTED LOOPS Cost = 1
    100 TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B Cost = 2
    101 INDEX UNIQUE SCAN MTL_SYSTEM_ITEMS_B_U1 UNIQUE
    2 NESTED LOOPS Cost = 1
    98 TABLE ACCESS BY INDEX ROWID WSH_NEW_DELIVERIES Cost = 2
    99 INDEX UNIQUE SCAN WSH_NEW_DELIVERIES_U1 UNIQUE
    3 NESTED LOOPS Cost = 1
    96 TABLE ACCESS BY INDEX ROWID WSH_DELIVERY_ASSIGNMENTS Cost = 2
    97 INDEX RANGE SCAN WSH_DELIVERY_ASSIGNMENTS_N3 NON-UNIQUE
    4 NESTED LOOPS Cost = 1
    94 TABLE ACCESS BY INDEX ROWID WSH_DELIVERY_DETAILS Cost = 2
    95 INDEX RANGE SCAN WSH_DELIVERY_DETAILS_N3 NON-UNIQUE
    5 NESTED LOOPS Cost = 1
    92 TABLE ACCESS BY INDEX ROWID HZ_PARTIES Cost = 2
    93 INDEX UNIQUE SCAN HZ_PARTIES_U1 UNIQUE
    6 NESTED LOOPS Cost = 1
    7 NESTED LOOPS Cost = 1
    8 NESTED LOOPS Cost = 1
    9 NESTED LOOPS Cost = 1
    10 NESTED LOOPS Cost = 1
    11 NESTED LOOPS Cost = 1
    12 NESTED LOOPS Cost = 1
    13 NESTED LOOPS Cost = 1
    14 NESTED LOOPS Cost = 1
    15 NESTED LOOPS Cost = 1
    16 NESTED LOOPS Cost = 1
    17 NESTED LOOPS Cost = 1
    18 NESTED LOOPS Cost = 1
    19 NESTED LOOPS Cost = 1
    20 NESTED LOOPS Cost = 1
    21 NESTED LOOPS Cost = 1
    22 NESTED LOOPS Cost = 1
    23 HASH JOIN Cost = 1
    24 TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL Cost = 1
    25 NESTED LOOPS Cost = 1
    26 NESTED LOOPS Cost = 1
    27 NESTED LOOPS Cost = 1
    28 HASH JOIN Cost = 1
    29 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION Cost = 1
    30 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_IX1 NON-UNIQUE Cost = 1
    31 NESTED LOOPS Cost = 2
    32 NESTED LOOPS Cost = 1
    33 NESTED LOOPS Cost = 1
    34 NESTED LOOPS Cost = 1
    35 NESTED LOOPS Cost = 1
    36 NESTED LOOPS Cost = 1
    37 NESTED LOOPS Cost = 1
    38 NESTED LOOPS Cost = 1
    39 TABLE ACCESS BY INDEX ROWID FND_FLEX_VALUE_SETS Cost = 1
    40 INDEX UNIQUE SCAN FND_FLEX_VALUE_SETS_U2 UNIQUE Cost = 1
    41 TABLE ACCESS BY INDEX ROWID FND_FLEX_VALUE_SETS Cost = 2
    42 INDEX UNIQUE SCAN FND_FLEX_VALUE_SETS_U2 UNIQUE
    43 TABLE ACCESS BY INDEX ROWID FND_FLEX_VALUES Cost = 2
    44 INDEX RANGE SCAN FND_FLEX_VALUES_N3 NON-UNIQUE
    45 TABLE ACCESS BY USER ROWID FND_FLEX_VALUES Cost = 2
    46 TABLE ACCESS BY INDEX ROWID HZ_LOCATIONS Cost = 2
    47 INDEX RANGE SCAN HZ_LOCATIONS_N5 NON-UNIQUE Cost = 1
    48 TABLE ACCESS BY INDEX ROWID HZ_PARTY_SITES Cost = 2
    49 INDEX RANGE SCAN HZ_PARTY_SITES_N2 NON-UNIQUE
    50 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCT_SITES_ALL Cost = 2
    51 INDEX RANGE SCAN HZ_CUST_ACCT_SITES_N1 NON-UNIQUE
    52 TABLE ACCESS BY INDEX ROWID HZ_CUST_SITE_USES_ALL Cost = 2
    53 INDEX RANGE SCAN HZ_CUST_SITE_USES_N1 NON-UNIQUE Cost = 1
    54 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK UNIQUE
    55 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION Cost = 2
    56 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 NON-UNIQUE
    57 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK UNIQUE
    58 INDEX RANGE SCAN OE_ORDER_HEADERS_N4 NON-UNIQUE
    59 TABLE ACCESS FULL FND_FLEX_VALUES Cost = 2
    60 TABLE ACCESS BY INDEX ROWID FND_FLEX_VALUES Cost = 2
    61 INDEX RANGE SCAN FND_FLEX_VALUES_N3 NON-UNIQUE
    62 TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL Cost = 2
    63 INDEX RANGE SCAN OE_ORDER_LINES_N1 NON-UNIQUE
    64 TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS Cost = 2
    65 INDEX UNIQUE SCAN MTL_PARAMETERS_U1 UNIQUE
    66 TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL Cost = 2
    67 INDEX RANGE SCAN XXCTS_OE_OE_ORDER_LINES_ALL_N3 NON-UNIQUE
    68 TABLE ACCESS BY INDEX ROWID OE_TRANSACTION_TYPES_TL Cost = 2
    69 INDEX RANGE SCAN OE_TRANSACTION_TYPES_TL_U1 UNIQUE Cost = 1
    70 TABLE ACCESS BY INDEX ROWID HZ_CUST_SITE_USES_ALL Cost = 2
    71 INDEX UNIQUE SCAN HZ_CUST_SITE_USES_U1 UNIQUE
    72 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK UNIQUE
    73 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION Cost = 2
    74 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 NON-UNIQUE
    75 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION Cost = 2
    76 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 NON-UNIQUE
    77 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK UNIQUE
    78 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCT_SITES_ALL Cost = 2
    79 INDEX UNIQUE SCAN HZ_CUST_ACCT_SITES_U1 UNIQUE
    80 TABLE ACCESS BY INDEX ROWID XXCTS_OE_ORDER_HEADERS_ALL Cost = 2
    81 INDEX UNIQUE SCAN XXCTS_OE_ORDER_HEADERS_ALL_U1 UNIQUE
    82 TABLE ACCESS BY INDEX ROWID HZ_PARTY_SITES Cost = 2
    83 INDEX UNIQUE SCAN HZ_PARTY_SITES_U1 UNIQUE
    84 TABLE ACCESS BY INDEX ROWID HZ_LOCATIONS Cost = 2
    85 INDEX UNIQUE SCAN HZ_LOCATIONS_U1 UNIQUE
    86 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS Cost = 2
    87 INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 UNIQUE
    88 TABLE ACCESS BY INDEX ROWID HZ_PARTIES Cost = 2
    89 INDEX UNIQUE SCAN HZ_PARTIES_U1 UNIQUE
    90 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS Cost = 2
    91 INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 UNIQUE
    102 rows selected.

  • Reg: snapshot too old error

    Hi Consultants ,
When I run requests I am getting the below error. Please advise.
    ORA-01555: snapshot too old: rollback segment number 6 with name "_SYSSMU6$" too small
    this is error getting for 2 programs.
    Thanks,
    Anu.

    Hi,
    What is the database version?
    Please see if these documents help.
    Note: 40689.1 - ORA-01555 "Snapshot too old" - Detailed Explanation
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=40689.1
    Note: 1005107.6 - ORA-01555: snapshot too old - Causes and Solutions
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=1005107.6
    Regards,
    Hussein

  • ORA-01555: snapshot too old error while ANALYZE command!!!

    Hi All,
I have a 2-node 10gR2 RAC on RHEL4. The database is working fine, and I am able to import the schema as well. But when I try to analyze all the tables in the schema, I get the following error. My undo tablespace is 1.2GB in both instances of the RAC, and undo_retention is set to 900 (default).
    ORA-01555: snapshot too old: rollback segment number with name "" too small
    Please, can anybody tell me what might be the problem?
    Thanks
    Praveen.

    Hi,
While the analyze of the table is going on, a lot of DML activity is also hitting the table. So run the analyze when there is minimal DML activity on the table. Increasing undo_tablespace and undo_retention might help in your scenario.
10g has guaranteed undo retention; you can also use that.
    Regards,
    Satheesh Babu.S
    http://www.satheeshbabus.blogspot.com
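Whether guaranteed retention is already in effect can be checked in the dictionary (10g and later):

```sql
-- RETENTION shows GUARANTEE or NOGUARANTEE for each undo tablespace.
SELECT tablespace_name, retention
  FROM dba_tablespaces
 WHERE contents = 'UNDO';
```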
