ORA-04092 trying to analyze table

Hi,
I was trying to analyze some tables issuing:
analyze table owner.table_name compute statistics;
It immediately raises ORA-04092. As far as I have seen, read, and googled, this error should only occur when someone tries to COMMIT inside a trigger, but that is clearly not the case here.
The same error also appears in some of our web applications.
The error started to occur after the session limit was exceeded; that doesn't sound related to me, but it may be useful information.
Any suggestions please, I'm confused.
Regards

Here more information:
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
OS: RHEL AS 5
Following the recommendation, I tried to use DBMS_STATS, with the same results:
SQL> conn system/*******@siin
Connected.
SQL> conn mailedu/*******@siin
ERROR:
ORA-04092: cannot COMMIT in a trigger
ERROR:
ORA-24315: illegal attribute type
Warning: You are no longer connected to ORACLE.
SQL> conn mailedu/********@siin
ERROR:
ORA-24315: illegal attribute type
SQL> conn system/********@siin
ERROR:
ORA-24315: illegal attribute type
After restarting sqlplusw I could connect again (only the first connection succeeds), and I tried dbms_stats:
SQL> exec dbms_stats.gather_schema_stats ( -
ownname => 'MAILEDU', -
options => 'GATHER' );
BEGIN dbms_stats.gather_schema_stats ( ownname => 'MAILEDU', options => 'GATHER' ); END;
ERROR at line 1:
ORA-04092: cannot COMMIT in a trigger
ORA-06512: at "SYS.DBMS_STATS", line 14002
ORA-06512: at "SYS.DBMS_STATS", line 13974
ORA-06512: at line 1
Looking further inside the forum I found this thread:
stupid ora-04092 error message
In the conversation it is mentioned that, according to Metalink document 7886990 ("AFTER OCCURRING ORA-18, ORACLE THROWS ORA-4092 CONTINOUSLY"),
the error can occur after ORA-00018 (maximum number of sessions exceeded),
which is exactly my scenario.
And...
Bug No: 6977167
Filed: 16-APR-2008, Updated: 19-AUG-2009
Product: Oracle Server - Enterprise Edition, Product Version: 10.2.0.2
Platform: Linux x86-64, Platform Version: rel4
Database Version: 10.2.0.2, Affects Platforms: Generic
Severity: Severe Loss of Service, Status: Development to Q/A
Base Bug: N/A, Fixed in Product Version: 11.2
So I would like to mark this as a known bug, but I don't know how to proceed.
Regards

Similar Messages

  • ORA-20000: Unable to analyze TABLE "ECI"."COUNTRY"

    Oracle9i 9.2.0.7 on Windows Server 2003 32bit
    Using the "ANALYZE" in the Enterrpise Manager Console
    begin
    dbms_stats.gather_table_stats(ownname=>'ECI',tabname=>'COUNTRY',partname=>NULL);
    end;
    ORA-20000: Unable to analyze TABLE "ECI"."COUNTRY", insufficient privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 10292
    ORA-06512: at "SYS.DBMS_STATS", line 10315
    ORA-06512: at line 2
    Using SQLPLUS
    SQL>begin
    2>dbms_stats.gather_table_stats(ownname=>'ECI',tabname=>'country',partname=>NULL);
    3>end;
    4>/
    ORA-20000: Unable to analyze TABLE "ECI"."COUNTRY", insufficient privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 10292
    ORA-06512: at "SYS.DBMS_STATS", line 10315
    ORA-06512: at line 2
    COMMENT: I noticed here that even though I specifically used (tabname=>'country'), it still used "ECI"."COUNTRY" (all caps) when executing my statement.
    I also tested on other procedure.
    Using SQLPLUS
    SQL>begin
    2>dbms_redefinition.can_redef_table('ECI','country',dbms_redefinition.cons_use_pk);
    3>end;
    4>/
    BEGIN
    ERROR at line 1:
    ORA-00942: table or view does not exist
    ORA-06512: at "SYS.DBMS_REDEFINITION", line 8
    ORA-06512: at "SYS.DBMS_REDEFINITION", line 247
    ORA-06512: at line 2
    I don't understand why this error happens because
    a) the schema and table exist (I double checked)
    b) the error only happens in a single schema and only for the old tables; when I create new tables I can "ANALYZE" them. I can also "ANALYZE" the indexes.
    c) I have used both the SYS and SYSTEM users, logging in as SYSDBA
    In the following exercise, I noticed that "ECI"."productrange" will work but "ECI"."PRODUCTRANGE" won't:
    SQL>select count(*) from "ECI"."productrange";
    COUNT(*)
    8
    SQL>select count(*) from "ECI"."PRODUCTRANGE";
    select count(*) from "ECI"."PRODUCTRANGE"
    ERROR at line 1:
    ORA-00942: table or view does not exist
    Can anyone kindly help me?

    You should not be creating tables in Oracle with names enclosed in double quotes. In that case Oracle preserves the case, making it difficult for others to identify the table.
    Create the table without using double quotes (perhaps with CTAS) and everything should work fine.
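    A minimal sketch of the behaviour described above (the table name is illustrative; passing a quoted name to dbms_stats is shown only to explain the symptom, not as the recommended fix):
    create table "productrange" (id number);   -- quoted: name stored exactly as productrange (lowercase)
    create table PRODUCTRANGE (id number);     -- unquoted: stored as PRODUCTRANGE (uppercase), a different table
    select count(*) from "productrange";       -- must be quoted to reach the lowercase table
    select count(*) from productrange;         -- an unquoted reference resolves to PRODUCTRANGE
    -- dbms_stats upper-cases an unquoted tabname, so a case-sensitive name has to be passed quoted:
    exec dbms_stats.gather_table_stats(ownname => user, tabname => '"productrange"')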

  • OWB ORA-2000 unable to analyze table

    During execution I received the error above. In the mapping, I have two targets in two different schemas. I receive the error on the target that is in a different schema than the mapping.
    Is there a workaround? We do not have rights to grant analyze any table to the schema owner of the mapping.

    In OWB select Mapping, context menu Properties -> Code generation options -> Change Analyze table statements to false.
    Bye
    Detlef

  • ORA-6512 when using Analyze in 9.2.0.7

    Good Day! Kindly Help
    I'm using 9.2.0.7 on Windows server 2003 SP1.
    Whenever I use the Analyze Wizard located in the Oracle EM Console, just to compute statistics for a certain table, the following is the query that is executed:
    begin dbms_stats.gather_table_stats(ownname=>'ECI',tabname=>'AGGREGATEENTITY',partname=>NULL); end;
    It produces the following error:
    ORA-20000: Unable to analyze TABLE "ECI"."AGGREGATEENTITY", insufficient privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 10292
    ORA-06512: at "SYS.DBMS_STATS", line 10315
    ORA-06512: at line 2
    It produces the same error for only one of my schemas. By the way, I'm running this as user SYS as SYSDBA.

    You don't seem to have the privileges to analyze the table. You would need the ANALYZE ANY privilege (or others, I can't remember).
    HTH
    Thanks
    Chandra Pabba
    Message was edited by:
    ChandraP
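    For reference, a minimal sketch of the grant suggested above (the user and table names are taken from the thread and used purely for illustration):
    grant analyze any to eci;
    -- then, connected as that user (or as the table owner), retry:
    exec dbms_stats.gather_table_stats(ownname => 'ECI', tabname => 'AGGREGATEENTITY')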

  • ORA-00081 & ORA-00600 while analyzing tables in 10g 10.2.0.2

    Did anyone face the below issue while analyzing the tables and table partitions?
    ORA-00081: address range [0x600000000009EC20, 0x600000000009EC24) is not readable
    ORA-00600: internal error code, arguments: [qkaffsindex3], [], [], [], [], [], [], []
    The failing SQL is:
    ANALYZE TABLE "SAP<SID>"."/BIC/E100076" PARTITION ("/BIC/E100076142008007") COMPUTE STATISTICS FOR TABLE FOR COLUMNS SIZE 75 "SID_0FISCPER","KEY_1000761","KEY_1000762","KEY_1000763","KEY_1000765","KEY_1000766","KEY_1000767","KEY_100076P","KEY_ZXGL_C19T","KEY_ZXGL_C19U" FOR ALL LOCAL INDEXES
    Oracle version is 10.2.0.2 and OS is HP-UX Itanium ia64 11.23.

    I almost forgot it, anyway:
    Do not use the COMPUTE and ESTIMATE clauses of ANALYZE statement to collect optimizer statistics. These clauses are supported solely for backward compatibility and may be removed in a future release. The DBMS_STATS package collects a broader, more accurate set of statistics, and gathers statistics more efficiently.
    You may continue to use the ANALYZE statement for other purposes not related to optimizer statistics collection:
    To use the VALIDATE or LIST CHAINED ROWS clauses
    To collect information on free list blocks
    Further reference:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
    Adith
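    As a hedged illustration, the failing ANALYZE above could be replaced by a DBMS_STATS call along these lines (owner, table and partition names are taken from the post; the sampling and method_opt choices are assumptions, not the original column list):
    begin
      dbms_stats.gather_table_stats(
        ownname          => 'SAP<SID>',                   -- schema as shown in the post (placeholder SID)
        tabname          => '/BIC/E100076',
        partname         => '/BIC/E100076142008007',
        estimate_percent => dbms_stats.auto_sample_size,  -- sample instead of COMPUTE
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',  -- assumption: replaces the explicit column list
        cascade          => true);                        -- also gathers the local index statistics
    end;
    /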

  • Analyze table + ORA-3113

    We have a process that runs weekly to analyze tables with the estimate statistics option.
    In only one database it raises the error ORA-03113, end-of-file on communication channel.
    Is there a solution for this problem?
    Thanks
    Angie

    The ORA-3113 error occurs when your session loses its connection with the database for some reason. There is no single cause for this error, or at least none I was ever able to find in several attempts to solve different problems. Your only hope is intensive problem solving. Here are a couple of possibilities to help you get started.
    If you are getting this on an analyze, the most likely reason is a corrupt data block in the datafile. This could be either at the Oracle level or at the OS level. Try to determine if it always occurs for the same object. If so, then try to analyze it manually using VALIDATE STRUCTURE. If this fails, then try exporting the object, re-creating it, and then importing the data into the newly created object.
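    For example, the manual check mentioned above might look like this (the table name is illustrative):
    analyze table owner.suspect_table validate structure cascade;
    -- If corruption is found, the statement fails (e.g. with ORA-01498) and details are written
    -- to a trace file in user_dump_dest.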
    Another possibility is that some other process is doing something that analyze cannot cope with. Check if the error occurs at the same time every time. If so, check if there are other jobs running against the database at the same time. Try rescheduling the other jobs, or the analyze to see if that solves the problem. It is also possible that an OS job could cause this even if it is not using the database (We had a backup job that would occasionally eat our server. This sometimes caused Oracle connections to time out and raise 3113 when the backup job finished.).
    Check any OS error logs available to see if anything shows up around the time of the 3113 error.
    You should also check the Oracle alert logs, and look for trace files in udump, bdump and cdump directories.
    Good Luck.

  • SQL Error: ORA-04092: cannot COMMIT in a trigger

    I am trying to drop a table inside a trigger but I'm unable to do it.
    SQL Error: ORA-04092: cannot COMMIT in a trigger
    I need to drop a table based on some condition; say the condition is that an archive table with more than a million records is of no use, so I plan to drop it.
    I will insert the name of the unwanted table into mytable, and the trigger on mytable will fire to drop that table.
    I need this to happen automatically, which is why I chose a trigger.
    Is there any automatic way other than a trigger in this case?

    933663 wrote:
    Is there any automatic way other than a trigger in this case?
    You can't COMMIT inside a trigger. Oracle issues an implicit COMMIT before and after executing DDL, so you can't run DDL in a trigger either. You may get the suggestion to use an AUTONOMOUS_TRANSACTION to perform the COMMIT within the trigger, but don't do that; it is a bad idea.
    I suggest you look back at your requirement and see what exactly you want. You could schedule a job that runs on a daily basis, picks up the object details from your table, and drops them accordingly.
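    A rough sketch of that job-based approach (the drop_queue table, its columns, the schedule, and the job name are made up for illustration):
    begin
      dbms_scheduler.create_job(
        job_name        => 'PURGE_ARCHIVED_TABLES',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          begin
            -- DDL is fine here: this runs in its own job session, not inside a trigger
            for r in (select owner, table_name from drop_queue where status = 'PENDING') loop
              execute immediate 'drop table "' || r.owner || '"."' || r.table_name || '" purge';
              update drop_queue set status = 'DONE'
               where owner = r.owner and table_name = r.table_name;
            end loop;
            commit;
          end;]',
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',
        enabled         => true);
    end;
    /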

  • ANALYZE Tables

    I am trying to find out what is the best approach for Optimizer_Mode and ANALYZE Tables. In v11.0.3 NAC, is it still RULE Mode and NO ANALYZE?
    What is the scenario in 11i?
    RDBMS is currently 8.1.5.1 but will be 8.1.6.x this weekend.
    Also does anyone have a shareable list of parameters for the init<SID>.ora? I have a machine with 18 CPU and a huge chunk of RAM.
    Regards

    Remember:
    - RULE mode is for transactional Oracle Applications. Therefore, Forms screens and so on are coded with RULE optimization in mind.
    - The benefits of ANALYZE can be proven with selected reports that use HINTS in the SELECT statement.
    => I suggest you monitor the reports that are submitted and their execution times, collect their names, and verify their code. You'll get a deep view of your system.
    If instead you analyze all schemas without question (careful, it can take a long time!), you will never know whether it was needed or not.

  • Error while impdp: ORA-02374: conversion error loading table

    Hi,
    I am trying to convert the character set from WE8ISO8859P1 to AL32UTF8 using expdp/impdp. For this I first converted WE8ISO8859P1 to WE8MSWIN1252 in the source DB to get rid of "lossy" data. I created a new database (target) with character set AL32UTF8 and nls_length_semantics = 'CHAR', and created all the tablespaces as in the source DB with autoextend on. I took a full export (expdp) of the source DB excluding TABLESPACE, STATISTICS, INDEX, CONSTRAINT, REF_CONSTRAINT and imported it using impdp into the target DB. I found the below error in the import log file:
    ORA-02374: conversion error loading table "SCTCVT"."SPRADDR_CVT"
    ORA-26093: input data column size (44) exceeds the maximum input size (40)
    ORA-02372: data for row: CONVERT_STREET_LINE1 : 0X'20202020202020202020202020202020202020202020202020'
    I checked with select query on both DBs with below results.
    source DB:
    04:58:42 SQL> select count(*) from "SCTCVT"."SPRADDR_CVT";
    COUNT(*)
    74553
    target DB:
    04:59:24 SQL> select count(*) from "SCTCVT"."SPRADDR_CVT";
    COUNT(*)
    74552
    please suggest me a solution to this.
    Thanks and Regards.
    Edited by: user12045167 on May 9, 2011 10:39 PM

    Thanks for your update maher.
    09:15:53 SQL> desc "SCTCVT"."SPRADDR_CVT"
    Name Null? Type
    SPRADDR_PIDM NUMBER(8)
    CONVERT_PIDM VARCHAR2(9 CHAR)
    SPRADDR_ATYP_CODE VARCHAR2(2 CHAR)
    CONVERT_ATYP_CODE VARCHAR2(2 CHAR)
    SPRADDR_SEQNO NUMBER(2)
    CONVERT_SEQNO VARCHAR2(2 CHAR)
    SPRADDR_FROM_DATE DATE
    CONVERT_FROM_DATE VARCHAR2(8 CHAR)
    SPRADDR_TO_DATE DATE
    CONVERT_TO_DATE VARCHAR2(8 CHAR)
    SPRADDR_STREET_LINE1 VARCHAR2(30 CHAR)
    CONVERT_STREET_LINE1 VARCHAR2(40 CHAR)
    SPRADDR_STREET_LINE2 VARCHAR2(30 CHAR)
    CONVERT_STREET_LINE2 VARCHAR2(40 CHAR)
    SPRADDR_STREET_LINE3 VARCHAR2(30 CHAR)
    CONVERT_STREET_LINE3 VARCHAR2(40 CHAR)
    SPRADDR_CITY VARCHAR2(20 CHAR)
    CONVERT_CITY VARCHAR2(25 CHAR)
    SPRADDR_STAT_CODE VARCHAR2(3 CHAR)
    CONVERT_STAT_CODE VARCHAR2(25 CHAR)
    SPRADDR_ZIP VARCHAR2(10 CHAR)
    CONVERT_ZIP VARCHAR2(15 CHAR)
    SPRADDR_CNTY_CODE VARCHAR2(5 CHAR)
    CONVERT_CNTY_CODE VARCHAR2(5 CHAR)
    SPRADDR_NATN_CODE VARCHAR2(5 CHAR)
    CONVERT_NATN_CODE VARCHAR2(5 CHAR)
    SPRADDR_PHONE_AREA VARCHAR2(3 CHAR)
    CONVERT_PHONE_AREA VARCHAR2(3 CHAR)
    SPRADDR_PHONE_NUMBER VARCHAR2(7 CHAR)
    CONVERT_PHONE_NUMBER VARCHAR2(7 CHAR)
    SPRADDR_PHONE_EXT VARCHAR2(4 CHAR)
    CONVERT_PHONE_EXT VARCHAR2(4 CHAR)
    SPRADDR_STATUS_IND VARCHAR2(1 CHAR)
    CONVERT_STATUS_IND VARCHAR2(1 CHAR)
    SPRADDR_ACTIVITY_DATE DATE
    CONVERT_ACTIVITY_DATE VARCHAR2(8 CHAR)
    SPRADDR_USER VARCHAR2(30 CHAR)
    CONVERT_USER VARCHAR2(30 CHAR)
    SPRADDR_ASRC_CODE VARCHAR2(4 CHAR)
    CONVERT_ASRC_CODE VARCHAR2(4 CHAR)
    SPRADDR_DELIVERY_POINT NUMBER(2)
    CONVERT_DELIVERY_POINT VARCHAR2(2 CHAR)
    SPRADDR_CORRECTION_DIGIT NUMBER(1)
    CONVERT_CORRECTION_DIGIT VARCHAR2(1 CHAR)
    SPRADDR_CARRIER_ROUTE VARCHAR2(4 CHAR)
    CONVERT_CARRIER_ROUTE VARCHAR2(4 CHAR)
    SPRADDR_GST_TAX_ID VARCHAR2(15 CHAR)
    CONVERT_GST_TAX_ID VARCHAR2(15 CHAR)
    SPRADDR_REVIEWED_IND VARCHAR2(1 CHAR)
    CONVERT_REVIEWED_IND VARCHAR2(1 CHAR)
    SPRADDR_REVIEWED_USER VARCHAR2(30 CHAR)
    CONVERT_REVIEWED_USER VARCHAR2(30 CHAR)
    SPRADDR_DATA_ORIGIN VARCHAR2(30 CHAR)
    CONVERT_DATA_ORIGIN VARCHAR2(30 CHAR)
    SPRADDR_CVT_RECORD_ID NUMBER(8)
    SPRADDR_CVT_STATUS VARCHAR2(1 CHAR)
    SPRADDR_CVT_JOB_ID NUMBER(8)
    So here we can see its size is 40 characters (CONVERT_STREET_LINE1 VARCHAR2(40 CHAR)).
    Shall I go ahead and alter the column?
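    If widening the column is the chosen fix, a minimal sketch would be (the new size of 50 CHAR is an assumption; it only needs to cover the 44 bytes reported by the import error):
    alter table "SCTCVT"."SPRADDR_CVT" modify (CONVERT_STREET_LINE1 varchar2(50 char));
    -- then re-import the data for this table, e.g. with impdp TABLES=SCTCVT.SPRADDR_CVT TABLE_EXISTS_ACTION=TRUNCATE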

  • Query takes very long time and analyze table hangs

    Hi
    One of our Oracle queries takes a very long time (i.e. more than a day) and is affecting the business requirement of getting the report out in time.
    I tried to analyze the table with the compute statistics option, however it hangs/runs forever on one huge table.
    Please let me know how to troubleshoot this issue.

    Hi,
    What's your Oracle version?
    You should use the DBMS_STATS package, not ANALYZE.
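    For example, a DBMS_STATS call that samples rather than computing everything might look like this (the schema/table names and the parallel degree are illustrative assumptions):
    begin
      dbms_stats.gather_table_stats(
        ownname          => 'APP_OWNER',
        tabname          => 'BIG_TABLE',
        estimate_percent => dbms_stats.auto_sample_size,  -- sample instead of a full COMPUTE
        degree           => 4,                            -- gather in parallel
        cascade          => true);                        -- include the indexes
    end;
    /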
    Regards,

  • ORA-02374: conversion error loading table during import using IMPDP

    HI All,
    We are trying to migrate the data from one database to an other database.
    The source database is having character set
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    US7ASCII
    The destination database is having character set
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    We took an export of the whole database using expdp, and when we try to import it into the destination database using impdp we get the following error:
    ORA-02374: conversion error loading table <TABLE_NAME>
    ORA-12899: value too large for column <COLUMN NAME> (actual: 42, maximum: 40)
    ORA-02372: data for row:<COLUMN NAME> : 0X'4944454E5449464943414349E44E204445204C4C414D414441'
    Kindly let me know how to overcome this issue in destination.
    Thanks & Regards,
    Vikas Krishna

    Hi,
    You can overcome this issue by increasing the column width in the target database to the maximum size required, so that all the data can be imported successfully into the table.
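    As a hedged illustration, the rows that will no longer fit after conversion can be located in the source database with something like this (MY_TABLE/MY_COL are hypothetical names standing in for the redacted table and column; 40 is the target column limit):
    select rowid, my_col
    from   my_table
    where  lengthb(convert(my_col, 'AL32UTF8')) > 40;
    -- then either shorten that data or widen the target column before re-running impdp, e.g.:
    -- alter table my_table modify (my_col varchar2(40 char));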
    Regards

  • ORA-12060: shape of prebuilt table does not match definition query

    Oracle version: 11G Release 2
    When I try to create a materialized view using the ON PREBUILT TABLE syntax, I am facing the issue below.
    Create table sample_table as select col1,col2,col3 from sample_view;
    table created.
    Create Materialized view sample_table on prebuilt table refresh complete on demand as
    select col1,col2,col3 from sample_view;
    I am getting the below exception
    Error report:
    SQL Error: ORA-12060: shape of prebuilt table does not match definition query
    12060. 00000 - "shape of prebuilt table does not match definition query"
    *Cause:    The number of columns or the type or the length semantics of a
    column in the prebuilt table did not match the materialized
    view definition query.
    *Action:   Reissue the SQL command using BUILD IMMEDIATE, BUILD DEFERRED, or
    ensure that the prebuilt table matches the materialized view
    definition query.
    How to resolve this issue?

    SQL> create table sample_table as
      2  select owner, table_name, tablespace_name
      3  from dba_tables
      4  where rownum < 11;
    Table created.
    SQL> Create Materialized view sample_table on prebuilt table refresh complete on demand as
      2  select owner, table_name, tablespace_name
      3  from dba_tables;
    Materialized view created.
    What issue? Which leads me to ask what version of Oracle you have, because we don't know:
    SELECT *
    FROM v$version;

  • Where is the output of analyze table name validate structure cascade

    Hi,
    database version:8.1.7.0.0
    os :solaris 5.9
    Since I keep getting ORA-00600: internal error code, arguments: [25012], [7], [39], I need to validate the table. The table is very large (200 GB). Where will the output be generated if there is an error anywhere in the table, including its indexes?
    sql>analyze table event_t validate structure cascade;
    Regards
    Prakash

    Hello Helios
    sorry ,
    I am using 10.2 and reviewing
    http://docs.oracle.com/cd/B14117_01/server.101/b10759/statements_4005.htm#sthref4205
    My quote is from this document. My question is about the ideal case, when there is no block corruption.
    regards ,
    Pavel
    Edited by: Pavel on Oct 17, 2012 3:55 AM

  • ORA-1653 (unable to extend table) and ORA-1654  (unable to extend index)

    Hi,
    We recently installed 12c R1 and have had it running now for some three weeks, with about 100 assets currently in it.
    When trying to add a new discovery profile I received an error message from the BUI; in the cacao log on the EC I found a lot of Java exceptions caused (probably) by: Internal Exception: java.sql.SQLException: ORA-01653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    When looking at the alert log of the database I found it is full of ORA-1653 and ORA-1654 messages (and those errors are still being written to the alert log on a continuous basis):
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.VDO_SERVICE_INFO by 128 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.VDO_SERVICE_INFO by 128 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    And
    ORA-1654: unable to extend index OC.VMB_RESOURC_ASSOCIA_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VMB_RESOURC_ASSOCIA_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VMB_RESOURC_ASSOCIA_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VMB_RESOURCE_CAPABIL1_UNQIDX by 128 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VMB_RESOURCE_CAPABIL1_UNQIDX by 128 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VMB_RESOURCE_CAPABIL1_UNQIDX by 128 in tablespace OC_DEFAULT_TS
    Only thing i could think of would be a space issue in the filesystem. But there's still some 15G of free space available for the DB to extend.
    Any clues as to where to find the cause of this?
    Thanks in advance
    Kind regards
    Patrick

    Hi,
    Sorry for the late response (wasn't in the office last week)
    I've extended the zpool with additional LUNs; now there is 168GB of free space (total DB size now 42GB), so sufficient free space should be available. Unfortunately, after a restart of the DB the alert file is again flooded with ORA-1653/ORA-1654 messages on a continuous basis:
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_ALERT_MONITOR_ST1_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_ALERT_MONITOR_ST1_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    Mon Jul 16 13:56:46 2012
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    Mon Jul 16 13:56:55 2012
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    ORA-1654: unable to extend index OC.VDO_SENSOR_INFO_ID_UNQIDX by 8 in tablespace OC_DEFAULT_TS
    Mon Jul 16 13:57:02 2012
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    ORA-1653: unable to extend table OC.PERSISTENTALERT by 8 in tablespace OC_DEFAULT_TS
    etc,.....etc,......etc,.....
    Unsure what to do.
    I checked the PCT_USED with a script and found:
    NAME MBYTES USED FREE PCT_USED LARGEST MAX_SIZE PCT_MAX_USED EXTENT_MAN SEGMEN
    USERS 5 1.31 3.69 26.25 3.69 32767.98 0 LOCAL AUTO
    OC_INDEX_TS 100 1 99 1 99 32767 0 LOCAL AUTO
    OC_DATA_TS 100 1 99 1 99 32767 0 LOCAL AUTO
    TEMP 174 174 0 100 0 32767.98 .53 LOCAL MANUAL
    SYSTEM 720 711.31 8.69 98.79 8 32767.98 2.17 LOCAL MANUAL
    SYSAUX 1230 1148.44 81.56 93.37 64.44 32767.98 3.5 LOCAL AUTO
    UNDOTBS1 7625 445.75 7179.25 5.85 3656 32767.98 1.36 LOCAL MANUAL
    OC_DEFAULT_TS 32767 32767 0 100 0 32767 100 LOCAL AUTO
    8 rows selected.
    Seems the OC_DEFAULT_TS is 100% full.
    Shouldn't this autoextend?!?
    I'm no DBA, and the OPCenter installation is default 'out-of-the-box' on a new system. Only running for a month now with about 100 assets.
    Any help appreciated
    Thanks
    Patrick
    Edited by: Patrick on Jul 16, 2012 3:13 PM
    Edited by: Patrick on Jul 16, 2012 3:15 PM
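    For what it's worth, a sketch of how this is usually addressed: the PCT_MAX_USED of 100 above suggests the single OC_DEFAULT_TS datafile has reached its 32767 MB ceiling, so autoextend cannot grow it any further and an additional datafile is needed (the file path below is an assumption):
    alter tablespace OC_DEFAULT_TS
      add datafile '/path/to/oc_default_ts02.dbf' size 1g autoextend on next 100m maxsize unlimited;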

  • ORA-01653: unable to extend table DISPATCH.T_EVENT_DATA by 4096 in tablespa

    Hello everybody,
    I will try to explain the problem I had, because I still don't understand the real cause.
    Everything started when I got this error:
    ORA-01653: unable to extend table DISPATCH.T_EVENT_DATA by 4096 in tablespace USERS
    I'm using ASM.
    This was the situation of the tablespace USERS:
    FILE NAME                                                 TB NAME   SIZE (gb)                   STATUS               
    DATA/evodb/datafile/users.261.662113927     USERS     63,999969482421875     AVAILABLE
    and this was the situation of the DATA diskgroup:
    GR # NAME        FREE_MB    USABLE     STATE      SECTOR SIZE   BLOCKSIZE
    2     DATA     60000     60000     MOUNTED     512     4096
    That diskgroup is composed of 5 disks:
    PATH       DISK#       GR NAME           FREE MB    OS MB       TOTAL MB NAME                FAILGROUP
    /dev/asm2     0     DATA          12000     48127     48127     DATA_0000     DATA_0000
    /dev/asm3     1      DATA          12000     48127     48127     DATA_0001     DATA_0001
    /dev/asm4     2     DATA          12000     48127     48127     DATA_0002     DATA_0002
    /dev/asm5     3     DATA          12000     48127     48127     DATA_0003     DATA_0003
    /dev/asm6     4     DATA          12000     48127     48127     DATA_0004     DATA_0004
    This is the information about the table, taken from dba_tables:
    OWNER     DISPATCH
    TABLE_NAME     T_EVENT_DATA
    TABLESPACE_NAME USERS
    CLUSTER_NAME     
    IOT_NAME     
    STATUS     VALID
    PCT_FREE     10
    PCT_USED     
    INI_TRANS     1
    MAX_TRANS     255
    INITIAL_EXTENT     4294967296
    NEXT_EXTENT     
    MIN_EXTENTS     1
    MAX_EXTENTS     2147483645
    PCT_INCREASE     
    FREELISTS     
    FREELIST_GROUPS     
    LOGGING     YES
    BACKED_UP      N
    NUM_ROWS     532239723
    BLOCKS     1370957
    EMPTY_BLOCKS     0
    AVG_SPACE      0
    CHAIN_CNT 0
    AVG_ROW_LEN     32
    AVG_SPACE_FREELIST_BLOCKS     0
    NUM_FREELIST_BLOCKS     0
    DEGREE     1
    INSTANCES     1
    CACHE     N
    TABLE_LOCK     ENABLED
    SAMPLE_SIZE     532239723
    LAST_ANALYZED 21/09/2009 22.45
    PARTITIONED     NO
    IOT_TYPE     
    TEMPORARY     N
    SECONDARY      N
    NESTED     NO
    BUFFER_POOL     DEFAULT
    ROW_MOVEMENT DISABLED
    GLOBAL_STATS     YES
    USER_STATS     NO
    DURATION     
    SKIP_CORRUPT     DISABLED
    MONITORING     YES
    CLUSTER_OWNER     
    DEPENDENCIES     DISABLED
    COMPRESSION     DISABLED
    COMPRESS_FOR     
    DROPPED      NO
    READ_ONLY     NO
    So, my question is:
    Why did it happen?
    Why was the table unable to allocate the space? From what I can see, the space was there.
    I also tried an ALTER TABLESPACE USERS COALESCE, but with no luck.
    To solve the problem, I had to create another tablespace and move the T_EVENT_DATA table there.
    Looking forward to read some answer,
    thanks in advance!

    There can be two reasons:
    1.) The datafile is unable to extend because autoextend is set to NO.
    2.) The datafile has reached the MAXSIZE specified at datafile creation.
    Query dba_data_files view and confirm this.
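    For example (a sketch; the column names come from the DBA_DATA_FILES view):
    select file_name,
           bytes/1024/1024    as size_mb,
           autoextensible,
           maxbytes/1024/1024 as max_mb
    from   dba_data_files
    where  tablespace_name = 'USERS';
    -- If AUTOEXTENSIBLE is NO, or SIZE_MB has already reached MAX_MB, the datafile cannot grow any further.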
    Regards.
