Datafile Extend

Hi All,
Oracle 10gR2 on a Unix-based platform.
Using ASSM, and the datafiles are autoextensible. But I got this error while running a client-side process:
OCI0000178 - Unable to execute - INSERT INTO VPRODDTA.F57FGIGL (DGCPY, DGMCU, DGLOCN, DGITM, DGLITM, DGLOTN, DGFYR).........
OCI0000179 - Error - ORA-01654: unable to extend index VPRODDTA.F57STKD_0 by 128 in tablespace VPRODDTAI
If it is autoextensible, why can't it extend the index segment?

ASSM does not mean that the datafiles are AutoExtensible.
ASSM is a segment attribute.
AutoExtensibility is a physical attribute for the datafiles.
Datafiles in a tablespace may or may not be autoextensible.
Autoextensibility may be turned OFF after they are created.
One or more datafiles may have hit the maxsize already.
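A quick check against the data dictionary (a sketch; the tablespace name is taken from the error message above):
SELECT file_name,
       autoextensible,
       bytes / 1024 / 1024    AS size_mb,
       maxbytes / 1024 / 1024 AS maxsize_mb
FROM   dba_data_files
WHERE  tablespace_name = 'VPRODDTAI';
Any file showing AUTOEXTENSIBLE = 'NO', or whose SIZE_MB has already reached its MAXSIZE_MB, explains the ORA-01654.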
Hemant K Chitale

Similar Messages

  • OC4J Wrapper problem

    Hi,
    I'm getting the following error during deployment. The code compiles, deploys, and works fine in J2EE:
    Auto-deploying DataFileEntity-ejb.jar (No previous deployment found)...
    DataFileHome_EntityHomeWrapper2.java:694: 'finally' without 'try'.
    finally
    ^
    DataFileHome_EntityHomeWrapper2.java:699: 'try' without 'catch' or 'finally'.
    ^
    DataFileHome_EntityHomeWrapper2.java:703: 'catch' without 'try'.
    catch(java.sql.SQLException e)
    ^
    DataFileHome_EntityHomeWrapper2.java:747: '}' expected.
    ^
    DataFileHome_EntityHomeWrapper2.java:749: 'try' without 'catch' or 'finally'.
    public DataFileHome_EntityHomeWrapper2() throws java.rmi.RemoteException
    ^
    DataFileHome_EntityHomeWrapper2.java:749: Statement expected.
    public DataFileHome_EntityHomeWrapper2() throws java.rmi.RemoteException
    ^
    6 errors
    Error compiling :\OC4J\j2ee\home\applications\FMSTest/DataFileEntity-ejb.jar: Syntax error in source
    Any Idea what's going on?
    Is this an OC4J bug?
    Thanks
    Nabil

    Hi Ray,
    This is the code:
    /**
     * Title: DataFile
     * Description: Remote interface, CMT entity ejb
     * Copyright: Copyright (c) 2001
     * Company:
     * @author Nabil Khalil
     * @version 1.0
     * @since JDK1.3, J2SDKEE1.2.1
     */
    package com.equifax.fms.ejbs.datafile;
    import javax.ejb.EJBObject;
    import java.rmi.RemoteException;
    public interface DataFile extends EJBObject {
        public int getModelNum() throws RemoteException;
        public String getFileName() throws RemoteException;
        public String getFileType() throws RemoteException;
        public int getFileLrecl() throws RemoteException;
        public String getFilePath() throws RemoteException;
        public float getFileWeight() throws RemoteException;
        public void setModelNum(int modelNum) throws RemoteException;
        public void setFileType(String fileType) throws RemoteException;
        public void setFileName(String fileName) throws RemoteException;
        public void setFileLrecl(int fileLrecl) throws RemoteException;
        public void setFilePath(String filePath) throws RemoteException;
        public void setFileWeight(float fileWeight) throws RemoteException;
    }
    /**
     * Title: DataFileHome
     * Description: Home interface, CMT entity ejb
     * Copyright: Copyright (c) 2001
     * Company:
     * @author Nabil Khalil
     * @version 1.0
     * @since JDK1.3, J2SDKEE1.2.1
     */
    package com.equifax.fms.ejbs.datafile;
    import java.util.Collection;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.FinderException;
    import javax.ejb.EJBHome;
    public interface DataFileHome extends EJBHome {
        public DataFile create(int modelNum, String fileType, String fileName,
                int fileLrecl, String filePath, float fileWeight)
                throws RemoteException, CreateException;
        public DataFile findByPrimaryKey(DataFilePKey PKey) throws FinderException, RemoteException;
        public Collection findByModel(int model) throws FinderException, RemoteException;
    }
    /**
     * Title: DataFileEJB
     * Description: Bean class, CMT entity ejb
     * Copyright: Copyright (c) 2001
     * Company:
     * @author Nabil Khalil
     * @version 1.0
     * @since JDK1.3, J2SDKEE1.2.1
     */
    package com.equifax.fms.ejbs.datafile;
    import javax.ejb.EntityBean;
    import javax.ejb.CreateException;
    import javax.ejb.EntityContext;
    public class DataFileEJB implements EntityBean {
        public int model_num;
        public String file_type;
        public String file_name;
        public int file_lrecl;
        public String file_path;
        public float file_weight;
        private boolean showInfo = true;
        private EntityContext context;
        public DataFilePKey ejbCreate(int modelNum, String fileType, String fileName,
                int fileLrecl, String filePath, float fileWeight) throws CreateException {
            if (modelNum <= 0 || fileType == null) {
                throw new CreateException("DataFileEJB: model_num and file_type are required to create data_file row.");
            }
            model_num = modelNum;
            file_type = fileType;
            file_name = fileName;
            file_lrecl = fileLrecl;
            file_path = filePath;
            file_weight = fileWeight;
            if (showInfo) {
                System.out.println("\n>>>>>>>>>>>> DataFileEJB <<<<<<<<<<\n" +
                    " model_num: " + model_num + "\n" +
                    " file_type: " + file_type + "\n" +
                    " file_name: " + file_name + "\n" +
                    " file_lrecl: " + file_lrecl + "\n" +
                    " file_path: " + file_path + "\n" +
                    "file_weight: " + file_weight + "\n" +
                    "=======================================");
            }
            return null; // container-managed persistence: the container builds the key
        }
        public int getModelNum() { return model_num; }
        public String getFileType() { return file_type; }
        public String getFileName() { return file_name; }
        public int getFileLrecl() { return file_lrecl; }
        public String getFilePath() { return file_path; }
        public float getFileWeight() { return file_weight; }
        public void setModelNum(int modelNum) { model_num = modelNum; }
        public void setFileType(String fileType) { file_type = fileType; }
        public void setFileName(String fileName) { file_name = fileName; }
        public void setFileLrecl(int fileLrecl) { file_lrecl = fileLrecl; }
        public void setFilePath(String filePath) { file_path = filePath; }
        public void setFileWeight(float fileWeight) { file_weight = fileWeight; }
        public void setEntityContext(EntityContext context) { this.context = context; }
        public void ejbActivate() {
            DataFilePKey pkey = (DataFilePKey) context.getPrimaryKey();
        }
        public void ejbPassivate() {
            model_num = 0;
            file_type = null;
        }
        public void ejbRemove() { }
        public void ejbLoad() { }
        public void ejbStore() { }
        public void unsetEntityContext() { context = null; }
        public void ejbPostCreate(int modelNum, String fileType, String fileName,
                int fileLrecl, String filePath, float fileWeight) { }
    }
    /**
     * Title: DataFilePKey
     * Description: Primary key class, CMT entity ejb
     * Copyright: Copyright (c) 2001
     * Company:
     * @author Nabil Khalil
     * @version 1.0
     * @since JDK1.3, J2SDKEE1.2.1
     */
    package com.equifax.fms.ejbs.datafile;
    import java.io.Serializable;
    public class DataFilePKey implements Serializable {
        public int model_num;
        public String file_type;
        public DataFilePKey() { }
        public DataFilePKey(int modelNum, String fileType) {
            model_num = modelNum;
            file_type = fileType;
        }
        public int getModelNum() { return model_num; }
        public String getFileType() { return file_type; }
        public void setModelNum(int modelNum) { model_num = modelNum; }
        public void setStep(String fileType) { file_type = fileType; }
        public boolean equals(Object other) {
            if (other instanceof DataFilePKey) {
                return (model_num == ((DataFilePKey) other).model_num) &&
                       file_type.equals(((DataFilePKey) other).file_type);
            }
            return false;
        }
        public int hashCode() {
            StringBuffer sb = new StringBuffer();
            sb.append(model_num);
            sb.append(file_type);
            String st = sb.toString();
            return st.hashCode();
        }
    }
    Thanks
    Nabil

  • Write/open error

    Dear Colleagues,
    I have Oracle 7.3.4 with Sco Open Server.
    Here is a record from alert.log:
    KCF: write/open error dba=0x2800bd49 block=0xbd49 online=1
    file=5 /usr2/oradata/bb/users.ora
    error=7374 txt: 'Additional information: 48457'
    Maybe somebody knows what's going on?

    Does your datafile extend through autoextend with multiple db_writers?
    If yes, it looks like a SCO-specific issue
    (Bug# 916018). You can use a workaround:
    disable "autoextend"
    or
    we suggest that you first upgrade to 7.3.4.4
    and then apply the fix for this bug.
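    A sketch of the first workaround, using the file name reported in the alert.log above:
    ALTER DATABASE DATAFILE '/usr2/oradata/bb/users.ora' AUTOEXTEND OFF;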

  • ORA-1653: unable to extend table - but enough space for datafile

    We encountered this problem in one of our databases, Oracle Database 10g Release 10.2.0.4.0.
    We have all datafiles in all tablespaces specified with MAXSIZE and AUTOEXTEND ON, but last week the database could not extend a table:
    Wed Dec  8 18:25:04 2013
    ORA-1653: unable to extend table PCS.T0102 by 128 in tablespace PCS_DATA
    ORA-1653: unable to extend table PCS.T0102 by 8192 in tablespace PCS_DATA
    Wed Dec  8 18:25:04 2013
    ORA-1653: unable to extend table PCS.T0102 by 128 in tablespace PCS_DATA
    ORA-1653: unable to extend table PCS.T0102 by 8192 in tablespace PCS_DATA
    Wed Dec  8 18:25:04 2013
    ORA-1653: unable to extend table PCS.T0102 by 128 in tablespace PCS_DATA
    ORA-1653: unable to extend table PCS.T0102 by 8192 in tablespace PCS_DATA
    Datafile was created as ... DATAFILE '/u01/oradata/PCSDB/PCS_DATA01.DBF' AUTOEXTEND ON  NEXT 50M MAXSIZE 31744M
    Datafile PCS_DATA01.DBF was only 1 GB in size. The maximum size is 31 GB, but the database did not extend this datafile.
    As a temporary solution we added a new datafile to the same tablespace. After that the database and our application started to work correctly.
    There is enough free space for the database datafiles.
    Do you have some ideas about where our problem could be and what we should check?
    Thanks

    ShivendraNarainNirala wrote:
    Hi,
    Here I am sharing one example.
    SQL> select owner,table_name,blocks,num_rows,avg_row_len,round(((blocks*8/1024)),2)||'MB' "TOTAL_SIZE",
      2   round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
      3   round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE"
      4   from dba_tables where owner in('DWH_SCHEMA1','RM_SCHEMA_DDB','RM_SCHEMA') and round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) > 10 ORDER BY FRAGMENTED_SPACE;
    OWNER           TABLE_NAME                        BLOCKS   NUM_ROWS AVG_ROW_LEN TOTAL_SIZE           ACTUAL_SIZE          FRAGMENTED_SPACE
    DWH_SCHEMA1     FP_DATA_WLS                        14950     168507          25 116.8MB              4.02Mb               112.78MB
    SQL> select tablespace_name from dba_segments where segment_name='FP_DATA_WLS' and owner='DWH_SCHEMA1';
    TABLESPACE_NAME
    DWH_TX_DWH_DATA
    SELECT /* + RULE */  df.tablespace_name "Tablespace",
           df.bytes / (1024 * 1024) "Size (MB)",
           SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
           Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
           Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
      FROM dba_free_space fs,
           (SELECT tablespace_name,SUM(bytes) bytes
              FROM dba_data_files
             GROUP BY tablespace_name) df
    WHERE fs.tablespace_name   = df.tablespace_name
    GROUP BY df.tablespace_name,df.bytes
    UNION ALL
    SELECT /* + RULE */ df.tablespace_name tspace,
           fs.bytes / (1024 * 1024),
           SUM(df.bytes_free) / (1024 * 1024),
           Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
           Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
      FROM dba_temp_files fs,
           (SELECT tablespace_name,bytes_free,bytes_used
              FROM v$temp_space_header
             GROUP BY tablespace_name,bytes_free,bytes_used) df
    WHERE fs.tablespace_name   = df.tablespace_name
    GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
    ORDER BY 4 DESC;
    set lines 1000
    col FILE_NAME format a60
    SELECT SUBSTR (df.NAME, 1, 60) file_name, df.bytes / 1024 / 1024 allocated_mb,
    ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
    used_mb,
    NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
    FROM v$datafile df, dba_free_space dfs
    WHERE df.file# = dfs.file_id(+)
    GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
    ORDER BY file_name;
    Tablespace                      Size (MB)  Free (MB)     % Free     % Used
    DWH_TX_DWH_DATA                     11456       8298         72         28
    FILE_NAME                                                    ALLOCATED_MB    USED_MB FREE_SPACE_MB
    /data1/FPDIAV1B/dwh_tx_dwh_data1.dbf                                 1216       1216             0
    /data1/FPDIAV1B/dwh_tx_dwh_data2.dbf                                10240       1942          8298
    SQL> alter database datafile '/data1/FPDIAV1B/dwh_tx_dwh_data2.dbf' resize 5G;
    alter database datafile '/data1/FPDIAV1B/dwh_tx_dwh_data2.dbf' resize 5G
    ERROR at line 1:
    ORA-03297: file contains used data beyond requested RESIZE value
    Although we did move the tables into another tablespace, it doesn't resolve the problem unless we take an export, drop the tablespace, and import it again. We also used the space advisor, but in vain.
    As far as metrics and measurement are concerned, in my experience it is based on blocks, which are sparse in nature, related to the HWM in the tablespace.
    When it comes to partitions, just moving partitions to remove fragmentation doesn't help.
    Apart from that, much has been written about it by Oracle gurus like you.
    warm regards
    Shivendra Narain Nirala
    How does free space differ from fragmented space?
    Is all free space considered by you to be fragmented?
    "num_rows*avg_row_len" provides a useful result only if statistics are current and accurate.

  • Unable to extend datafile which is autoextend on.

    Hi All,
    I am facing interesting problem in
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    SunOS XYZ 5.10 Generic_138888-07 sun4u sparc SUNW,Sun-Fire-880
    The datafile is autoextend on and enough free space is available in the mount point on the OS, but the datafile is unable to extend.
    Below are the details:
    select file_name,AUTOEXTENSIBLE from dba_data_files where tablespace_name='I_20090414_4';
    FILE_NAME                                                                                                      AUTOEXTENSIBLE
    /mnt/tfmdtwmna01/apps3/oradata/wmsgivn/i_20090414_4.dbf                                   YES
    SQL> !df -h /mnt/tfmdtwmna01/apps3/
    Filesystem             size   used  avail capacity  Mounted on
    ABCDEF.cda.com:/vol/vol3/apps3
                           315G   201G   114G    64%    /mnt/tfmdtwmna01/apps3
    Did anyone face this type of issue? I think this database is hitting some bug.
    Some inputs:
    OS was recently upgraded to Solaris 5.10 and Database was also recently upgraded from 9.2.0.7 to 9.2.0.8
    Please help!!!!!
    -Yasser
    Edited by: YasserRACDBA on Apr 15, 2009 6:18 PM

    SQL> sho parameter block
    NAME                                 TYPE        VALUE
    db_block_buffers                     integer     0
    db_block_checking                    string      FALSE
    db_block_checksum                    boolean     TRUE
    db_block_size                        integer     8192
    db_file_multiblock_read_count        integer     16
    -Yasser
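    The block size is the relevant clue here (a hedged note): a smallfile datafile is capped at 4194303 blocks, which with db_block_size = 8192 is roughly 32 GB, regardless of AUTOEXTEND or free space in the mount point. A sketch to check how close the file is, using the file name from the post:
    SELECT file_name,
           blocks,      -- compare against the 4194303-block per-file cap
           maxblocks
    FROM   dba_data_files
    WHERE  file_name = '/mnt/tfmdtwmna01/apps3/oradata/wmsgivn/i_20090414_4.dbf';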

  • Extend datafile in raw device

    Hi All,
    Need some help. Our database resides on raw devices; the datafiles currently reside in different logical volumes on an HP-UX 11.11 server. Our database is Oracle9i.
    There is one datafile of size 4G, not autoextensible, residing in a logical volume with a physical size of 4G. If I extend the logical volume from 4G to 6G, can I just resize the datafile to 6G?
    Or do I need to create a new datafile because the current file is already big?

    With raw devices, the trade-off for their performance is management difficulty like this; from 10g onwards, ASM is a big shift - http://www.dbazine.com/olc/olc-articles/still5
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm#i1021337
    Best regards.
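    If the logical volume has already been extended to 6G, resizing the existing datafile should generally work; a sketch (the raw device path is hypothetical, and sizing slightly under the volume, e.g. 6000M, leaves a safety margin):
    ALTER DATABASE DATAFILE '/dev/vg01/rlvol_data01' RESIZE 6000M;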

  • Can we find out the date/time when the datafile last extended?

    Hello all,
    Can we find out the date/time when the datafile last extended?
    Is it possible to find out from the alert.log file?
    Correct me if I am wrong.
    Thanks in advance
    Himanshu

    In continuation with the earlier post, can you tell me what sort of entry should be searched for, if the above information is available in the alert.log file?
    Thanks
    Himanshu

  • About automatically extend datafile!

    HI All Experts,
     What are the advantages and disadvantages of using autoextend on datafiles? Any suggestion is welcome!

    I agree with forbrich; I also do not like using the autoextend option. In cases where you are obligated to use it (i.e. a customer requirement), I would advise you to use the MAXSIZE option, which limits the size to which a file can autoextend. You should also choose reasonable datafile sizes. You have to remember data availability: if you place all your data in one datafile and that file is corrupted, your whole application will be unavailable during the restore of, for example, a 32 GB datafile. If you divide your data wisely between tablespaces, then in the case of one datafile corruption many parts of your application can remain available to end users.
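    A minimal sketch of the MAXSIZE clause (the path and sizes here are made up for illustration):
    ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf'
        AUTOEXTEND ON NEXT 100M MAXSIZE 4G;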
    Best Regards
    Krystian Zieja / mob

  • Turns An Existing Fixed Size Datafile To Auto Extendable

    Hi
    I would like a SQL statement that turns an existing fixed-size datafile into an autoextensible one.
    Wishes
    Jawad

    An example from my test DB :
    SYS@db102 SQL> alter database datafile '/home/ora102/oradata/db102/test01.dbf'
      2  autoextend on next 5M maxsize 100M;
    Database altered.
    SYS@db102 SQL>                                                                                     

  • How datafile being extended over different mount point automatically

    Hello,
    I would like to understand the following: I have 20 datafiles created over 2 mount points, all set up with an autoextend increment of 1GB and a maxsize of 10GB. None of the files are at max size yet.
    10 datafiles at /mountpoint1 with free space of 50GB
    10 datafiles at /mountpoint2 with free space of 200MB
    Since mountpoint2 has absolutely no space for autoextend, will Oracle keep extending the datafiles at mountpoint1 until each hits its maxsize?
    Will the files that cannot be extended on mountpoint2 cause any issue?

    Girish Sharma wrote:
    In general, extents are allocated in a round-robin fashion
    Not necessarily true. I used to believe that, and even published a 'proof demo'. But then someone (it may have been Jonathan Lewis) pointed out that there were other variables I didn't control for that can cause Oracle to completely fill one file before moving to the next. Sorry, I don't have a link to that conversation, but it occurred in this forum, probably some time in 2007-2008.
    Ed,
    I guess you are looking for the below thread(s)... ?
    Re: tablespaces or datafile
    or
    Re: tablespace with multiple files , how is space consumed?
    Regards
    Girish Sharma
    Yes, but even those weren't the first 'publication' of my test results; as you see in those threads, I refer to an earlier demo. That may have been on Usenet in comp.databases.oracle.server.

  • Extended datafile

    Dear Team,
    After restoring my test server, when I extend a datafile it gives the below error message:
    Tablespace extension main menu
    1 = Extend tablespace
    2 - Show tablespaces
    3 - Show data files
    4 - Show disk volumes
    5 * Exit program
    6 - Reset program status
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    2
    BR0280I BRSPACE time stamp: 2010-08-11 20.57.50
    BR0663I Your choice: '2'
    BR0280I BRSPACE time stamp: 2010-08-11 20.57.51
    BR0301E SQL error -1157 at location BrTspListGet-15, SQL statement:
    'OPEN curs_36 CURSOR FOR'
    'SELECT TABLESPACE_NAME, BYTES FROM DBA_FREE_SPACE UNION ALL SELECT TABLESPACE_NAME, BYTES_FREE + BYTES_USED FROM V$TEMP_SPACE_HEADER UNION ALL SELECT TABLESPACE_NAME, NVL(BYTES_USED, 0) * -1 FROM GV$TEMP_EXTENT_POOL ORDER BY 1'
    ORA-01157: cannot identify/lock data file 255 - see DBWR trace file
    ORA-01110: data file 255: '/oracle/PRD/sapdata2/temp_1/temp.data1'
    BR0669I Cannot continue due to previous warnings or errors - you can go back to repeat the last action
    BR0280I BRSPACE time stamp: 2010-08-11 20.57.51
    BR0671I Enter 'b[ack]' to go back, 's[top]' to abort:
    Why does it give this message? How can I extend the datafile?

    Dear,
    I have scheduled all housekeeping jobs via SM36.
    I checked the initPRD.ora file; it shows db_files = 254:
    number of processes
    sessions = 1.2 * processes
    processes = 80
    sessions  = 96
    AUDITING AND STATISTICS
    sql_trace=TRUE
    audit_trail = true
    db_block_lru_extended_statistics = 1000
    db_block_lru_statistics = true
    PART III, STATIC PARAMETERS                             #
    DB-NAME
    db_name = PRD
    DB-BLOCKSIZE
    db_block_size = 8192
    DB-FILES
    db_files = 254
    OPTIMIZER MODE
    #optimizer_mode = choose
    #optimizer_search_limit = 3
    PATHS / DESTINATIONS / TRACES
    /oracle/PRD/saptrace/background: trace files of the background
    processes
    /oracle/PRD/saptrace/usertrace:  trace files of the user processes
    log_archive_dest is a destination, not a path.
    The archivefiles get the name
    /oracle/PRD/oraarch/PRDarch<thread#>_<log#>
    background_dump_dest = /oracle/PRD/saptrace/background
    user_dump_dest       = /oracle/PRD/saptrace/usertrace
    core_dump_dest       = /oracle/PRD/saptrace/background
    log_archive_dest     = /oracle/PRD/oraarch/PRDarch
    #log_archive_format  = %t_%s
    Regards
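    A hedged observation: the failing file 255 is a tempfile and db_files = 254 (in this release tempfile numbers are reported as db_files + tempfile number), and after a restore tempfiles are often simply missing rather than damaged. A common fix is to drop and recreate the tempfile; a sketch, where the tablespace name PSAPTEMP and the sizes are assumptions:
    ALTER DATABASE TEMPFILE '/oracle/PRD/sapdata2/temp_1/temp.data1' DROP;
    ALTER TABLESPACE PSAPTEMP ADD TEMPFILE '/oracle/PRD/sapdata2/temp_1/temp.data1'
        SIZE 2000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE 10000M;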

  • Recover datafile that has extended beyond OS limit?

    We have a Linux server running Oracle 8.1.7 and we have a data file (users01.dbf) that had "autoextend" enabled and has now grown past the upper limit of the underlying filesystem. Oracle will not open the file, as the OS returns an error: "Linux Error: 75: Value too large for defined data type"
    Here are some details:
    OS: Redhat 7
    Filesystem: Ext2
    Oracle Ver: 8.1.7
    DBF Size: >2gb
    I have attempted to copy the .dbf files to a new server with little luck. Redhat 10 with Oracle 10 complained that the ctl files were corrupt, so I deleted them and recreated them based on the data files. After that, the DB indicated that the datafiles needed a media recovery, so I performed a "recover datafile..." on each of them successfully. Once I try to open the database, I get a "End of communication channel" error. I cannot mount it in Windows as the blocksizes are different (8192 vs. 4096)
    I am at the end of my ideas on how to recover this file. Any suggestions?
    I have thought of:
    1. Bring a copy of 8.1.7 up on a new Redhat system running a newer FS (Reiser) and try to open the file
    2. Add a new drive to the old server and format that with the new FS, copy the dbf files there and point the DB to those files
    Any tools that I try to run on the old server all hit the roadblock of the Linux:75 error.
    Thanks for any ideas that come to mind.

    Can you give more detail about how you tried to copy the database files?
    First, I assume you are copying with the database down right?
    I would not change versions of Oracle during the copy, only the OS.
    SuSE Linux Enterprise Server 8 will work with your version of Oracle, but I cannot remember offhand what its OS file size limit is.
    I would research either Suse 9 or Red Hat 9 and see if they work with your version of Oracle and support larger file limits.
    This site appears not to have info on Oracle 8 http://www.puschitz.com/
    However, he has had it in the past and he might provide some help.
    I like the copy idea, but I think you might lose some data.
    I'm thinking you will try this once you iron out the OS issue:
    Start the database in mount mode:
    SQLPLUS> alter database recover database until cancel using backup controlfile;
    SQLPLUS> alter database recover cancel;
    SQLPLUS> alter database open resetlogs;
    I wish you luck and no loss of data!

  • Problem in extending datafile

    How do I resize a datafile while the database is in the mount state?
    Edited by: user11345217 on Mar 24, 2010 3:41 PM

    Hi,
    Can you please try the below command? As Asif said above, ALTER DATABASE and ALTER TABLESPACE will only work when your database is open, so you have to open your database first.
    SQL> ALTER DATABASE DATAFILE 'F:\oradata\live\Mydb02.ora' RESIZE 500m;
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com/
    Edited by: Rafi (Oracle DBA) on Mar 24, 2010 3:16 AM

  • Error While addding a new datafile

    Dear All,
    We have the tablespace PSAPUSER1D in one of our production systems, which is 100% full. I am trying to extend the tablespace using brtools, but I am unable to do it and it is giving the following error.
    BR0280I BRSPACE time stamp: 2007-11-05 06.55.50
    BR0657I Input menu 303 - please check/enter input values
    Options for extension of tablespace PSAPUSER1D (1. file)
    1 * Last added file name (lastfile) ....... [/oracle/P01/sapdata1/psapuser1d_1.dbf]
    2 * Last added file size in MB (lastsize) . [273]
    3 - New file to be added (file) ........... [/oracle/P01/sapdata2/psapuser1d_1.dbf]
    4 ~ Raw disk / link target (rawlink) ...... []
    5 - Size of the new file in MB (size) ..... [273]
    6 - File autoextend mode (autoextend) ..... [yes]
    7 - Maximum file size in MB (maxsize) ..... [0]
    8 - File increment size in MB (incrsize) .. [20]
    9 - SQL command (command) ................. [alter tablespace PSAPUSER1D add datafile '/oracle/P01/sapdata2/psapuser1d_1.dbf' size 273M autoextend on next 20M maxsize unlimited]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    c
    BR0280I BRSPACE time stamp: 2007-11-05 06.55.56
    BR0663I Your choice: 'c'
    BR0259I Program execution will be continued...
    BR1052W File psapuser1d_1.dbf is already used by the database
    BR1055E Database file /oracle/P01/sapdata2/psapuser1d_1.dbf must be located in a subdirectory of 'sapdata' directory
    BR0669I Cannot continue due to previous warnings or errors - you can go back to repeat the last action
    BR0280I BRSPACE time stamp: 2007-11-05 06.55.56
    BR0671I Enter 'b[ack]' to go back, 's[top]' to abort:
    so please kindly give any suggestions.

    Dear Llanes,
    A strange thing that I notice in one of our systems is that we don't have the standard directory structure like this
    "/oracle/P01/sapdata1/usr_1/psapuser1d_1.dbf",
    but we have the following structure
    $ ls -l /oracle/P01/sapdata1
    total 168573168
    drwxr-xr-x   2 orap01   dba           96 Apr 21  2006 cntrl
    drwxr-xr-x   2 orap01   dba           96 Jul  7 21:52 erp_1
    -rw-r-----   1 orap01   dba      286408704 Nov  5 08:43 psapuser1d_1.dbf
    -rw-r-----   1 orap01   dba      268451840 Nov  5 08:43 psapuser1i_1.dbf
    -rw-r-----   1 orap01   dba      838868992 Nov  5 08:55 system_1.dbf
    The datafiles have been added directly to the sapdata1 directory, without the corresponding subdirectory. So how can we proceed in this scenario?
    Regards
    Balaji.P
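    If BRSPACE insists on a 'sapdata' subdirectory, one option is to create the subdirectory and add the file there with plain SQL; a sketch, where the subdirectory usr1d_2 and the new file name are assumptions (note BR1052W above: psapuser1d_1.dbf is already in use, so a new name is needed in any case):
    ALTER TABLESPACE PSAPUSER1D ADD DATAFILE
        '/oracle/P01/sapdata2/usr1d_2/psapuser1d_2.dbf' SIZE 273M
        AUTOEXTEND ON NEXT 20M MAXSIZE UNLIMITED;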

  • Unable to extend table

    Hi All,
    I have Oracle 8i installed on Redhat 7.3
    I am trying to import from a large dmp file (10 GB). Initially I was getting an error related to "unable to extend datafile", which I got over by creating multiple datafiles of 2 GB each. Thanks to the help provided on this list.
    But now I ran into another error that reads as following:
    IMP-00058: ORACLE error 1653 encountered
    ORA-01653: unable to extend table XXX.XXX_DAT by 311072 in tablespace YYY
    IMP-00017: following statement failed with ORACLE error 1031:
    "CREATE TRIGGER "USERME".XXX_BI before insert"
    I have created enough datafiles and I have 60 GB of available space on the drive. Please guide me as to what I am doing wrong here, and how I could get over this problem.
    Thanks.
    Amit

    Hi Joel,
    Thanks for your time.
    I have already done that part before I started import. The user has unlimited quota on the tablespace. But the problem still shows up.
    By the way, I have multiple dmp files, I could import the first dmp file, but this error shows up while doing the import for the next dmp file, which should just append the data to the existing table. I am sorry, I should have mentioned this before.
    Please advise.
    -Amit
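    A sketch to double-check both the quota and the remaining space before re-running the import ('YYY' is the placeholder tablespace name from the error message, and USERME is the schema shown in the failing statement):
    SELECT tablespace_name,
           bytes / 1024 / 1024 AS used_mb,
           DECODE(max_bytes, -1, 'UNLIMITED', TO_CHAR(max_bytes / 1024 / 1024)) AS quota_mb
    FROM   dba_ts_quotas
    WHERE  username = 'USERME';
    SELECT SUM(bytes) / 1024 / 1024 AS free_mb
    FROM   dba_free_space
    WHERE  tablespace_name = 'YYY';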
