Re-org tablespace

Hi guys,
My tablespace is fragmented, so I'm trying to defragment it by issuing this command:
Alter tablespace TBS_NAME coalesce;
But after I ran it and refreshed the Tablespace Map, I don't see any changes. Does one have to run it more than once to see the changes?
Thanks.

I doubt that it is possible. I don't see any attachment option.
What is your issue? Why do you want to reorganise the tablespace?
How big is the tablespace? If it is locally managed, you don't need to.
- We are experiencing performance issues on the application side. The team that did an investigation suggested that the problem was caused by fragmentation of the tablespace. (I did an analysis on the tablespace but no space management issues were detected.)
- The tablespace is locally managed and is about 10G.
- I've got free blocks scattered all over; even when I coalesce free extents I don't see them joining to form one big chunk.
Thanks.
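A quick way to check whether free extents actually merged is to look at dba_free_space before and after the coalesce. A generic sketch, with TBS_NAME as a placeholder for the tablespace in question:

```sql
-- Number of free extents and the largest contiguous free chunk, in MB.
SELECT COUNT(*)                 AS free_extents,
       MAX(bytes) / 1024 / 1024 AS largest_free_mb,
       SUM(bytes) / 1024 / 1024 AS total_free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'TBS_NAME';

ALTER TABLESPACE tbs_name COALESCE;
```

Note that in a locally managed tablespace free space is tracked in a bitmap, so COALESCE has nothing to merge there, which would explain seeing no change in the map.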

Similar Messages

  • Re-org tablespace with BLOBs

    Hi,
    I want to do a reorg of a tablespace which has tables with BLOBs in it. It is a 9i database and the size of this tablespace is around 1000 GB. The records are being extracted out of this table, and at present the internal space used by the LOBs, as queried by dbms_lob.getlength, is 250 GB.
    I know that it has to be done only by exp/imp and it is going to take a long time.
    1. What should be the optimum internal space used (which is 250 GB now) at which I should think of starting the reorg? This cannot come below 150 GB.
    2. How long will the reorg take?
    3. Is there any method or baseline which will indicate the approximate time required?
    Any experience, input at this forum will greatly help me.
    Thanks
    KH

    Do you really have a single 1TB tablespace?
    Yes.
    Respect!! Out of interest, how many datafiles is that?
    The number of datafiles is:
    COUNT(FILE_NAME)
    524
    On what OS?
    The OS is:
    SunOS srvrname 5.9 Generic_118558-06 sun4u sparc SUNW,Sun-Fire-V490
    Incidentally, why are you having to reorganise this
    tablespace? I think I can guess but I'd like to know
    for sure.
    This tablespace has grown to 1000 + Gig. When it was designed there
    was no plan to purge/delete older records. Now the purge job is going
    on to control the size of the database. And records are being
    extracted and stored in an archival database. Thus the internal space used by the LOBs in that tablespace (I queried with dbms_lob.getlength) has reduced to 250 Gig. But the tablespace is still holding the 1000+ Gigs of space, and I know a re-org is the only way to release it.
    The option I am left with is export and then import, which will surely take weeks, maybe a month. One of my colleagues' experience is that a 10 Gig re-org took 72 hours, and senior management is of the same opinion.
    Is there any formula to calculate how long this will take?
    Thanks APC
    KH
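    For reference, the "internal space used" figure quoted above (250 GB) can be computed per LOB column with dbms_lob.getlength. A hedged sketch, where my_table and my_lob are placeholder names, not from the original post:

    ```sql
    -- Sum the logical LOB data size in GB (bytes actually stored in the LOBs,
    -- as opposed to the space still allocated to the segment/tablespace).
    SELECT SUM(DBMS_LOB.GETLENGTH(my_lob)) / 1024 / 1024 / 1024 AS lob_gb
    FROM   my_table;
    ```

    The gap between this figure and the allocated size in dba_segments is roughly the space a re-org could reclaim.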

  • I want to re-org my tablespace (don't want to use export/import)

    Hello Friends,
    Please suggest how to do tablespace re-org without using import / export method
    Thanks
    Krishna

    8f114e08-97f8-4f5d-b50f-3684247f58e9 wrote:
    Hello Friends,
    Please suggest how to do tablespace re-org without using import / export method
    Thanks
    Krishna
    What do you mean by "re-org"?
    Why do you think a "re-org" is necessary or beneficial?
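    For reference, the usual non-exp/imp approach, assuming ordinary heap tables (my_table, my_index and new_ts are placeholder names), is to move each segment and rebuild its indexes:

    ```sql
    -- Rewrites the table compactly in the target tablespace;
    -- DML on the table is blocked while the move runs.
    ALTER TABLE my_table MOVE TABLESPACE new_ts;

    -- A move changes rowids, so dependent indexes become UNUSABLE
    -- and must be rebuilt afterwards.
    ALTER INDEX my_index REBUILD TABLESPACE new_ts;
    ```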

  • I re-orged the PORTAL tablespace from USERS into XYZ and now Portal doesn't work

    Once I installed portal30 (3.09.x), I chose USERS as the tablespace for the PORTAL30 schema. I created two new tablespaces, PTL_DATA and PTL_INDEX. I then wrote a script to move all tables belonging to PORTAL30 and PORTAL30_SSO to the newly created tablespaces. Portal now DOES NOT work. I can go ahead and put it back, but has anyone experienced something like this? The error I am getting after the re-org is as follows:
    Call to WPG_SESSION API Failed.
    Error-Code:1422
    Error TimeStamp:Wed, 24 Apr 2002 16:44:43 GMT
    Database Log In Failed
    TNS is unable to connect to destination. Invalid TNS address supplied or destination is not listening. This error can also occur because of underlying network transport problems.
    Verify that the TNS name in the connectstring entry of the DAD for this URL is valid and the database listener is running.
    I cannot find error code 1422 anywhere in Metalink. I can connect to sqlplus through the command line. I have bounced Oracle and Apache.
    Any help is appreciated

    By "doing the reset button thing" you likely restored the Airport Express to its default settings - which includes enabling its built-in router. When that happens, you lose access to network resources on the home LAN that the Airport Express is also connected to.
    To fix this problem - run the Airport Admin Utility. Select your Airport Express, click to Configure. Click on the Network tab. Uncheck the setting to "Distribute IP addresses". Update settings to the Airport Express. Finally, restart all wireless computers. That should solve your problem.

  • Tablespace Re-Org

    Hello SAP Gurus,
    Can you please tell me all the different ways to do tablespace re-org? Some sort of documentation related to the same would be very helpful.
    Thank you.
    Venu.

    SAP note 646681 is also excellent if you plan on doing online reorgs.

  • How to exclude recycling bin from tablespace usage calculations

    I have a generic query to tell me how much of a given tablespace is used, and I want to exclude objects in the recycle bin.
    In dba_segments these segments all have names that begin with BIN$.
    This is not the case in dba_extents, which is where the calculations come from. How do I exclude objects in the recycle bin? Basically, how do I exclude segments in dba_extents that are in the recycle bin?
    select f.tablespace_name, a.total,
           u.used, f.free,
           round((u.used/a.total)*100) "% used",
           round((f.free/a.total)*100) "% Free",
           round(((0.10*u.used)-f.free)/0.9) "10%",
           round(((0.15*u.used)-f.free)/0.85) "15%",
           round(((0.20*u.used)-f.free)/0.8) "20%",
           round(((0.25*u.used)-f.free)/0.75) "25%"
    from
    (select tablespace_name, sum(bytes/(1024*1024)) total from dba_data_files group by tablespace_name) a,
    (select tablespace_name, round(sum(bytes/(1024*1024))) used from dba_extents group by tablespace_name) u,
    (select tablespace_name, round(sum(bytes/(1024*1024))) free from dba_free_space group by tablespace_name) f
    where a.tablespace_name = f.tablespace_name
    and a.tablespace_name = u.tablespace_name
    and a.tablespace_name = TRIM(UPPER('&&TS_NAME'))
    /

    You could join the two views or you could use dba_segments rather than extents in your query or you could use the built-in functionality rather than writing your own.
    http://www.morganslibrary.org/reference/dbms_space.html
    DBMS_SPACE.FREE_BLOCKS
    DBMS_SPACE.UNUSED_SPACE
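    Following the suggestion to join the two views, a hedged sketch of the "used" inline view rewritten so that dba_extents rows whose segment sits in the recycle bin are excluded:

    ```sql
    -- Used space per tablespace, skipping recycle-bin segments (names like BIN$...).
    SELECT e.tablespace_name,
           ROUND(SUM(e.bytes) / (1024 * 1024)) AS used_mb
    FROM   dba_extents  e
           JOIN dba_segments s
             ON  s.owner        = e.owner
             AND s.segment_name = e.segment_name
    WHERE  s.segment_name NOT LIKE 'BIN$%'
    GROUP  BY e.tablespace_name;
    ```

    This can be dropped in as the "u" inline view of the query above.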

  • LOBs tablespace issue

    Hello everyone!!!
    I'm using Oracle XE on my own XP machine (which has a cluster size of 4K) for testing, and I've created a tablespace for LOBs with these settings:
    CREATE TABLESPACE lobs_tablespace
    DATAFILE '.....lobs_datafile.dbf'
    SIZE 20M
    AUTOEXTEND ON
    NEXT 20M
    MAXSIZE UNLIMITED
    EXTENT MANAGEMENT LOCAL
    BLOCKSIZE 16K;
    so I created a test table with a LOB field:
    CREATE TABLE test_table (
    ID NUMBER,
    FIELD1 VARCHAR2(50),
    FIELD2 BLOB,
    CONSTRAINT......
    )
    LOB (FIELD2) STORE AS LOB_FIELD2 (
    TABLESPACE lobs_tablespace
    STORAGE (INITIAL 5M NEXT 5M PCTINCREASE 0 MAXEXTENTS 99)
    CHUNK 16384
    NOCACHE NOLOGGING
    INDEX LOB_FIELD2_IDX (
    TABLESPACE lobs_tablespace_idxs));
    where lobs_tablespace_idxs is created with blocksize of 16K
    so at this point, because I'm doing some tests on functions, I tried to insert into this table with:
    FOR i IN 1..10000 LOOP
    fn_insert_into_table('description', 'filename');
    END LOOP;
    trying to insert a Word file of almost 5 MB, and the datafile lobs_datafile.dbf increased from its starting 50M to almost 5 GB...
    I have some parameters settled as:
    db_16K_cache_size=1028576
    db_block_checking = false
    undo_management = auto
    db_block_size = 8192
    sga_target = 128M
    sga_max_size = 256M
    so the question is: doing some calculus, 5 MB per file * 10000 should be at most 60 MB... not 5 GB... so why did the datafile increase as much as it did? Is there something else I've missed that I should check?
    Thanks a lot to everyone! :-)

    Hi,
    I'm guessing that you'll need to do a bit of a re-org in order to free up the space.
    You may well be able to do that just at the LOB level, rather than rebuilding the entire table.
    There's stuff about that in Chapter 3 of the 10G App Developers Guide: Large Objects.
    Of course, if the table is now empty, then you might as well just drop it and recreate it.
    After that, you should be able to resize the datafile.
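    A hedged sketch of that suggestion, reusing the names from the post (the RESIZE target of 100M is just an illustrative value; pick one that fits your remaining data):

    ```sql
    -- Rebuild only the LOB segment; this compacts it and frees the old extents.
    ALTER TABLE test_table MOVE LOB (FIELD2)
      STORE AS (TABLESPACE lobs_tablespace);

    -- Once the high-water mark has dropped, the datafile can be shrunk.
    ALTER DATABASE DATAFILE '.....lobs_datafile.dbf' RESIZE 100M;
    ```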

  • How do we know when datafile was added to a tablespace

    Hi experts,
    I want to know how to find out when a datafile was added to a tablespace.
    Actually I need to monitor how much space was used in a month for a tablespace.
    Thanks in advance

    Which could be turned into an external table and queried at will:
    Demo of the alert log as a table at:
    http://www.psoug.org/reference/externaltab.html
    Just a quick note with respect to background_dump_dest for anyone starting into 11g. It is deprecated and BDUMP and UDUMP no longer exist: Read the docs.
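    Besides the alert log, the data dictionary records this directly: V$DATAFILE carries a CREATION_TIME column. A minimal sketch, with TBS_NAME as a placeholder:

    ```sql
    -- When each datafile of a tablespace was created (i.e. added).
    SELECT d.name, d.creation_time, d.bytes / 1024 / 1024 AS size_mb
    FROM   v$datafile d
           JOIN v$tablespace t ON t.ts# = d.ts#
    WHERE  t.name = 'TBS_NAME'
    ORDER  BY d.creation_time;
    ```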

  • Table re-org using OEM - space was not released

    All,
    I'm in the process of doing an online table re-org using the OEM Segment Advisor.
    But even after I re-organized the table it is showing the same space, which means the recommended space was not reclaimed.
    Does anyone have an idea about table re-org using OEM?
    Version : 10.2.0.3
    OEM Version : 10.2.0.5

    select 'alter table '||table_name||' move tablespace YOUR_TS'||chr(10)||
    'LOB ('||column_name||') store as '||segment_name||chr(10)||
    '(tablespace YOUR_TS);'
    from user_lobs
    /

  • Degree of parallelism and number of files in tablespace

    Hi,
    I am trying to find out the relationship between the number of files in a tablespace and the degree of parallelism. Does the number of files in a tablespace affect the DOP of a parallel query? Because if there are more files in the tablespace, the I/O capacity of the system has increased and the system has become more favourable for parallel query.
    However, when I looked into the formulas for calculating DOP I didn't find any parameter which specifies how many files are in a tablespace. Please give me the formula for calculating DOP in Oracle.
    regards
    Nick

    Maurice Muller wrote:
    Did you run this test on an Exadata Storage Server? How much IO throughput did you have per process?
    No. That one is from a RAC cluster of DL580 G5s. There were 7 in that query, but I had 8 total. One was down at the time due to hardware failure.
    The >500 DOP is from a RAC cluster of 16 DL580s (so DOP=512) to be exact.
    The amount of I/O you get per slave (or talking aggregate) is dependent on what the query execution plan is. Simple things like select count(*) are very I/O intensive, but not very CPU/memory intensive. The blocks are read, counted and then discarded. A group by or hash join will be more CPU intensive and less I/O intensive. Over the course of a given query execution the use of resources will generally alternate. Heavy on I/O at first, then more CPU heavy for a hash join, etc.
    How much IO throughput do you recommend per CPU (core) for an Oracle DWH server?
    As a rule of thumb I size systems to have around 100MB/s of physical I/O throughput per CPU core. So for a four socket quad core DL580 G5 (16 cores) target 1600MB/s, which works out nicely to four 4Gbps FCP ports. Do note if you are using compression you will be delivering a logical I/O rate higher than the physical based on the compression ratio for the data.
    With Exadata, things are a little different. Since the CPUs that do the physical I/O are not at the DB layer, no wait I/O shows up in the metrics. The db blocks are transferred to the host using iDB protocol, not FCP like most external storage.
    For instance, here is a screen capture of a HP Oracle Database Machine running a single query that uses PQ:
    http://structureddata.org/files/life_without_waitio.png
    The problem is that most storage systems are, from my point of view, much too slow compared to the available CPU performance. In most cases people don't realise that the IO throughput required for large parallel queries is not comparable with the one required for an OLTP system having >90% cache hit ratio.
    Very true. Also discussed here:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/hardware.htm
    Regards,
    Greg Rahn
    http://structureddata.org

  • Master in Oracle and explicit tablespaces

    Hi All - I'm curious if anyone has installed Tidal 6.1.x without explicitly defining tablespaces.  The DBA's don't define tablespaces as "Oracle has managed tablespaces for the last 4 versions."
    I can update the upd.xml file for the Master database, but I'm wondering about how to handle the ClientManager cache since Tidal builds tables, views and indexes there dynamically.
    Thanks for any feedback!
    Michelle Morris            

    Hi,
    You can extract items without a List Price, but the item will be extracted to iProcurement without a price, and then you won't be able to enter a price while creating and approving a requisition.
    Steps to Tie to ASL and BPA
    1. Create the item in Item Master
    2. Purchasing Attributes setup completed (Also ensure PO Item Category is set)
    3. Create ASL for the item in the ASL screen (Ensure its for child org / it has a site )
    4. Go and create a BPA for the above supplier and site along with the item in the line details
    5. Run Extractor program for classifications and items
    6. Go to iProc and query for the item in Categories -- you should be able to view it with the price as entered in the BPA
    Regards,
    Sanjam

  • Reclaim unused space in tablespace

    Hi,
    I have a tablespace with a size of 150GB. After 2 years the company told me the retention period for our data is only 4 months, and after deleting the data I'm only using 40GB out of the 150GB tablespace where my table is located. My problem is: how can I shrink the size of the tablespace?
    I have tried to resize the datafiles, but with little success: I have only been able to shrink 20GB, so the tablespace is still big (130GB).
    I have the option of shutting down the database for a maximum of 4 to 5 hours. Aside from exp/imp, what other solution can I use?
    Kindly Help.
    Thanks and Best Regards

    Is this a 10g database? You may not need to move to a different tablespace.
    Have a read of the "shrink space" entry on http://www.psoug.org/reference/tables.html
    (also check the manual: http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10759/statements_3001.htm#i2192484)
    That might help -- and note that the segment space management for the tablespace has to be automatic otherwise you'll get an ORA-10635 invalid segment or tablespace type error message.
    Hope that works for you.
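    A hedged sketch of the "shrink space" approach on 10g, with my_table and the datafile path as placeholders (the tablespace must use automatic segment space management, per the ORA-10635 note above):

    ```sql
    -- Shrinking physically relocates rows, so row movement must be enabled.
    ALTER TABLE my_table ENABLE ROW MOVEMENT;

    -- Compact the segment and lower the high-water mark.
    ALTER TABLE my_table SHRINK SPACE;

    -- With the high-water mark lowered, datafiles can be resized downwards.
    ALTER DATABASE DATAFILE '/path/to/datafile01.dbf' RESIZE 40G;
    ```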

  • Xmltype Tablespacing

    Hi,
    I created a table in this way:
    CREATE TABLE PRESENTATION_XML
           (PREF varchar2(10), CODICE varchar2(14),PRESENTATION sys.XMLTYPE)
    xmltype column PRESENTAZIONE
    XMLSCHEMA "http://test.example.org/presentation.xsd"
    element "Project"
    Is there a way to simply change the PRESENTATION column's tablespace after creation of the table?
    By default my tablespace is DATA and I would like to change it to DATALOB; maybe a generic example is better as an explanation of my problem. With a CLOB I could do a simple ALTER TABLE, but with XMLType I get this error:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as xxxxxxx
    SQL> create table prova (numero number(1), testo clob);
    Table created
    SQL> alter table prova move lob (testo) store as (tablespace datalob);
    Table altered
    SQL> create table prova2 (numero number(1), testo sys.XMLTYPE);
    Table created
    SQL> alter table prova2 move lob (testo) store as (tablespace datalob);
    alter table prova2 move lob (testo) store as (tablespace datalob)
    ORA-00904: "TESTO": invalid identifier
    SQL>
    Second question, but same problem: creating the presentation column inside the DATALOB tablespace.
    I have already read "Specifying Storage Models for Structured Storage of Schema-Based XMLType" in the XML DB documentation, but didn't find a solution to my problem.
    I tried to create a schema in this way... obviously with no good results....
    CREATE TABLE PRESENTATION_XML
           (PREF varchar2(10), CODICE varchar2(14),PRESENTATION sys.XMLTYPE)
    xmltype column PRESENTAZIONE
    XMLSCHEMA "http://test.example.org/presentation.xsd"
    element "Project"
            lob (PRESENTAZIONE."XMLDATA")
            store as (tablespace datalob)
    Should I use Unstructured Storage?
    Thanks for your help.
    Stefano

    Does http://www.liberidu.com/blog/?p=56 help?

  • Strange free space in tablespace

    Checking the space usage of my tablespace using these queries http://vsbabu.org/oracle/sect03.html (especially the query named USAGE) shows me that one of my tables is about 100GB big, but 90GB of it is free space?! So only 10% of the space is used. Can I do something with this "empty" space?

    Sure. Below is a listing of all my tablespace files:
    'FILE_NAME' USER_BYTES/1024/1024   BYTES/1024/1024        MAXBYTES/1024/1024     AUTOEXTENSIBLE
    file_name_1    4,9375                 5                      32767,984375           YES           
    file_name_2    32767,875              32767,984375           32767,984375           YES           
    file_name_3    613,125                613,1875               32767,984375           YES           
    file_name_4    5219,9375              5220                   32767,984375           YES           
    file_name_5    32767,875              32767,984375           32767,984375           YES           
    file_name_6    9071,625               9071,6875              32767,984375           YES           
    file_name_7    74,9375                75                     32767,984375           YES           
    file_name_8    32767,875              32767,984375           32767,984375           YES           
    file_name_9    32767,875              32767,984375           32767,984375           YES           
    file_name_10   24,9375                25                     32767,984375           YES           
    file_name_11   32767,875              32767,984375           32767,984375           YES           
    file_name_12   191,75                 191,8125               32767,984375           YES           
    file_name_13   124,9375               125                    32767,984375           YES           
    file_name_14   9669,9375              9670                   32767,984375           YES           
    file_name_15   29851,875              29852                  32767,984375           YES           
    file_name_16   1000,6875              1000,75                32767,984375           YES           
    file_name_17   32767,875              32767,984375           32767,984375           YES           
    file_name_18   32761,9375             32762,0625             32767,984375           YES           
    file_name_19   20031,9375             20032                  32767,984375           YES           
    file_name_20   22171,9375             22172                  32767,984375           YES           
    file_name_21   3999,9375              4000                   0                      NO            
    file_name_22   49,9375                50                     32767,984375           YES           
    22 rows selected
    In this output it doesn't look as bad as in the previous queries, does it?
    Edited by: lesak on Jun 18, 2011 11:29 AM

  • ORA-01653: unable to extend table DEV2_SOAINFRA.AUDIT_TRAIL by 1024 in tablespace

    Hi, I am facing this error when I try to test my composite in the EM Console. Can anyone please let me know what I should do in order to overcome this error? Do I need to delete some records from the AUDIT_TRAIL table under the DEV2_SOAINFRA schema?
    The log message says...
    Caused by: java.sql.SQLException: ORA-01653: unable to extend table DEV2_SOAINFRA.AUDIT_TRAIL by 1024 in tablespace DEV2_SOAINFRA
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:457)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
         at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:889)
         at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:476)
         at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:204)
         at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:540)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1079)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1466)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3752)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3887)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1508)
         at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate(PreparedStatement.java:172)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:788)
         ... 153 more
    Error Code: 1653
    Call: INSERT INTO AUDIT_TRAIL (CIKEY, COUNT_ID, NUM_OF_EVENTS, BLOCK_USIZE, CI_PARTITION_DATE, BLOCK_CSIZE, BLOCK, LOG) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
         bind => [220615, 0, 37, 43588, 2011-06-22 17:27:02.212, 2027, 0, [B@23a79360]
    Query: InsertObjectQuery(com.collaxa.cube.persistence.dto.AuditTrail@74fb23db)
         at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:324)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:797)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:863)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:583)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:526)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeCall(AbstractSession.java:980)
         at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:206)
         at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:192)
         at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:341)
         at org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:162)
         at org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:177)
         at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:465)
         at org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
         at org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
         at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:290)
         at org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
         at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:740)
         at org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:643)
         at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
         at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2908)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1291)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1273)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1233)
         at org.eclipse.persistence.internal.sessions.CommitManager.commitNewObjectsForClassWithChangeSet(CommitManager.java:224)
         at org.eclipse.persistence.internal.sessions.CommitManager.commitAllObjectsForClassWithChangeSet(CommitManager.java:191)
         at org.eclipse.persistence.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:136)
         at org.eclipse.persistence.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:3348)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1422)
         at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitToDatabase(RepeatableWriteUnitOfWork.java:610)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1527)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.issueSQLbeforeCompletion(UnitOfWorkImpl.java:3181)
         at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.issueSQLbeforeCompletion(RepeatableWriteUnitOfWork.java:332)
         at org.eclipse.persistence.transaction.AbstractSynchronizationListener.beforeCompletion(AbstractSynchronizationListener.java:157)
         at org.eclipse.persistence.transaction.JTASynchronizationListener.beforeCompletion(JTASynchronizationListener.java:68)
         at weblogic.transaction.internal.ServerSCInfo.doBeforeCompletion(ServerSCInfo.java:1239)
         at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1214)
         at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:116)
         at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1316)
         at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:2132)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:272)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:239)
         at weblogic.ejb.container.internal.BaseLocalObject.postInvoke1(BaseLocalObject.java:622)
         ... 112 more
    Thanks,
    N

    It appears from the log message that your database is running out of space. Check how much space is left in your database and delete some instances from SOAINFRA.
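    Beyond purging rows, ORA-01653 is normally resolved by giving the tablespace room to grow. A hedged sketch; the datafile paths are placeholders you would take from the dba_data_files query:

    ```sql
    -- Check which datafiles back the tablespace and whether they can autoextend.
    SELECT file_name, bytes / 1024 / 1024 AS size_mb, autoextensible
    FROM   dba_data_files
    WHERE  tablespace_name = 'DEV2_SOAINFRA';

    -- Either allow an existing file to grow ...
    ALTER DATABASE DATAFILE '/path/to/dev2_soainfra01.dbf'
      AUTOEXTEND ON NEXT 100M MAXSIZE 32767M;

    -- ... or add another datafile to the tablespace.
    ALTER TABLESPACE DEV2_SOAINFRA
      ADD DATAFILE '/path/to/dev2_soainfra02.dbf' SIZE 1G;
    ```

    Note that deleting rows alone does not lower the table's high-water mark, so a purge may still need to be followed by a shrink or move to return space.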
