GoldenGate: Initialization when Table has Clob

I have been testing GoldenGate and have successfully performed schema initialization when
all tables have scalar data types.
The problem appears when I include a table with a CLOB column in the test.
My CLOB table is simply:
desc johns_clob
Name      Null?      Type
ID        NOT NULL   NUMBER
MYCLOB               CLOB
I've loaded the CLOB column with a character string that is 30,000 characters long.
When I try to start the initial load, the process abends with the following information in my
ggserr.log file:
2010-11-10 16:47:40 INFO OGG-00963 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 10.143.204.77 (START EXTRACT INITLOAD ).
2010-11-10 16:47:40 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT INITLOAD starting.
2010-11-10 16:47:40 INFO OGG-01017 Oracle GoldenGate Capture for Oracle, initload.prm: Wildcard resolution set to IMMEDIATE because SOURCEISTABLE is used.
2010-11-10 16:47:40 INFO OGG-00992 Oracle GoldenGate Capture for Oracle, initload.prm: EXTRACT INITLOAD starting.
2010-11-10 16:47:41 ERROR OGG-01192 Oracle GoldenGate Capture for Oracle, initload.prm: Trying to use RMTTASK on data types which may be written as LOB chunks (Table: 'JOHNB.JOHNS_CLOB').
2010-11-10 16:47:41 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, initload.prm: PROCESS ABENDING.
I cannot find any reference to the message "Trying to use RMTTASK on data types which may be written as LOB chunks".
Any help is appreciated.
Thanks,
John
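(For what it's worth, OGG-01192 is raised because the direct-load RMTTASK method cannot ship columns that are written as LOB chunks. A common workaround is to switch the initial load to a file-based method: the extract writes to an extract file and a one-off batch Replicat loads it. A minimal sketch of the two parameter files, with placeholder credentials, host and directories, might look roughly like this:)

-- source: initload.prm (file-based initial load instead of RMTTASK)
EXTRACT INITLOAD
SOURCEISTABLE
USERID ggadmin, PASSWORD ggadmin       -- placeholder credentials
RMTHOST targethost, MGRPORT 7809       -- placeholder host/port
RMTFILE ./dirdat/ini, PURGE
TABLE JOHNB.JOHNS_CLOB;

-- target: initrep.prm (one-off batch Replicat that reads the extract file)
SPECIALRUN
END RUNTIME
USERID ggadmin, PASSWORD ggadmin       -- placeholder credentials
EXTFILE ./dirdat/ini
ASSUMETARGETDEFS                       -- assumes source and target structures match
MAP JOHNB.JOHNS_CLOB, TARGET JOHNB.JOHNS_CLOB;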

Hi,
Did you check this Metalink Doc?
Does GoldenGate Extract Support Oracle Spatial? [ID 971719.1]
Modified 07-JAN-2010 Type HOWTO Status PUBLISHED
Applies to:
Oracle GoldenGate - Version: 4.0.0 - Release: 4.0.0
Information in this document applies to any platform.
Solution
Issue:
Does GoldenGate Extract support Oracle Spatial?
Solution Overview:
Oracle Spatial is supported with limitations.
Solution Details:
Spatial is supported:
1. Like-to-like only. ASSUMETARGETDEFS is required in the Replicat parameter file;
SOURCEDEFS is not currently supported.
2. XMLType embedded in Spatial (and other UDTs and system data types) is not currently supported. (OS-7065)
3. Spatial types that include LOBs are not currently supported. (OS-7134)
--------------------------------- example --------------------------------
NOTE: The 'SDO' indicates a spatial type object.
--------------------------------- example --------------------------------
--- create source table ---
CREATE TABLE cities (
  city  VARCHAR2(30) PRIMARY KEY,
  state VARCHAR2(30),
  geom  mdsys.SDO_GEOMETRY
);
--- create target table ---
CREATE TABLE cities_t (
  city  VARCHAR2(30) PRIMARY KEY,
  state VARCHAR2(30),
  geom  mdsys.SDO_GEOMETRY
);
--- insert a sample row into the source table ---
INSERT INTO cities VALUES ('SF', 'CA',
  mdsys.sdo_geometry(2008, null, null,
    mdsys.sdo_elem_info_array(1, 1003, 3),
    mdsys.sdo_ordinate_array(-109, 37, -102, 40)));
commit;
SQL> select * from cities;      <== source table
CITY  STATE
GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
SF    CA
SDO_GEOMETRY(2008, NULL, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 3), SDO_ORDINATE_ARRAY(-109, 37, -102, 40))
SQL> select * from cities_t;     <== target table
CITY  STATE
GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
SF    CA
SDO_GEOMETRY(2008, NULL, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 3), SDO_ORDINATE_ARRAY(-109, 37, -102, 40))
Thanks.

Similar Messages

  • Show scrollpane only when table has x rows

    Is there any way to avoid using a scrollpane when a table currently has fewer than 5 rows?
    My table is embedded in a JScrollPane, but I want to show the scrollpane only if the table has at least
    'x' rows.
    Thanks,

    Something like...
    DefaultTableModel tm = new DefaultTableModel(data, index);
    int size = tm.getRowCount();
    JTable t = new JTable(tm);
    JPanel p = new JPanel();
    if (size > 5) {
        // enough rows: wrap the table in a scrollpane
        p.add(new JScrollPane(t));
    } else {
        // small table: add it directly, without a scrollpane
        p.add(t);
    }
    // add p to the JFrame

  • Need to hide row when table has 1 entry in adobe

    Dear Experts,
    I have written a SELECT statement in the Initialization section, and in the context I have declared table EKPO and, under it, EKET with a WHERE clause on EBELP. I have then created a subform for both tables: EKPO (role Body Row) and EKET (role Table for subform1, role Body Row for subform2).
    E.g.: Table EKPO
                  Table EKET
    I need to hide the EKET block when it has only one row, so I want to know the number of rows in the table: if there is exactly one row it should be hidden in the Adobe form, otherwise it should be shown.
    Sharrad Dixit
    dixitasharad at gmail

    Hi A,
    I hope, as per your previous post, you have already set the presentation variable. You can now write the column formula as:
    case when @{variables.country} = 'All Choices' then sum(revenue) by year else <your previous case to hide the USA column> end
    Hope this helps.
    Thank you,
    Dhar

  • How to CREATE VIEW to merge two tables each of which has CLOB-typed column

    I failed to create a view that merges two tables, each of which has a CLOB column.
    The details are:
    Database: Oracle 9i (9.2.0)
    Two tables, "test" and "test_bak", each with the following structure:
    ID Number(10, 0)
    DUMMY VARCHAR2(20)
    DUMMYCLOB CLOB
    The following operation fails:
    create view dummyview (id, dummy, dummyclob) as
    select id, dummy, dummyclob from test
    union
    select id, dummy, dummyclob from test_bak;
    The error I get is:
    select test.id, test.dummy, test.dummyclob
    ERROR at line 2:
    ORA-00932: inconsistent datatypes: expected - got CLOB
    But creating a view from only ONE table with a CLOB column, or from two tables WITHOUT CLOB columns, succeeds. Both 1) and 2) below work:
    1) one table, with CLOB-typed column
    create view dummyview (id, dummy, dummyclob) as
    select id, dummy, dummyclob from test;
    2) two tables, without CLOB-typed columns
    create view dummyview (id, dummy) as
    select id, dummy from test
    union
    select id, dummy from test_bak;
    I want to merge the two tables completely, with all columns. How should I write the CREATE VIEW statement?
    Many thanks in advance.

    Dong Wenyu,
    No.
    But you could do this:
    SELECT source.*, nvl (tab1.clob_column, tab2.clob_column)
    FROM your_table1 tab1, your_table2 tab2, (
    SELECT primary_key, ...
    FROM your_table1
    UNION
    SELECT primary_key, ...
    FROM your_table2
    ) source
    WHERE source.primary_key = tab1.id (+)
    AND source.primary_key = tab2.id (+)
    In other words, do the set operation (UNION (ALL)/INTERSECT/MINUS) on just the PK columns before pulling in the LOB columns.
    d.
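    With the table and column names from the question, that sketch could be fleshed out roughly like this (assuming NVL accepts CLOB arguments on your 9.2 patch level; if not, a CASE on t1.id IS NULL works the same way):
    CREATE OR REPLACE VIEW dummyview (id, dummy, dummyclob) AS
    SELECT src.id,
           NVL(t1.dummy, t2.dummy)         AS dummy,
           NVL(t1.dummyclob, t2.dummyclob) AS dummyclob
    FROM  (SELECT id FROM test
           UNION
           SELECT id FROM test_bak) src,
          test     t1,
          test_bak t2
    WHERE src.id = t1.id (+)
      AND src.id = t2.id (+);
    Note also that ORA-00932 comes from the implicit DISTINCT that UNION performs; if duplicate rows are acceptable, a plain UNION ALL of all three columns avoids the CLOB comparison entirely and needs no join at all.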

  • Error when the table has more than 500 rows

    Hi,All
    My war file works correctly in App Server 8, but in JBoss 4.0.2 it only works while the table has around 100 or 200 rows; with more than 500 rows I get the error below.
    2007-12-14 14:43:22,062 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[yx].[Faces Servlet]] Servlet.service() for servlet Faces Servlet threw exception
    com.sun.data.provider.DataProviderException: java.lang.reflect.InvocationTargetException
         at com.sun.data.provider.impl.MethodResultTableDataProvider.invokeDataMethod(MethodResultTableDataProvider.java:257)
         at com.sun.data.provider.impl.MethodResultTableDataProvider.invokeDataMethod(MethodResultTableDataProvider.java:215)
         at com.sun.data.provider.impl.MethodResultTableDataProvider.testInvokeDataMethod(MethodResultTableDataProvider.java:267)
         at com.sun.data.provider.impl.MethodResultTableDataProvider.getRowCount(MethodResultTableDataProvider.java:397)
         at com.sun.rave.web.ui.component.TableRowGroup.getRowKeys(TableRowGroup.java:806)
         at com.sun.rave.web.ui.component.TableRowGroup.getFilteredRowKeys(TableRowGroup.java:429)
         at com.sun.rave.web.ui.component.TableRowGroup.getSortedRowKeys(TableRowGroup.java:1385)
         at com.sun.rave.web.ui.component.TableRowGroup.getRenderedRowKeys(TableRowGroup.java:876)
         at com.sun.rave.web.ui.component.TableRowGroup.iterate(TableRowGroup.java:1956)
         at com.sun.rave.web.ui.component.TableRowGroup.processDecodes(TableRowGroup.java:1679)
         at javax.faces.component.UIComponentBase.processDecodes(UIComponentBase.java:880)
         at javax.faces.component.UIComponentBase.processDecodes(UIComponentBase.java:880)
         at com.sun.rave.web.ui.component.Form.processDecodes(Form.java:78)
         at javax.faces.component.UIComponentBase.processDecodes(UIComponentBase.java:880)
         at javax.faces.component.UIComponentBase.processDecodes(UIComponentBase.java:880)
         at javax.faces.component.UIComponentBase.processDecodes(UIComponentBase.java:880)
         at javax.faces.component.UIComponentBase.processDecodes(UIComponentBase.java:880)
         at javax.faces.component.UIViewRoot.processDecodes(UIViewRoot.java:306)
         at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:79)
         at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
         at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:194)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:81)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
         at org.jboss.web.tomcat.security.CustomPrincipalValve.invoke(CustomPrincipalValve.java:39)
         at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:153)
         at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:59)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
         at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
         at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
         at java.lang.Thread.run(Thread.java:595)
    Caused by: java.lang.reflect.InvocationTargetException
         at sun.reflect.GeneratedMethodAccessor359.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at com.sun.data.provider.impl.MethodResultTableDataProvider.invokeDataMethod(MethodResultTableDataProvider.java:236)
         ... 43 more
    Caused by: java.rmi.ServerException: RuntimeException; nested exception is:
         java.lang.NullPointerException
         at org.jboss.ejb.plugins.LogInterceptor.handleException(LogInterceptor.java:386)
         at org.jboss.ejb.plugins.LogInterceptor.invoke(LogInterceptor.java:196)
         at org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor.invoke(ProxyFactoryFinderInterceptor.java:122)
         at org.jboss.ejb.SessionContainer.internalInvoke(SessionContainer.java:624)
         at org.jboss.ejb.Container.invoke(Container.java:873)
         at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:141)
         at org.jboss.mx.server.Invocation.dispatch(Invocation.java:80)
         at org.jboss.mx.server.Invocation.invoke(Invocation.java:72)
         at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:249)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:644)
         at org.jboss.invocation.local.LocalInvoker$MBeanServerAction.invoke(LocalInvoker.java:155)
         at org.jboss.invocation.local.LocalInvoker.invoke(LocalInvoker.java:104)
         at org.jboss.invocation.InvokerInterceptor.invokeLocal(InvokerInterceptor.java:179)
         at org.jboss.invocation.InvokerInterceptor.invoke(InvokerInterceptor.java:165)
         at org.jboss.proxy.TransactionInterceptor.invoke(TransactionInterceptor.java:46)
         at org.jboss.proxy.SecurityInterceptor.invoke(SecurityInterceptor.java:55)
         at org.jboss.proxy.ejb.StatelessSessionInterceptor.invoke(StatelessSessionInterceptor.java:97)
         at org.jboss.proxy.ClientContainer.invoke(ClientContainer.java:86)
         at $Proxy129.getAmmeterByBooknumForInputData(Unknown Source)
         at khdf.khdfsession.KhdfSessionClient.getAmmeterByBooknumForInputData(KhdfSessionClient.java:214)
         ... 47 more
    Caused by: java.lang.NullPointerException
         at khdf.KhdfSessionBean.getAmmeterByBooknumForInputData(KhdfSessionBean.java:7024)
         at sun.reflect.GeneratedMethodAccessor360.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.jboss.invocation.Invocation.performCall(Invocation.java:345)
         at org.jboss.ejb.StatelessSessionContainer$ContainerInterceptor.invoke(StatelessSessionContainer.java:214)
         at org.jboss.resource.connectionmanager.CachedConnectionInterceptor.invoke(CachedConnectionInterceptor.java:185)
         at org.jboss.ejb.plugins.StatelessSessionInstanceInterceptor.invoke(StatelessSessionInstanceInterceptor.java:130)
         at org.jboss.webservice.server.ServiceEndpointInterceptor.invoke(ServiceEndpointInterceptor.java:51)
         at org.jboss.ejb.plugins.CallValidationInterceptor.invoke(CallValidationInterceptor.java:48)
         at org.jboss.ejb.plugins.AbstractTxInterceptor.invokeNext(AbstractTxInterceptor.java:105)
         at org.jboss.ejb.plugins.TxInterceptorCMT.runWithTransactions(TxInterceptorCMT.java:335)
         at org.jboss.ejb.plugins.TxInterceptorCMT.invoke(TxInterceptorCMT.java:166)
         at org.jboss.ejb.plugins.SecurityInterceptor.invoke(SecurityInterceptor.java:139)
         at org.jboss.ejb.plugins.LogInterceptor.invoke(LogInterceptor.java:192)
         ... 68 more

    Hi, Futeleufu_John
    Yes, all rows of the table are displayed on one page by setting the layout panel's property to "overflow: scroll".
    With the same war and jar files it works fine when I deploy to Sun App Server 8 no matter how many rows the table has, but in JBoss only a few hundred rows work, e.g. 200 rows are OK, while more than 500 rows produce the error.
    I have tested the EJB's method in JBuilder and it is fine with more than 500 rows, so the method itself works; the problem only appears when displaying in JBoss.
    The method KhdfSessionBean.getAmmeterByBooknumForInputData's code snippet follows:
    public AmmeterDTO[] getAmmeterByBooknumForInputData(String charcomid,
            Integer booknum, String operid) {
        if (charcomid == null || booknum == null) {
            return null;
        }
        ArrayList ammeterList = new ArrayList();
        AmmeterDTO ammeterDto = new AmmeterDTO();
        try {
            Context ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup(strJdbc);
            conn = ds.getConnection();
            callstat = conn.prepareCall("{call p_get_ammeterinfo(?,?,?,?,?)}");
            // ... bind parameters, execute the call and loop over the result set,
            // building one DTO per row (elided in the original post) ...
            ammeterDto = new AmmeterDTO(/* .......... */);
            ammeterList.add(ammeterDto);
        } catch (SQLException ex) {
        } catch (NamingException ex) {
        } catch (ParseException ex) {
        } finally {
            try {
                if (callstat != null) callstat.close();
                if (conn != null) conn.close();
                if (rs != null) rs.close();
            } catch (SQLException ex1) {
            }
        }
        return (AmmeterDTO[]) ammeterList.toArray(new AmmeterDTO[0]);
    }

  • How to make a form using a wizard when the table has no primary key ?

    Hi,
    I want to make a form to update and delete rows in a table. The table has no primary key, and the problem is that the wizard asks for one.
    How can I avoid using a primary key? I mean, I don't want to create a primary key if it is possible to avoid.
    I would like to use the wizard; is it possible ?
    Thank you for your kind answers.
    Christian

    I believe the key is choosing 'Interactive' as opposed to 'Classic' in the implementation; then you can choose 'Existing Trigger' for the primary key source, and it should work to use an existing column as your primary key.

  • Oracle 11g imp erroneously tries to recreate existing tables with CLOBs?

    I have a shell script for loading database dumps from both Datapump and the older exp/imp.
    Often when loading dumps, I need to rename the schema owner and tablespace names (which is handled by REMAP_SCHEMA and REMAP_TABLESPACE in Datapump).
    However I have a whole bunch of dumps created with exp at this point and not that many Datapump dumps yet. As such the old style dumps are handled by the shell script in this way:
    1) A first pass imp is run using INDEXFILE to generate a file with the SQL to create tables and indexes. Options also include FROMUSER and TOUSER.
    2) A series of sed command edit the SQL file to change the tablespace names (which are schema owner specific in our case).
    3) The edited SQL file is run with sqlplus to create the tables and indexes.
    4) A second pass imp is run to load the table rows as well as triggers, stored procedures, views, etc. Options include FROMUSER, TOUSER, COMMIT=Y, IGNORE=Y, BUFFER, STATISTICS=NONE, CONSTRAINTS=N
    This shell script has been working great for loading exp dump files into Oracle 9 and Oracle 10 databases, but now that I'm trying to load these dumps into Oracle 11, it fails.
    The problem is in step 4: the imp program tries to create some of the tables that were already created with sqlplus in step 3. The problematic tables all seem to have CLOB columns. The table creation fails because it tries to use the tablespace names from the dump file, which do not exist in the destination database, and when the table creation fails, imp then decides not to load the rows for those tables.
    This seems like a bug in the Oracle 11 imp program. I don't understand why it thinks it needs to recreate tables that already exist when those tables have CLOB columns. Is there something different about CLOB columns in Oracle 11 that I should know about that might be confusing imp into thinking that it needs to create tables when they already exist? Maybe I need to do something to those tables in SQL so that imp does not think it needs to recreate them?
    I know that the tables with the CLOBs were created correctly because I was trying to find some way to workaround this. For step 4, I tried using DATA_ONLY=Y, in which case imp does not try to create the tables and just loads the table rows. Of course using DATA_ONLY, I don't get a lot of other things like triggers, view and stored procedures. I started to try to get around that by doing 3 passes with imp, so that I could pick up the missing pieces by using an imp pass with ROWS=N, but strangely that has the same problem of trying to recreate the existing tables.

    The only solution I've found so far as a workaround is rather convoluted.
    1. I took an export using datapump's expdp of SCHEMA1 (in 10g it will skip the table with the xmltype).
    2. I imported the data to my empty schema (SCHEMA2) using impdp. To avoid the error that the type already exists with another OID, I used the TRANSFORM=oid:n parameter e.g.
    impdp user/pwd dumpfile=noxmltable.dmp logfile=importallbutxmltable.log remap_schema=SCHEMA1:SCHEMA2 TRANSFORM=oid:n directory=MYDUMPDIR
    3. I then manually created my XMLType table in SCHEMA2 and used an INSERT ... SELECT to load it (make sure you have the SELECT privileges to do so):
    INSERT INTO SCHEMA2.XMLTABLE2 SELECT * FROM SCHEMA1.XMLTABLE1;
    4. I am still taking an export with exp of the xmltable as well even though I'm not sure I can do anything with it.
    Thanks!
    Edited by: stacyz on Jul 28, 2009 9:49 AM

  • Understanding logminer results -- inserting row into table with CLOB field

    In using log miner I have noticed that inserts into rows that contain a CLOB (I assume this applies to other LOB type fields as well, have only tested with CLOB so far) field are actually recorded as two DML entries.
    --the first entry is the insert operation that inserts all values with an EMPTY_CLOB() for the CLOB field
    --the second entry is the update that sets the actual CLOB value (this is true even if the value of the CLOB field is not being set explicitly)
    This separation makes sense as there may be separate locations that the values are being stored etc.
    However, what I am tripping over is the fact that the first entry, the INSERT, has a ROWID value of 'AAAAAAAAAAAAAAAAAA', which is invalid if I attempt to use it in a flashback query such as:
    SELECT * FROM PERSON AS OF SCN ##### WHERE RowId = 'AAAAAAAAAAAAAAAAAA';
    The second operation, the UPDATE of the CLOB field, has the valid ROWID.
    Now, again, this makes sense if the insert of the new row is not really considered "done" until the two steps are done. However, is there some way to group these operations together when analyzing the log contents, to know that these two operations are a "matched set"?
    Not a total deal breaker, but it would be nice to know what is happening under the hood here so I don't act on any false assumptions.
    Thanks for any input.
    To replicate:
    Create a table with a CLOB field:
    CREATE TABLE DEVUSER.TESTTABLE (
        ID          NUMBER,
        FULLNAME    VARCHAR2(50),
        AGE         NUMBER,
        DESCRIPTION CLOB
    );
    Capture the before SCN:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Insert a new row in the test table:
    INSERT INTO TESTTABLE (ID, FULLNAME, AGE) VALUES (1, 'Robert BUILDER', 35);
    COMMIT;
    Capture the after SCN:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Start a LogMiner session with the bracketing SCN values and options:
    EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN=>2619174, ENDSCN=>2619191, -
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + -
        DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER)
    Query the logs for the changes in that range:
    SELECT commit_scn, xid, operation, table_name, row_id,
           sql_redo, sql_undo, rs_id, ssn
    FROM   V$LOGMNR_CONTENTS
    ORDER  BY xid ASC, sequence# ASC;
    Results:
    2619178  0C00070028000000  START                AAAAAAAAAAAAAAAAAA  set transaction read write
    2619178  0C00070028000000  INSERT  TESTTABLE    AAAAAAAAAAAAAAAAAA  insert into "DEVUSER"."TESTTABLE" ...
    2619178  0C00070028000000  UPDATE  TESTTABLE    AAAFEXAABAAALEJAAB  update "DEVUSER"."TESTTABLE" set "DESCRIPTION" = NULL ...
    2619178  0C00070028000000  COMMIT               AAAAAAAAAAAAAAAAAA  commit
    Edited by: 958701 on Sep 12, 2012 9:05 AM
    Edited by: 958701 on Sep 12, 2012 9:07 AM
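    One way to analyze the pairing, given the pattern above (a heuristic based on the observed ordering, not a documented link between the two entries), is to look at each LOB-piece UPDATE together with the operation that immediately precedes it inside the same transaction:
    SELECT xid,
           sequence#,
           operation,
           table_name,
           row_id,
           LAG(operation) OVER (PARTITION BY xid ORDER BY sequence#) AS prev_operation,
           LAG(sql_redo)  OVER (PARTITION BY xid ORDER BY sequence#) AS prev_sql_redo
    FROM   v$logmnr_contents
    WHERE  table_name = 'TESTTABLE'
    ORDER  BY xid, sequence#;
    An UPDATE row whose prev_operation is an INSERT on the same table within the same XID is, in this scenario, the LOB piece belonging to that insert, and it is the one carrying the real ROWID.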

    Scott,
    Thanks for the reply.
    I am inserting into the table over a database link.
    I am using the new version of HTML Db (2.0)
    HTML Db is connected to an Oracle 10 database I think, however the table I am trying to insert data into (via the database link) is in an Oracle 8 database - this is why we created a link to it as we couldn't have the HTML Db interacting with the Oracle 8 database directly due to compatibility problems (or so I've been told)
    Simon

  • Migrating a large table containing clobs to a new dedicated tablespace.

    Hi All
    I am seeking some advice on something.
    I have an Oracle 10g live database with a table used for auditing purposes; this table has 5.8 million rows and contains CLOB data. When I last performed a full export of this table using Data Pump, the dump file was around 40 GB in size and, as you can imagine, took a long time to complete.
    This table is currently located within the main data tablespace. With that in mind, I am looking at moving it to a new dedicated tablespace on dedicated hard disks. The theory is that this will allow easier DBA administration (separate RMAN backups of the new tablespace) and improve performance on the main data tablespace, since writes to the audit table will now go to separate disks in a dedicated tablespace. I have a test system that is a complete copy of live which I can use to test this.
    Can anyone advise the best way to achieve the transfer of the audit table to the new tablespace? I intend to create a new tablespace of the same size located on separate dedicated disks.
    Option 1
    I presume there are two main options. The first is a full data export of the table and then a re-import into the new tablespace, making sure to rebuild the indexes. While doing so, I presume the database would need to be inaccessible to users, otherwise inserts will still go to the source table I am trying to migrate to the new tablespace? I think I would need to take the system offline while doing this anyway, as the system would perform a lot slower while the export is taking place.
    Option 2
    Use the ALTER TABLE ... MOVE TABLESPACE feature? I am new to the DBA role, so I have little knowledge of this function.
    Please can you advise which of the above would be the best course of action, or if indeed there are any other options/solutions I am not aware of?
    Many Thanks in advance.

    Actually, most queries are using an index range scan or index full scan.
    But, just for the sake of this discussion, here is the simple query I mentioned.
    Note that my concern is about chained or migrated rows, and how to resolve them.
    But if my table contains 520 columns, how can I get around intra-block chaining?
    The question also still remains how can I tell the difference between row chaining, row migration, and intra-block chaining and which is it that is showing up in dba_tables.chained_cnt?
    simple query:
    SELECT C536870916,COUNT(T2179.C536881135)
    FROM aradmin.T2179
    WHERE ((T2179.C536871037 = 'Trouble') AND ((T2179.C536870944 = 'New')
    OR (T2179.C536870944 = 'Assigned') OR (T2179.C536870944 = 'On Hold')))
    GROUP BY C536870916
    ORDER BY C536870916
    Explain plan for simple query is:
    SELECT STATEMENT
    SORT GROUP BY NOSORT
    TABLE ACCESS BY INDEX ROWID
    INDEX FULL SCAN
    Also, on this table of 520 columns, there are already 61 indexes, of which 20 are LOB indexes, and one FB index. All others are normal indexes.
    My thinking on moving the table is that I would have to rebuild each and every index; however, I'm not sure how to rebuild LOB or FB indexes.
    So, back to my original question: can I safely "move" a table containing several CLOB columns and also how do I rebuild the LOB indexes?
    Thanks.
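    For reference, a sketch of the tablespace move for a table containing CLOB columns (owner, table, column, index and tablespace names below are placeholders) might look like this:
    -- Move the table and its LOB segments to the new tablespace;
    -- each CLOB column gets its own LOB ... STORE AS clause.
    ALTER TABLE audit_owner.audit_tab
      MOVE TABLESPACE audit_ts
      LOB (clob_col1) STORE AS (TABLESPACE audit_ts)
      LOB (clob_col2) STORE AS (TABLESPACE audit_ts);

    -- The move marks the table's regular indexes UNUSABLE; find and rebuild them.
    SELECT owner, index_name
    FROM   dba_indexes
    WHERE  table_owner = 'AUDIT_OWNER'
    AND    table_name  = 'AUDIT_TAB'
    AND    status      = 'UNUSABLE';

    ALTER INDEX audit_owner.audit_tab_ix1 REBUILD TABLESPACE audit_ts;
    As for the LOB indexes, Oracle maintains one internally for each LOB segment and normally relocates it along with the segment when the LOB ... STORE AS clause is used, so usually only the regular (and function-based) indexes need an explicit rebuild.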

  • Calculate row size of table with clob

    Hi
    I have a table which has a CLOB column; how can I calculate the size of each row?

    Mark D Powell wrote:
    What version of Oracle?
    Are you interested in the combined row size of the base table row and the LOB segment, or just the row size of the base table row? Or just the LOB segment size?
    Are you asking about a specific row or just the average for the entire table?
    Depending on your version of Oracle, dbms_stats may calculate the avg_row_len for the base table incorrectly when LOB columns are present.
    HTH -- Mark D Powell --

    I need the row size for every record in the table; the table itself is only 1 GB. It can be the combined size or the LOB size relevant to each record, it doesn't matter.
    v11.2
    thanks
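    On 11.2 a per-row estimate can be assembled from VSIZE for the scalar columns plus DBMS_LOB.GETLENGTH for the LOB. A rough sketch, with placeholder table and column names (note that GETLENGTH returns a character count for a CLOB, so multiply by the character width of your database character set if you need bytes):
    SELECT id,
           NVL(VSIZE(id), 0)
         + NVL(VSIZE(scalar_col), 0)               -- repeat for each scalar column
         + NVL(DBMS_LOB.GETLENGTH(clob_col), 0)    -- LOB length in characters
           AS approx_row_size
    FROM   my_clob_table;
    Rows whose CLOB is small enough to be stored in-row (roughly under 4000 bytes with ENABLE STORAGE IN ROW) live entirely in the table segment; larger values are stored out of line in the LOB segment, which is why the base-table size and the per-row totals can differ substantially.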

  • Error: "The search cannot be executed because the table has pending changes that would be lost."

    Hello,
    I'm working on developing an OA page that displays the contents of an Oracle table and allows the user to update records as needed.
    When I hit the Submit button to save the changes on the update page, control goes back to the main page (where all the table records are displayed), and it shows the updated record with the new information. However, when I then hit the "Go" button on the mainPG, I get the error "The search cannot be executed because the table has pending changes that would be lost." and the changes are not committed.
    Any suggestions on where I should look will be greatly appreciated.
    Posting the code for my controller
    =======================
              if ( pageContext.getParameter("saveRate") != null )
              personam.invokeMethod("saveRateToDatabase");
    Code from my AM
    =============
        public void saveRateToDatabase()
          getOADBTransaction().commit();
          System.out.println("40--After commit has been executed");
    Code from my VORowImpl
    ===================
    package cggv.oracle.apps.gl.server;
    import oracle.apps.fnd.framework.server.OAViewRowImpl;
    import oracle.jbo.domain.Date;
    import oracle.jbo.domain.Number;
    import oracle.jbo.server.AttributeDefImpl;
    // ---    File generated by Oracle ADF Business Components Design Time.
    // ---    Custom code may be added to this class.
    // ---    Warning: Do not modify method signatures of generated methods.
    public class xxCggGlRatesVORowImpl extends OAViewRowImpl {
        public static final int RATEID = 0;
        public static final int FROMCURRENCY = 1;
        public static final int TOCURRENCY = 2;
        public static final int FROMCONVERSIONDATE = 3;
        public static final int TOCONVERSIONDATE = 4;
        public static final int USERCONVERSIONTYPE = 5;
        public static final int CONVERSIONRATE = 6;
        public static final int MODEFLAG = 7;
        /**This is the default constructor (do not remove)
        public xxCggGlRatesVORowImpl() {
        /**Gets the attribute value for the calculated attribute RateId
        public Number getRateId() {
            return (Number) getAttributeInternal(RATEID);
        /**Sets <code>value</code> as the attribute value for the calculated attribute RateId
        public void setRateId(Number value) {
            setAttributeInternal(RATEID, value);
            //populateAttribute(RATEID, value);
        /**Gets the attribute value for the calculated attribute FromCurrency
        public String getFromCurrency() {
            return (String) getAttributeInternal(FROMCURRENCY);
        /**Sets <code>value</code> as the attribute value for the calculated attribute FromCurrency
        public void setFromCurrency(String value) {
            setAttributeInternal(FROMCURRENCY, value);      
        /**Gets the attribute value for the calculated attribute ToCurrency
        public String getToCurrency() {
            return (String) getAttributeInternal(TOCURRENCY);
        /**Sets <code>value</code> as the attribute value for the calculated attribute ToCurrency
        public void setToCurrency(String value) {
            setAttributeInternal(TOCURRENCY, value);
        /**Gets the attribute value for the calculated attribute FromConversionDate
        public Date getFromConversionDate() {
            return (Date) getAttributeInternal(FROMCONVERSIONDATE);
        /**Sets <code>value</code> as the attribute value for the calculated attribute FromConversionDate
        public void setFromConversionDate(Date value) {
            setAttributeInternal(FROMCONVERSIONDATE, value);      
        /**Gets the attribute value for the calculated attribute ToConversionDate
        public Date getToConversionDate() {
            return (Date) getAttributeInternal(TOCONVERSIONDATE);
        /**Sets <code>value</code> as the attribute value for the calculated attribute ToConversionDate
        public void setToConversionDate(Date value) {
            setAttributeInternal(TOCONVERSIONDATE, value);       
        /**Gets the attribute value for the calculated attribute UserConversionType
        public String getUserConversionType() {
            return (String) getAttributeInternal(USERCONVERSIONTYPE);
        /**Sets <code>value</code> as the attribute value for the calculated attribute UserConversionType
        public void setUserConversionType(String value) {
            setAttributeInternal(USERCONVERSIONTYPE, value);
        /**Gets the attribute value for the calculated attribute ConversionRate
        public Number getConversionRate() {
            return (Number) getAttributeInternal(CONVERSIONRATE);
        /**Sets <code>value</code> as the attribute value for the calculated attribute ConversionRate
        public void setConversionRate(Number value) {
            setAttributeInternal(CONVERSIONRATE, value);
        /**Gets the attribute value for the calculated attribute ModeFlag
        public String getModeFlag() {
            return (String) getAttributeInternal(MODEFLAG);
        /**Sets <code>value</code> as the attribute value for the calculated attribute ModeFlag
        public void setModeFlag(String value) {
            setAttributeInternal(MODEFLAG, value);      
        /**getAttrInvokeAccessor: generated method. Do not modify.
        protected Object getAttrInvokeAccessor(int index,
                                               AttributeDefImpl attrDef) throws Exception {
            switch (index) {
            case RATEID:
                return getRateId();
            case FROMCURRENCY:
                return getFromCurrency();
            case TOCURRENCY:
                return getToCurrency();
            case FROMCONVERSIONDATE:
                return getFromConversionDate();
            case TOCONVERSIONDATE:
                return getToConversionDate();
            case USERCONVERSIONTYPE:
                return getUserConversionType();
            case CONVERSIONRATE:
                return getConversionRate();
            case MODEFLAG:
                return getModeFlag();
            default:
                return super.getAttrInvokeAccessor(index, attrDef);
        /**setAttrInvokeAccessor: generated method. Do not modify.
        protected void setAttrInvokeAccessor(int index, Object value,
                                             AttributeDefImpl attrDef) throws Exception {
            switch (index) {
            case RATEID:
                setRateId((Number)value);
                return;
            case FROMCURRENCY:
                setFromCurrency((String)value);
                return;
            case TOCURRENCY:
                setToCurrency((String)value);
                return;
            case FROMCONVERSIONDATE:
                setFromConversionDate((Date)value);
                return;
            case TOCONVERSIONDATE:
                setToConversionDate((Date)value);
                return;
            case USERCONVERSIONTYPE:
                setUserConversionType((String)value);
                return;
            case CONVERSIONRATE:
                setConversionRate((Number)value);
                return;
            case MODEFLAG:
                setModeFlag((String)value);
                return;
            default:
                super.setAttrInvokeAccessor(index, value, attrDef);
                return;
        /**Gets xxCggGlRatesEO entity object.
        public xxCggGlRatesEOImpl getxxCggGlRatesEO() {
            return (xxCggGlRatesEOImpl)getEntity(0);

    Hi,
    Check these links:
    Oracle Apps: Search cannot be executed because the table has pending changes that would be lost
    Re: Getting error in search page search cannot be executed
    http://jneelmani.blogspot.in/2009/11/oaf-search-cannot-be-executed-because.html
    --Sushant
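    A frequent cause of this error is that a row gets marked as changed programmatically, for example by calling setAttributeInternal(...) on a transient flag such as ModeFlag above, so the VO believes there are pending edits even though the user changed nothing. A rough sketch of the two usual remedies (assuming the controller and row class shown above; adjust names to your own page):
    // In the search page controller, before re-executing the query:
    // discard pending row changes so the search can run. Only do this
    // if losing those uncommitted edits is acceptable at that point.
    OAApplicationModule am = pageContext.getApplicationModule(webBean);
    am.getOADBTransaction().rollback();

    // Alternatively, when your own code sets a value purely for display or
    // flagging, populate it without marking the row as changed, e.g. inside
    // xxCggGlRatesVORowImpl:
    //     populateAttribute(MODEFLAG, value);   // does not dirty the row
    // instead of:
    //     setAttributeInternal(MODEFLAG, value);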

  • "Error:The search cannot be executed because the table has pending changes that would be lost", after DELETE

    Good day,
    On the Search page I searched for records, deleted one, and got the confirmation message that the record was deleted. The next time I search for any record I get the error below.
    Error
    The search cannot be executed because the table has pending changes that would be lost.
    Could you please help me fix this issue? Your response is highly appreciated.
    Item properties:
    Item Style : Image
    Action Type: Fire Action
    Event : delete
    Below is the code using in CO and AM
    Controller (processFormRequest):
    if ("delete".equals(pageContext.getParameter(EVENT_PARAM)))
              // The user has clicked a "Delete" icon so we want to display a "Warning"
              // dialog asking if she really wants to delete the employee. Note that we
              // configure the dialog so that pressing the "Yes" button submits to
              // this page so we can handle the action in this processFormRequest( ) method.
              String visit_id = pageContext.getParameter("visit_id");
              String employeeName = pageContext.getParameter("last_name") + ", " + pageContext.getParameter("first_name");
              MessageToken[] tokens = { new MessageToken("EMP_NAME", employeeName)};
              OAException mainMessage = new OAException("FND", "XXXX_EMP_DELETE_WARN", tokens);
              // Note that even though we're going to make our Yes/No buttons submit a
              // form, we still need some non-null value in the constructor's Yes/No
              // URL parameters for the buttons to render, so we just pass empty
              // Strings for this.
              OADialogPage dialogPage = new OADialogPage(OAException.WARNING,
                mainMessage, null, "", "");
              // Always use Message Dictionary for any Strings you want to display.
              String yes = pageContext.getMessage("AK", "FWK_TBX_T_YES", null);
              String no = pageContext.getMessage("AK", "FWK_TBX_T_NO", null);
              // We set this value so the code that handles this button press is
              // descriptive.
    dialogPage.setOkButtonItemName("DeleteYesButton");
              // The following configures the Yes/No buttons to be submit buttons,
              // and makes sure that we handle the form submit in the originating
              // page (the "Employee" summary) so we can handle the "Yes"
              // button selection in this controller.
    dialogPage.setOkButtonToPost(true);
    dialogPage.setNoButtonToPost(true);
    dialogPage.setPostToCallingPage(true);
              // Now set our Yes/No labels instead of the default OK/Cancel.
    dialogPage.setOkButtonLabel(yes);
    dialogPage.setNoButtonLabel(no);
              // We need to keep hold of the employeeNumber and employeeName.
              // The OADialogPage gives us a convenient means
              // of doing this. Note that the use of the Hashtable is 
              // most appropriate for passing multiple parameters. See the OADialogPage
              // javadoc for an alternative when dealing with a single parameter.
              java.util.Hashtable formParams = new java.util.Hashtable(1);
    formParams.put("visit_id", visit_id);
    formParams.put("empName", employeeName);
    dialogPage.setFormParameters(formParams);
              pageContext.redirectToDialogPage(dialogPage);
        else if (pageContext.getParameter("DeleteYesButton") != null)
              // User has confirmed that she wants to delete this employee.
              // Invoke a method on the AM to set the current row in the VO and
              // call remove() on this row.
              String employeeNumber = pageContext.getParameter("visit_id");
              String employeeName = pageContext.getParameter("empName");
              Serializable[] parameters = { employeeNumber };
             // OAApplicationModule am = pageContext.getApplicationModule(webBean);
    am.invokeMethod("deleteEmployee", parameters);
              // Now, redisplay the page with a confirmation message at the top. Note
              // that the deleteEmployee() method in the AM commits, and our code
              // won't get this far if any exceptions are thrown.
              MessageToken[] tokens = { new MessageToken("EMP_NAME", employeeName) };
              OAException message = new OAException("FND",
                "XXXX_EMP_DELETE_CONFIRM", tokens, OAException.CONFIRMATION, null);
    pageContext.putDialogMessage(message);
    Application Module:
      public void deleteEmployee(String visit_id)
      {
        // First, we need to find the selected employee in our VO.
        // When we find it, we call remove() on the row, which in turn
        // calls remove on the associated EmployeeEOImpl object.
        int empToDelete = Integer.parseInt(visit_id);
        OAViewObject vo = (OAViewObject)getNonEmployeesSummaryVO1();
        NonEmployeesSummaryVORowImpl row = null;

        // This tells us the number of rows that have been fetched in the
        // row set, and will not pull additional rows in like some of the
        // other "get count" methods.
        int fetchedRowCount = vo.getFetchedRowCount();

        // We use a separate iterator -- even though we could step through the
        // rows without it -- because we don't want to affect row currency.
        RowSetIterator deleteIter = vo.createRowSetIterator("deleteIter");

        if (fetchedRowCount > 0)
        {
          deleteIter.setRangeStart(0);
          deleteIter.setRangeSize(fetchedRowCount);
          for (int i = 0; i < fetchedRowCount; i++)
          {
            row = (NonEmployeesSummaryVORowImpl)deleteIter.getRowAtRangeIndex(i);
            // For performance reasons, we generate ViewRowImpls for all
            // View Objects. When we need to obtain an attribute value,
            // we use the named accessors instead of a generic String lookup.
            // Number primaryKey = (Number)row.getAttribute("EmployeeId");
            Number primaryKey = row.getVisitId();
            if (primaryKey.compareTo(empToDelete) == 0)
            {
              // This performs the actual delete.
              row.remove();
              getTransaction().commit();
              break; // only one possible selected row in this case
            }
          }
        }
        // Always close the iterator when you're done.
        deleteIter.closeRowSetIterator();
      } // end deleteEmployee
    Thanks,
    Ravi

    Hi
    Check this link Getting error in search page search cannot be executed
    Regards,
    Dilip

  • Table has 85 GB data space, zero rows

    This table has only one column. I ran a transaction that inserted more than a billion rows into this table but then rolled it back before completion.
    This table currently has zero rows but a select statement takes about two minutes to complete, and waits on I/O.
    The interesting thing here is that the usual explanation for this, ghost records left behind by deletes, does not apply:
    there are none; m_ghostRecCnt is zero for all data pages.
    This is obviously not a situation in which the pages were placed on a deferred-drop queue either, or else the page count would be decreasing over time, and it is not.
    This is the output of DBCC PAGE for one of the pages:
    PAGE: (3:88910)
    BUFFER:
    BUF @0x0000000A713AD740
    bpage = 0x0000000601542000          bhash = 0x0000000000000000          bpageno = (3:88910)
    bdbid = 35                          breferences = 0                     bcputicks = 0
    bsampleCount = 0                    bUse1 = 61857                       bstat = 0x9
    blog = 0x15ab215a                   bnext = 0x0000000000000000
    PAGE HEADER:
    Page @0x0000000601542000
    m_pageId = (3:88910)                m_headerVersion = 1                 m_type = 1
    m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8208
    m_objId (AllocUnitId.idObj) = 99    m_indexId (AllocUnitId.idInd) = 256
    Metadata: AllocUnitId = 72057594044416000
    Metadata: PartitionId = 72057594039697408                               Metadata: IndexId = 0
    Metadata: ObjectId = 645577338      m_prevPage = (0:0)                  m_nextPage = (0:0)
    pminlen = 4                         m_slotCnt = 0                       m_freeCnt = 8096
    m_freeData = 7981                   m_reservedCnt = 0                   m_lsn = (1010:2418271:29)
    m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
    m_tornBits = -249660773             DB Frag ID = 1
    Allocation Status
    GAM (3:2) = ALLOCATED               SGAM (3:3) = NOT ALLOCATED
    PFS (3:80880) = 0x40 ALLOCATED 0_PCT_FULL                               DIFF (3:6) = CHANGED
    ML (3:7) = NOT MIN_LOGGED
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    Querying the allocation units system catalog shows that all pages are counted as "used".
    I saw some articles, such as the ones listed below, which address similar situations where pages aren't deallocated from a HEAP after a delete operation. It turns out pages are only deallocated from a heap when a table-level lock is taken.
    http://blog.idera.com/sql-server/howbigisanemptytableinsqlserver/
    http://www.sqlservercentral.com/Forums/Topic1182140-392-1.aspx
    https://support.microsoft.com/kb/913399/en-us
    To rule this out, I inserted another 100k rows which caused no change on page counts, and then deleted all entries with a TABLOCK query hint. Only one page was deleted.
    So, it appears we have a problem with pages that were created during a transaction that was rolled back, huh? I guess rolling back a transaction doesn't take certain physical factors into consideration.
    I've looked everywhere but couldn't find a satisfactory answer to this. Does anybody have any ideas?
    Just because there are clouds in the sky it doesn't mean it isn't blue. Some people would disagree.

    And this is the reason why you should have heaps (unless your name is Thomas Kejser :-).
    Try TRUNCATE TABLE. Or ALTER TABLE tbl REBUILD.
    Erland Sommarskog, SQL Server MVP, [email protected]
    I rebuilt the HEAP a while ago, and then all pages were gone. I don't know if TRUNCATE would have the same result; I would have to repeat the test to find out. There are many ways to fix the problem itself, including creating a clustered index as Satish suggested.
    I'd like to focus on the interesting fact I wanted to bring to the table for discussion: you open a transaction, insert a huge load of records, and then roll back. Why would the engine leave the pages created during the transaction behind? More specifically, why would they not be marked as "free pages" if they are all empty? Why are they not marked as free so scans would skip them, instead of generating a lot of I/O throughput and long response times just to query a zero-row table? Isn't this like a design flaw or a bug?
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this is an undocumented behavior and should not be relied upon.
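    A minimal sketch of the check and the two fixes mentioned above (the table name is a placeholder):
    -- How many pages the heap still owns versus how many rows it reports
    SELECT index_id, used_page_count, row_count
    FROM   sys.dm_db_partition_stats
    WHERE  object_id = OBJECT_ID('dbo.AuditHeap');

    -- Either of these deallocates the empty pages left behind on the heap
    TRUNCATE TABLE dbo.AuditHeap;
    ALTER TABLE dbo.AuditHeap REBUILD;
    (Creating a clustered index, as suggested in the thread, also removes the problem by converting the heap.)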

  • How to print a graph in which internal table has more than 32 entries?

    Hi experts,
    I am trying to make a line graph using the 'GFW_PRES_SHOW' function module, but the report's internal table has more than 32 entries.
    How can I print a graph with more than 32 entries?

    Hi ricky_lv,
    According to your description, there is a main report with a subreport in it, and when the subreport spans multiple pages you want to show the column headers on each page. If that is the case, we can make the column headers stay visible while scrolling in the main report.
    For detail information, please refer to the following steps:
    In design mode, click the small drop down arrow next to Column Groups and select Advanced Mode.
    Go to your Row Groups pane, click on the first static member.
    In properties grid, set FixData to True.
    Set RepeatOnNextPage to True.
    Here is a relevant thread you can reference:
    https://social.technet.microsoft.com/Forums/en-US/e1f67cec-8fa3-4c5d-86ba-28b57fc4a211/keep-header-rows-visible-while-scrolling?forum=sqlreportingservi
    The following screenshots are for your reference:
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Need to render the edit button when table is not Empty

    Hi,
    I want to render the Edit button only when there are records in the table to select. Is there a way to check, in the backing bean of my page, whether the view is empty or has records? Please suggest any ideas that would work.
    Thanks in advance

    Hey guys, I just did it and it is really simple, so I wanted to share it with you. It is all the power of the EL language.
    For the rendered property of the Edit button, bind it to the iterator, like (bindings.SatAppliViewObj1Iterator.currentRow != null). That's it; now the button will only be rendered if the table has some records. There are a lot of facilities, but no proper guide explaining them... So guys, please keep sharing your experiences with us.
    Message was edited by:
    user526927
