Replicating CLOBs and BLOBs in a remote database across a dblink

9iR2
When creating a materialized view in a warehouse pointing to a remote table in an OLTP environment, I got this error when trying to replicate a table with 3 clobs.
ORA-22992: cannot use LOB locators selected from remote tables
So, how does Oracle recommend replicating BLOBs and CLOBs in a remote database/warehouse? Evidently using materialized views doesn't work.

MV replication is obsolete.
Move to 10gR2 and use Streams.
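
For reference, the workaround usually cited for ORA-22992 on these releases is to avoid referencing the remote LOB locators directly and instead materialize the rows into a local staging table, then build the MV or reports on top of that copy. A minimal sketch, assuming placeholder names (stage_docs, docs, oltp_link) rather than anything from the original post:

-- Pulls the LOB contents across the link by materializing them locally;
-- a plain SELECT of the LOB columns over the link would raise ORA-22992.
INSERT INTO stage_docs (doc_id, clob1, clob2, clob3)
SELECT doc_id, clob1, clob2, clob3
FROM   docs@oltp_link;
COMMIT;

CREATE TABLE ... AS SELECT over the link works the same way and is often used for the initial copy; the INSERT form is then rerun (with a suitable delta filter) to refresh.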

Similar Messages

  • How we handle CLOB and BLOB Datatypes in HANA DB

    Dear HANA Gurus,
    We would like to build an EDW on HANA based on our Oracle source system, which supports the CLOB and BLOB datatypes.
    Could you please suggest how we should handle these in HANA DB?
    Let's not say it's Oracle-specific.
    Regards,
    Manoj

    Hello,
    check the SAP HANA SQL Reference Guide for the list of data types
    (page 14 - Classification of Data Types):
    https://service.sap.com/~sapidb/011000358700000604922011
    For this purpose the following data types might be useful:
    Large Object (LOB) Types
    LOB (large objects) data types, CLOB, NCLOB and BLOB, are used to store a large amount of data such as text documents and images. The maximum size of an LOB is 2 GB.
    BLOB
    The BLOB data type is used to store large binary data.
    CLOB
    The CLOB data type is used to store large ASCII character data.
    NCLOB
    The NCLOB data type is used to store a large Unicode character object.
    Tomas
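
    As a quick illustration of the reply above (a sketch only; the table and column names are invented), these types can be used directly in a HANA column table:

    CREATE COLUMN TABLE doc_archive (
      doc_id   INTEGER PRIMARY KEY,
      doc_txt  CLOB,    -- large ASCII text
      doc_utxt NCLOB,   -- large Unicode text
      doc_bin  BLOB     -- large binary content such as images
    );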

  • Oracle VPD on Remote database using DBLINk

    Hi All,
    How can I apply row level security on a table that is available in another database using a dblink?
    We have two databases, PDSSM and EVTA, and I would like to apply row level security on a table in EVTA from a schema in PDSSM using a dblink. MXODSADM is a schema in EVTA and MXEMBARGO is a schema in PDSSM. There is a dblink (EVTA.GMM.COM) between MXEMBARGO and MXODSADM.
    begin
    dbms_rls.add_policy (
    object_schema => 'MXODSADM',
    object_name => 'vehicle_retail_sale',
    policy_name => 'MXEMBARGO_EVTA_POLICY',
    function_schema => 'MXEMBARGO',
    policy_type => dbms_rls.SHARED_CONTEXT_SENSITIVE,
    --policy_type => dbms_rls.STATIC,
    policy_function => 'MXEMBARGO_EVTA_POLICY.MXEMBARGO_EVTA_PREDICATE',
    statement_types => 'select,insert,update,delete',
    update_check => TRUE,
    enable => TRUE,
    static_policy => TRUE
    );
    end;
    /
    I am purely a database person and I need to do this in my application. Can anyone show me how to do this using a dblink?
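
    As a side note on the call above: whatever DBMS_RLS.ADD_POLICY points at must be a function with the fixed VPD signature (two VARCHAR2 inputs and a VARCHAR2 return holding the predicate), and since DBMS_RLS works on local objects, the policy and its function would normally be created in the database that owns the table (EVTA) rather than through the dblink. A sketch only; the predicate text is invented:

    CREATE OR REPLACE PACKAGE mxembargo_evta_policy AS
      FUNCTION mxembargo_evta_predicate (p_schema IN VARCHAR2,
                                         p_object IN VARCHAR2) RETURN VARCHAR2;
    END;
    /
    CREATE OR REPLACE PACKAGE BODY mxembargo_evta_policy AS
      FUNCTION mxembargo_evta_predicate (p_schema IN VARCHAR2,
                                         p_object IN VARCHAR2) RETURN VARCHAR2 IS
      BEGIN
        -- Return a WHERE-clause fragment; column and context names here are examples.
        RETURN 'region_cd = SYS_CONTEXT(''USERENV'', ''CLIENT_IDENTIFIER'')';
      END;
    END;
    /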

    wojpik wrote:
    hello
    I have one short question to you.
    Is it possible to create a view in a remote database using a dblink? The following syntax returns an error:
    create view ViewName@DbLinkDame (ColumnName) as
    (select 1 from dual )
    "ORA-00905:missing keyword"
    Is that possible at all?
    And particulary - is that possible when remote database is MSSQL and I am using heterogeneous services?
    I really appreciate your help
    best regards
    Wojtek
    Edited by: wojpik on Oct 21, 2009 3:59 AM
    I doubt you would be able to fire any DDL through a database link. You have to connect to the remote database to run any DDL, whether it is Oracle or some other database.
    Regards
    Anurag
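
    For what it is worth, one workaround often mentioned for running DDL on a remote Oracle database without logging in there is to call DBMS_UTILITY.EXEC_DDL_STATEMENT through the link. A sketch, with a placeholder link name; this does not apply when the remote side is MSSQL via heterogeneous services:

    BEGIN
      -- The DDL is parsed and executed in the database the link points to.
      dbms_utility.exec_ddl_statement@remote_link(
        'CREATE VIEW view_name (column_name) AS SELECT 1 FROM dual');
    END;
    /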

  • Insert a blob in remote database using dblink

    I have a view (it has a BLOB column) from which I need to select records. After selecting them, I need to insert them into a synonym in the remote database through a db link.
    If I execute the procedure I get the error ORA-22992: cannot use LOB locators selected from remote tables. My code is:
    INSERT INTO [email protected]
    SELECT PID,RNO, PTYPE,blob_field
    FROM view;
    I don't wish to create a temporary table, yet still wish to perform the above operation.
    So is there any method to do this? I tried DBMS_LOB.APPEND but it didn't work out. Any solution will be greatly appreciated.
    Thanks,
    -Nitin

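    Since ORA-22992 is about LOB locators coming from the remote side, one way around it on older releases is to reverse the direction of the transfer: schedule the statement in the target database, with a link pointing back at the source, so the LOB data is materialized straight into a local table. A sketch with made-up names (target_table, source_view, src_link):

    -- Run while connected to the target database; src_link points back to the
    -- database that owns the view.
    INSERT INTO target_table (pid, rno, ptype, blob_field)
    SELECT pid, rno, ptype, blob_field
    FROM   source_view@src_link;
    COMMIT;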

  • ?Working with clob and blob - using Dbms_Lob

    I need to search through a BLOB and remove some of the data, but I am having problems working with dbms_lob.erase.
    Reading the documentation, the procedure is supposed to work with either BLOBs or CLOBs, but I can't get it to work with the BLOB.
    Here's what I've coded, and it does not work correctly for BLOBs.
    What have I done wrong?
    declare
    v_start                   integer;
    v_stop                    integer;
    v_amount                  integer;
    v_max_len                  integer:=32676;
    v_offset                   integer:=1;
    v_new_length               integer;
    v_clob clob;
    v_blob blob;
    begin
    update test_clob
    set clob_id = clob_id
    where clob_id = 1
    returning clob_desc into v_clob;
    v_start := 0;
    v_stop  := 0;
    v_amount := 0;
    v_start := dbms_lob.instr(v_clob, '<property name="Name">SortMode', v_offset );
    v_stop  := dbms_lob.instr(v_clob, '</property>',  v_start );
    v_amount := ((v_stop - v_start)+11) ;
    dbms_output.put_line('Clob: '||v_clob);
    dbms_lob.erase(v_clob, v_amount, v_start);
    dbms_output.put_line('Clob: '||v_clob);
    rollback;
    update test_clob
    set clob_id = clob_id
    where clob_id = 1
    returning blob_desc into v_blob;
    v_start := 0;
    v_stop  := 0;
    v_amount := 0;
    v_start := dbms_lob.instr(v_blob, utl_raw.cast_to_raw('<property name="Name">SortMode'), v_offset );
    v_stop  := dbms_lob.instr(v_blob, utl_raw.cast_to_raw('</property>'),  v_start );
    v_amount := ((v_stop - v_start)+11) ;
    dbms_output.put_line('Blob: '||utl_raw.cast_to_varchar2(v_blob) );
    dbms_lob.erase(v_blob, v_amount, v_start);
    dbms_output.put_line('Blob: '||utl_raw.cast_to_varchar2(v_blob) );
    rollback;
    end trg_bui_user_assoc_layout;
    /
    This is the output:
    Clob: this is only a test <property name="Name">SortMode</property>  should leave this alone
    Clob: this is only a test                                            should leave this alone
    Blob: this is only a test <property name="Name">SortMode</property>  should leave this alone
    Blob: this is only a test

    Well, you left out the table DDL and your insert for sample data (would be nice to have) as well as your Oracle version (pretty much a necessity).
    Since I had to make my own, there could be a difference in how you populated your table, but I can't reproduce your findings.
    ME_ORCL?drop table test_clob purge;
    Table dropped.
    Elapsed: 00:00:00.09
    ME_ORCL?
    ME_ORCL?create table test_clob
      2  (
      3     clob_id     number not null primary key,
      4     clob_desc   clob,
      5     blob_desc   blob
      6  );
    Table created.
    Elapsed: 00:00:00.03
    ME_ORCL?
    ME_ORCL?insert into test_clob values
      2  (
      3        1
      4     ,  'this is only a test <property name="Name">SortMode</property>  should leave this alone'
      5     ,  utl_raw.cast_to_raw('this is only a test <property name="Name">SortMode</property>  should leave this alone')
      6  );
    1 row created.
    Elapsed: 00:00:00.01
    ME_ORCL?
    ME_ORCL?commit;
    Commit complete.
    Elapsed: 00:00:00.01
    ME_ORCL?
    ME_ORCL?declare
      2  v_start                   integer;
      3  v_stop                    integer;
      4  v_amount                  integer;
      5  v_max_len                  integer:=32676;
      6  v_offset                   integer:=1;
      7  v_new_length               integer;
      8
      9  v_clob clob;
    10  v_blob blob;
    11
    12  begin
    13
    14   update test_clob
    15   set clob_id = clob_id
    16   where clob_id = 1
    17   returning clob_desc into v_clob;
    18
    19   v_start := 0;
    20   v_stop  := 0;
    21   v_amount := 0;
    22
    23   v_start := dbms_lob.instr(v_clob, '<property name="Name">SortMode', v_offset );
    24   v_stop  := dbms_lob.instr(v_clob, '</property>',  v_start );
    25   v_amount := ((v_stop - v_start)+11) ;
    26
    27   dbms_output.put_line('Clob: '||v_clob);
    28
    29   dbms_lob.erase(v_clob, v_amount, v_start);
    30
    31   dbms_output.put_line('Clob: '||v_clob);
    32
    33   rollback;
    34
    35   update test_clob
    36   set clob_id = clob_id
    37   where clob_id = 1
    38   returning blob_desc into v_blob;
    39
    40   v_start := 0;
    41   v_stop  := 0;
    42   v_amount := 0;
    43
    44   v_start := dbms_lob.instr(v_blob, utl_raw.cast_to_raw('<property name="Name">SortMode'), v_offset );
    45   v_stop  := dbms_lob.instr(v_blob, utl_raw.cast_to_raw('</property>'),  v_start );
    46   v_amount := ((v_stop - v_start)+11) ;
    47
    48   dbms_output.put_line('Blob: '||utl_raw.cast_to_varchar2(v_blob) );
    49
    50   dbms_lob.erase(v_blob, v_amount, v_start);
    51
    52   dbms_output.put_line('Blob: '||utl_raw.cast_to_varchar2(v_blob) );
    53
    54   rollback;
    55
    56  end trg_bui_user_assoc_layout;
    57  /
    Clob: this is only a test <property name="Name">SortMode</property>  should leave this alone
    Clob: this is only a test                                            should leave this alone
    Blob: this is only a test <property name="Name">SortMode</property>  should leave this alone
    Blob: this is only a test                                            should leave this alone
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.03
    ME_ORCL?select *
      2  from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    5 rows selected.
    Elapsed: 00:00:00.03
    ME_ORCL?

  • Can't use clob and blob

    I just created a new database using dbca with General Purpose as the template, as I've done before. All went well until I tried to create tables that used clob or blob datatypes. I kept getting the error "ORA-03001: unimplemented feature".
    What happened? Was the database created improperly? Is there a way to correct this without recreating the database?
    Thanks

    I just created a new database using dbca with General
    Purpose as the template, as I've done before. All
    went well until I tried to create tables that used
    clob or blob datatypes. I kept getting the error
    "ORA-03001: unimplemented feature".
    I have the same problem, but only if I assign a tablespace to the table; not if I leave it to the default tablespace.
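
    If the error really is tied to the tablespace, one way to narrow it down is to keep the table where it is but direct the LOB segment explicitly, since the two can live in different tablespaces. A sketch with invented names:

    CREATE TABLE doc_test (
      id  NUMBER PRIMARY KEY,
      doc CLOB
    )
    TABLESPACE app_data
    LOB (doc) STORE AS (TABLESPACE app_lobs ENABLE STORAGE IN ROW);

    If the statement fails only when one particular tablespace is named in either clause, the attributes of that tablespace are the place to look.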

  • Solution manager and CCMS monitoring of remote database

    I have Solution Manager 7 SP Basis 6 installed. I have Oracle remote databases connected successfully. Under DBCockpit I see the instances I want to monitor. I can see the backups, tablespaces, etc. These remote instances are SAP Portal Java-only version 7, but under the DBCockpit alerts only the alert monitor shows. The database check and check conditions are in white. Is there a way to see these and be able to report on them? The CCMS tree under the local Solution Manager also shows these connected remote instances, but they are also white in color, as if inactive. All CCMS agents and saposcol are running perfectly on the remote systems. The stats tables have been built, and the synonyms for the Solution Manager are in place. What am I missing?

    Hi,
    Please download the attachment "scripts_1139623.sar" provided in SAP Note 1139623 - Using transaction RZ20 to monitor remote Oracle databases.
    Upon extraction of scripts_1139623.sar you will get the following files:
    rz20_nonabap.sql
    dbcheckora10_oltp.sql
    dbcheckora.sql
    Then perform the following steps to resolve your issue.
    Step 1: Create tables DBCHECKORA and DBMSGORA:
      sqlplus SAPSR3DB/<pwd>
      SQL> @rz20_nonabap.sql
    Step 2: Fill the table DBCHECKORA:
      sqlplus SAPSR3DB/<pwd>
      SQL> @dbcheckora10_oltp   
    Step 3: Create the synonyms (as user ora<sid>):
    brconnect -u / -c -f crsyn -o SAPSR3DB
    sqlplus "/as sysdba"
    CREATE PUBLIC SYNONYM SAP_DBSTATC FOR SAPSR3DB.DBSTATC;
    CREATE PUBLIC SYNONYM SAP_DBCHECKORA FOR SAPSR3DB.DBCHECKORA;
    CREATE PUBLIC SYNONYM SAP_DBMSGORA FOR SAPSR3DB.DBMSGORA;
    Step 4: Grant Privileges for OPS$<SID>ADM User for the above created Tables
    sqlplus "/as sysdba"
    grant all on SAPSR3DB.DBSTATC to OPS$<SID>ADM;
    grant all on SAPSR3DB.DBCHECKORA to OPS$<SID>ADM;
    grant all on SAPSR3DB.DBMSGORA to OPS$<SID>ADM;
    grant all on SAPSR3DB.DBSTAIHORA to OPS$<SID>ADM;
    grant all on SAPSR3DB.DBSTATHORA to OPS$<SID>ADM;
    grant all on SAPSR3DB.DBSTATIORA to OPS$<SID>ADM;
    grant all on SAPSR3DB.DBSTATTORA to OPS$<SID>ADM;
    commit;
    Step 5:
    After successful execution of the above steps, go to DBACOCKPIT (system configuration) and try to save the remote DB entry with "Collect Alert Data". It should be saved without any error.
    If you find any errors, post the error contents as a reply to this thread.
    Regards,
    Bhavik G. Shroff

  • Loading Clob and Blob data using DBMS_LOB

    I am loading some data into a table that has five columns, two of which are defined as BLOB and CLOB respectively. I get the following errors after the PL/SQL procedure that loads it has completed running:
    ERROR: ORA-21560: argument 3 is null, invalid, or out of range
    ERROR: ORA-22297: warning: Open LOBs exist at transaction commit time
    The following is the outline of the code that loads the table:
    CREATE OR REPLACE PROCEDURE load_data(dir,seq_val,file_name,
    details, etc <== all these are passed in) IS
    dest_loc BLOB;
    src_loc BFILE;
    Amount INTEGER;
    new_dir string(1000);
    new_file string(1000);
    BEGIN
    new_dir := ''||dir||'';
    new_file := ''||file_name||'';
    src_loc := BFileName(new_dir,new_file);
    Amount := dbms_lob.getlength(src_loc);
    insert into table A
    (id
    ,ver
    ,ver
    ,fil_nm <== This field is a BLOB
    ,details <== This Field is a CLOB
    )
    values
    (seq_val
    ,1
    ,version
    ,empty_blob()
    ,detailed_infor
    );
    --dbms_output.put_line(Amount);
    SELECT fil_nm INTO dest_loc FROM table A WHERE id = seq_val FOR UPDATE;
    /* Opening the LOB is mandatory: */
    --dbms_output.put_line('IN SELECT...');
    DBMS_LOB.OPEN(src_loc, DBMS_LOB.LOB_READONLY);
    /* Opening the LOB is optional: */
    DBMS_LOB.OPEN(dest_loc, DBMS_LOB.LOB_READWRITE);
    DBMS_LOB.LOADFROMFILE(dest_loc, src_loc, Amount);
    /* Closing the LOB is mandatory if you have opened it: */
    DBMS_LOB.CLOSE(dest_loc);
    DBMS_LOB.CLOSE(src_loc);
    --dbms_output.put_line('After SELECT...');
    COMMIT;
    END;
    Any feedback would be really appreciated. Thanks.

    I assume that's when the ORA-21560: argument 3 is null, invalid, or out of range error occurs. I'm also wondering why, and what the other error means, saying LOBs are open during transaction commit time. The data is coming from an XML file that is in the following format.
    - <NAME>
    <FIL_NM>TEST.PDF</FIL_NM>
    <VER>2</VER>
    <DETAILS>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    xxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyzzzzzzzzzmmmmmmsusssuitttttretc</DETAILS>
    </REPORT>
    <NAME/>
    So what this procedure is doing is opening the pdf and loading the data into the BLOB. I just can't understand what is causing those errors.
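
    For comparison, here is a compact rewrite of the same load pattern (a sketch; the object and parameter names are illustrative, not the poster's real ones): the destination locator comes straight from RETURNING instead of a separate SELECT ... FOR UPDATE, the BFILE's length is read only after it has been opened, and the opened BFILE is closed again before COMMIT, which is what the ORA-22297 warning is about.

    CREATE OR REPLACE PROCEDURE load_doc (p_dir  IN VARCHAR2,
                                          p_file IN VARCHAR2,
                                          p_id   IN NUMBER) IS
      v_dest BLOB;
      v_src  BFILE;
    BEGIN
      v_src := BFILENAME(p_dir, p_file);

      -- Create the row with an empty LOB and grab a writable locator in one step.
      INSERT INTO doc_table (id, fil_nm)
      VALUES (p_id, EMPTY_BLOB())
      RETURNING fil_nm INTO v_dest;

      DBMS_LOB.OPEN(v_src, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADFROMFILE(v_dest, v_src, DBMS_LOB.GETLENGTH(v_src));
      DBMS_LOB.CLOSE(v_src);

      COMMIT;
    END load_doc;
    /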

  • ? read from clob and store each line in database

    Greetings,
    I would like to read, LINE BY LINE, the contents of a CLOB column (which stores the contents of a plain .txt file)
    and store each line into a table.
    Is that possible?

    pollywog wrote:
    with t as (select to_clob('fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    v
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa') x from dual)
    select
    text
    from t
    model return updated rows
    dimension by (0 d)
    measures (dbms_lob.substr( x, 4000, 1 ) text, 0 position_of_return )  -- the position of return is where the next carriage return is
    rules iterate(100) until (position_of_return[iteration_number+1] = 0)
    (
    position_of_return[iteration_number + 1] = instr(text[0],chr(10),1,iteration_number + 1),
    text[iteration_number + 1] = substr(text[0],
    position_of_return[iteration_number],
    position_of_return[iteration_number + 1] - position_of_return[iteration_number])
    )
    Hi;
    Thank you for your kind help. The query is very fast, but I would like to ask a question about it. My CLOB contains more than 4000 characters. Is it possible to change the 1 in dbms_lob.substr( x, 4000, 1 ) so that it starts again from where it left off?
    I did that by making a pipelined function and looping until I get to the end of the CLOB. But is there a faster way using just SQL?
    Best Regards
    Fatih
    FUNCTION get_clob_lines(cl_data CLOB) RETURN t_x_clob_table
    PIPELINED IS
    yrecords t_x_clob_record;
    CURSOR c_lines(n_start_position IN NUMBER) IS
    SELECT position_of_return
    ,text
    FROM (SELECT position_of_return
    ,text
    FROM dual model RETURN updated rows
    dimension BY(0 d)
    measures(dbms_lob.substr(cl_data, 4000, n_start_position) text, 0 position_of_return)
    rules iterate(4000) until (position_of_return [ iteration_number + 1 ] = 0)
    (position_of_return [ iteration_number + 1 ] = instr(text [ 0 ], chr(10), 1, iteration_number + 1),
    text [ iteration_number + 1 ] = substr(text [ 0 ],
    position_of_return [ iteration_number ] + 1, position_of_return [ iteration_number + 1 ] - (position_of_return [ iteration_number ] + 1))
    ) ccc
    WHERE ccc.position_of_return <> 0
    ORDER BY ccc.position_of_return;
    l_n_max_position NUMBER;
    l_n_start_position NUMBER;
    BEGIN
    l_n_start_position := 1;
    WHILE l_n_start_position < dbms_lob.getlength(cl_data) LOOP
    FOR r_lines IN c_lines(l_n_start_position) LOOP
    yrecords := t_x_clob_record(n_position => r_lines.position_of_return,
    v_text => r_lines.text);
    l_n_max_position := r_lines.position_of_return;
    PIPE ROW(yrecords);
    END LOOP;
    l_n_start_position := l_n_start_position + l_n_max_position;
    END LOOP;
    RETURN;
    END;
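
    If plain PL/SQL is acceptable, a simple alternative to the MODEL query is a loop over DBMS_LOB.INSTR/SUBSTR, which is not tied to 4000-character slices. A minimal sketch (table and column names are invented, and each line is assumed to fit in a VARCHAR2):

    DECLARE
      v_clob CLOB;
      v_len  PLS_INTEGER;
      v_pos  PLS_INTEGER := 1;
      v_nl   PLS_INTEGER;
      v_line VARCHAR2(32767);
    BEGIN
      SELECT file_body INTO v_clob FROM file_store WHERE file_id = 1;
      v_len := DBMS_LOB.GETLENGTH(v_clob);

      WHILE v_pos <= v_len LOOP
        v_nl := DBMS_LOB.INSTR(v_clob, CHR(10), v_pos);   -- next line break
        IF v_nl = 0 THEN
          v_nl := v_len + 1;                              -- last line without a trailing newline
        END IF;
        IF v_nl > v_pos THEN
          v_line := DBMS_LOB.SUBSTR(v_clob, v_nl - v_pos, v_pos);
        ELSE
          v_line := NULL;                                 -- empty line
        END IF;
        INSERT INTO file_lines (line_text) VALUES (v_line);
        v_pos := v_nl + 1;
      END LOOP;
      COMMIT;
    END;
    /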

  • Display BLOB from remote database

    Context: We are offloading documents (pdf) from our OLTP database to a dedicated 'output database.' These documents must be displayed from the application on our OLTP database using a database link. Various posts show that querying a BLOB from a remote database can best be implemented using a query on the remote table and an insert on a local (temporary) table. So far, so good. The idea is to display this BLOB using wpg_docload.download_file.
    BUT:
    When trying to display the pdf from this global temporary table an error occurs:
    ORA-14453: attempt to use a LOB of a temporary table, whose data has already been purged
    Trying to display from a normal table and issuing a rollback after wpg_docload.download_file results in another error:
    ORA-22922: nonexistent LOB value
    When trying to display from a normal table and not removing the record in any way, it works fine. Only now I have a garbage collection issue, because the remote data remains in my local (preferably temporary) table.
    It seems to me that mod_plsql needs an active session to display my pdf.
    Does anyone have an explanation for this behaviour and maybe a solution for my problem?
    Environment:
    local: 10.2.0.4.0
    remote: 11.1.0.7.0
    pdf size: ca. 150kB
    code used:
    PROCEDURE show_doc (p_nta_id IN NUMBER
    ,p_sessie IN NUMBER
    )
    IS
    t_lob BLOB;
    t_lob2 BLOB := empty_blob();
    t_mime VARCHAR2(100);
    BEGIN
    -- copy BLOB into local global temp table
    INSERT INTO mvs_tmp_notaprint_bestanden
    ( npv_nta_id
    , npv_npe_sessie
    , mime_type
    , bestand
    ) -- from remote table
    SELECT npd.npv_nta_id
    ,npd.npv_npe_sessie
    ,npd.mime_type
    ,npd.bestand
    FROM mvs_notaprint_bestanden@marc npd
    WHERE npd.npv_nta_id = p_nta_id
    AND npd.npv_npe_sessie = p_sessie;
    -- show BLOB from local global temp table
    SELECT t.bestand
    , t.mime_type
    INTO t_lob
    , t_mime
    FROM mvs_tmp_notaprint_bestanden t
    WHERE t.npv_nta_id = p_nta_id
    AND t.npv_npe_sessie = p_sessie;
    t_lob2 := t_lob; -- buffer BLOB
    owa_util.mime_header(t_mime , FALSE );
    owa_util.http_header_close;
    wpg_docload.download_file(t_lob2);
    END show_doc;

    Andrew,
    thank you, the 'preserve rows' did the trick.
    Every query from a browser (even in the same browser session) is a new Oracle session, so the copied records in the global temporary table are gone after the page is displayed.
    Am I correct in assuming that each call from the browser results in a new Oracle session? I did a few tests and found for each call a different sessionid.
    Sincerly,
    Arne Suverein
    Edited by: Arne Suverein on Aug 18, 2009 3:35 PM
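
    For anyone hitting the same ORA-14453, the "preserve rows" change is just in how the global temporary table is declared; the column names below come from the code above, while the datatypes are guessed:

    -- ON COMMIT PRESERVE ROWS keeps the rows (and their LOB data) for the life
    -- of the session instead of purging them at commit, so wpg_docload can
    -- still read the locator afterwards.
    CREATE GLOBAL TEMPORARY TABLE mvs_tmp_notaprint_bestanden (
      npv_nta_id     NUMBER,
      npv_npe_sessie NUMBER,
      mime_type      VARCHAR2(100),
      bestand        BLOB
    )
    ON COMMIT PRESERVE ROWS;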

  • Copying CLOB data from remote database to local database

    How can I copy CLOB data from a remote database table to a local database table?
    I have a database link created from my local database to the remote database, but it looks like I cannot select a CLOB locator or CLOB data residing on the remote database through this link.
    Is there any way to do this? Does anyone have readily available code for doing this? I need this very urgently; your help is greatly appreciated!
    thanks,
    SC.

    Is there a database local to your PC? If so, which DB version is it, and what is your server's database version?
    Simon

  • Mapping CLOB and Long in xml schema

    Hi,
    I am creating an xml schema to map some user defined database objects. For example, for a column which is defined as VARCHAR2 in the database, I have the following xsd type mapping.
    <xsd:element name="Currency" type="xsd:string" />
    If the Oracle column is CLOB or LONG (Oracle datatype), could you please tell me how I can map it in the xml schema? I do not want to use an Oracle SQL type like
    xdb:SQLType="CLOB" since I need a generic type mapping to CLOB. Would xsd:string still hold good for CLOB as well as LONG (Oracle datatype)?
    Please help.
    Thanks,
    Vadi.

  • Can't fetch clob and long in one select/query

    I created a nightmare table containing numerous binary data types to test an application I was working on, and believe I have found an undocumented bug in Oracle's JDBC drivers that is preventing me from loading a CLOB and a LONG in a single SQL select statement. I can load the CLOB successfully, but attempting to call ResultSet.get...() for the LONG column always results in
    java.sql.SQLException: Stream has already been closed
    even when processing the columns in the order of the SELECT statement.
    I have demonstrated this behaviour with version 9.2.0.3 of Oracle's JDBC drivers, running against Oracle 9.2.0.2.0.
    The following Java example contains SQL code to create and populate a table containing a collection of nasty binary columns, and then Java code that demonstrates the problem.
    I would really appreciate any workarounds that allow me to pull this data out of a single query.
    import java.sql.*;
    This class was developed to verify that you can't have a CLOB and a LONG column in the
    same SQL select statement, and extract both values. Calling get...() for the LONG column
    always causes 'java.sql.SQLException: Stream has already been closed'.
    CREATE TABLE BINARY_COLS_TEST
    (
    PK INTEGER PRIMARY KEY NOT NULL,
    CLOB_COL CLOB,
    BLOB_COL BLOB,
    RAW_COL RAW(100),
    LONG_COL LONG
    );
    INSERT INTO BINARY_COLS_TEST (
    PK,
    CLOB_COL,
    BLOB_COL,
    RAW_COL,
    LONG_COL
    ) VALUES (
    1,
    '-- clob value --',
    HEXTORAW('01020304050607'),
    HEXTORAW('01020304050607'),
    '-- long value --'
    );
    public class JdbcLongTest
    {
    public static void main(String argv[])
    throws Exception
    {
    Driver driver = (Driver)Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
    DriverManager.registerDriver(driver);
    Connection connection = DriverManager.getConnection(argv[0], argv[1], argv[2]);
    Statement stmt = connection.createStatement();
    ResultSet results = null;
    try
    {
    String query = "SELECT pk, clob_col, blob_col, raw_col, long_col FROM binary_cols_test";
    results = stmt.executeQuery(query);
    while (results.next())
    {
    int pk = results.getInt(1);
    System.out.println("Loaded int");
    Clob clob = results.getClob(2);
    // It doesn't work if you just close the ascii stream.
    // clob.getAsciiStream().close();
    String clobString = clob.getSubString(1, (int)clob.length());
    System.out.println("Loaded CLOB");
    // Streaming not strictly necessary for short values.
    // Blob blob = results.getBlob(3);
    byte blobData[] = results.getBytes(3);
    System.out.println("Loaded BLOB");
    byte rawData[] = results.getBytes(4);
    System.out.println("Loaded RAW");
    byte longData[] = results.getBytes(5);
    System.out.println("Loaded LONG");
    }
    }
    catch (SQLException e)
    {
    e.printStackTrace();
    }
    results.close();
    stmt.close();
    connection.close();
    }
    } // public class JdbcLongTest

    The problem is that LONGs are not buffered but are read from the wire in the order defined. The problem is the same as
    rs = stmt.executeQuery("select myLong, myNumber from tab");
    while (rs.next()) {
    int n = rs.getInt(2);
    String s = rs.getString(1);
    The above will fail for the same reason. When the statement is executed the LONG is not read immediately. It is buffered in the server waiting to be read. When getInt is called the driver reads the bytes of the LONG and throws them away so that it can get to the NUMBER and read it. Then when getString is called the LONG value is gone so you get an exception.
    Similar problem here. When the query is executed the CLOB and BLOB locators are read from the wire, but the LONG is buffered in the server waiting to be read. When Clob.getString is called, it has to talk to the server to get the value of the CLOB, so it reads the LONG bytes from the wire and throws them away. That clears the connection so that it can ask the server for the CLOB bytes. When the code reads the LONG value, those bytes are gone so you get an exception.
    This is a long standing restriction on using LONG and LONG RAW values and is a result of the network protocol. It is one of the reasons that Oracle deprecates LONGs and recommends using BLOBs and CLOBs instead.
    Douglas

  • Out of Order Primary Keys and Blob fields

    Hi,
    I am using Oracle Migration Workbench to transfer the data from SQL Server 2000 to Oracle 9i. I am running into the following issues and was wondering if you had any ideas about what may be going on:
    1.) Primary keys are out of order
    2.) Blob fields are inserted into the incorrect row
    Ex: a BLOB field stating 'Application active' should be associated with Act_key (primary key) = 5, but it ends up transferring over with act_key 28.
    Thank you for any information you may have on this.
    AK

    I am very interested in this thread because I have encountered a similar problem with BLOB columns, except that I am also seeing this with CLOBs.
    Essentially all the data gets 'migrated', but the CLOB and BLOB columns are all mixed up!
    Any solutions?

  • Maps Remote Database settings for E71

    I live in South Africa, and require the Maps Remote Database settings to sync my favourites with my phone and vice versa. Can anyone help, please?

    Remote Database settings are used to integrate ACS with an Oracle or SQL database to generate reports in your environment.
    If you are worried about the logs being generated, it would be best to set up the removal and backup configuration and point your ACS to either an FTP or NFS repository.
    Thanks,
    Tarik Admani
